its related products in new formats. Kim Davis, as Associate Managing Editor, has adeptly ensured that the complex production of this multi-authored textbook proceeded smoothly and efficiently. Dominik Pucek oversaw the production of the new procedural videos and Priscilla Beer expertly oversaw the production of our extensive DVD content. Jeffrey Herzich ably served as production manager for this new edition. We are privileged to have compiled this 19th edition and are enthusiastic about all that it offers our readers. We learned much in the process of editing Harrison’s and hope that you will find this edition a uniquely valuable educational resource. The Editors

PART 1: General Considerations in Clinical Medicine

Chapter 1 The Practice of Medicine

THE PHYSICIAN IN THE TWENTY-FIRST CENTURY No greater opportunity, responsibility, or obligation can fall to the lot of a human being than to become a physician. In the care of the suffering, [the physician] needs technical skill, scientific knowledge, and human understanding.… Tact, sympathy, and understanding are expected of the physician, for the patient is no mere collection of symptoms, signs, disordered functions, damaged organs, and disturbed emotions. [The patient] is human, fearful, and hopeful, seeking relief, help, and reassurance. —Harrison’s Principles of Internal Medicine, 1950

The practice of medicine has changed in significant ways since the first edition of this book appeared more than 60 years ago. The advent of molecular genetics, molecular and systems biology, and molecular pathophysiology; sophisticated new imaging techniques; and advances in bioinformatics and information technology have contributed to an explosion of scientific information that has fundamentally changed the way physicians define, diagnose, treat, and attempt to prevent disease. This growth of scientific knowledge is ongoing and accelerating.
The widespread use of electronic medical records and the Internet have altered the way doctors practice medicine and access and exchange information (Fig. 1-1). As today’s physicians strive to integrate copious amounts of scientific knowledge into everyday practice, it is critically important that they remember two things: first, that the ultimate goal of medicine is to prevent disease and treat patients; and second, that despite more than 60 years of scientific advances since the first edition of this text, cultivation of the intimate relationship between physician and patient still lies at the heart of successful patient care. Deductive reasoning and applied technology form the foundation for the solution to many clinical problems. Spectacular advances in biochemistry, cell biology, and genomics, coupled with newly developed imaging techniques, allow access to the innermost parts of the cell and provide a window into the most remote recesses of the body. Revelations about the nature of genes and single cells have opened a portal for formulating a new molecular basis for the physiology of systems. Increasingly, physicians are learning how subtle changes in many different genes can affect the function of cells and organisms. Researchers are deciphering the complex mechanisms by which genes are regulated. Clinicians have developed a new appreciation of the role of stem cells in normal tissue function; in the development of cancer, degenerative diseases, and other disorders; and in the treatment of certain diseases. Entirely new areas of research, including studies of the human microbiome, have become important in understanding both health and disease. The knowledge gleaned from the science of medicine continues to enhance physicians’ understanding of complex disease processes and provide new approaches to treatment and prevention. 
Yet skill in the most sophisticated applications of laboratory technology and in the use of the latest therapeutic modality alone does not make a good physician. When a patient poses challenging clinical problems, an effective physician must be able to identify the crucial elements in a complex history and physical examination; order the appropriate laboratory, imaging, and diagnostic tests; and extract the key results from densely populated computer screens to determine whether to treat or to “watch.” As the number of tests increases, so does the likelihood that some incidental finding, completely unrelated to the clinical problem at hand, will be uncovered. Deciding whether a clinical clue is worth pursuing or should be dismissed as a “red herring” and weighing whether a proposed test, preventive measure, or treatment entails a greater risk than the disease itself are essential judgments that a skilled clinician must make many times each day. This combination of medical knowledge, intuition, experience, and judgment defines the art of medicine, which is as necessary to the practice of medicine as is a sound scientific base. CLINICAL SKILLS History-Taking The written history of an illness should include all the facts of medical significance in the life of the patient. Recent events should be given the most attention. Patients should, at some early point, have the opportunity to tell their own story of the illness without frequent interruption and, when appropriate, should receive expressions of interest, encouragement, and empathy from the physician. Any event related by a patient, however trivial or seemingly irrelevant, may provide the key to solving the medical problem. In general, only patients who feel comfortable with the physician will offer complete information; thus putting the patient at ease to the greatest extent possible contributes substantially to obtaining an adequate history. An informative history is more than an orderly listing of symptoms. 
By listening to patients and noting the way in which they describe their symptoms, physicians can gain valuable insight. Inflections of voice, facial expression, gestures, and attitude (i.e., “body language”) may offer important clues to patients’ perception of their symptoms. Because patients vary in their medical sophistication and ability to recall facts, the reported medical history should be corroborated whenever possible. The social history also can provide important insights into the types of diseases that should be considered. The family history not only identifies rare Mendelian disorders within a family but often reveals risk factors for common disorders, such as coronary heart disease, hypertension, and asthma. A thorough family history may require input from multiple relatives to ensure completeness and accuracy; once recorded, it can be updated readily. The process of history-taking provides an opportunity to observe the patient’s behavior and to watch for features to be pursued more thoroughly during the physical examination. The very act of eliciting the history provides the physician with an opportunity to establish or enhance the unique bond that forms the basis for the ideal patient-physician relationship. This process helps the physician develop an appreciation of the patient’s view of the illness, the patient’s expectations of the physician and the health care system, and the financial and social implications of the illness for the patient. Although current health care settings may impose time constraints on patient visits, it is important not to rush the history-taking. A hurried approach may lead patients to believe that what they are relating is not of importance to the physician, and thus they may withhold relevant information. The confidentiality of the patient-physician relationship cannot be overemphasized. Physical Examination The purpose of the physical examination is to identify physical signs of disease. 
The significance of these objective indications of disease is enhanced when they confirm a functional or structural change already suggested by the patient’s history. At times, however, physical signs may be the only evidence of disease. The physical examination should be methodical and thorough, with consideration given to the patient’s comfort and modesty. Although attention is often directed by the history to the diseased organ or part of the body, the examination of a new patient must extend from head to toe in an objective search for abnormalities. Unless the physical examination is systematic and is performed consistently from patient to patient, important segments may be omitted inadvertently. The results of the examination, like the details of the history, should be recorded at the time they are elicited—not hours later, when they are subject to the distortions of memory. Skill in physical diagnosis is acquired with experience, but it is not merely technique that determines success in eliciting signs of disease. The detection of a few scattered petechiae, a faint diastolic murmur, or a small mass in the abdomen is not a question of keener eyes and ears or more sensitive fingers but of a mind alert to those findings. Because physical findings can change with time, the physical examination should be repeated as frequently as the clinical situation warrants.

FIGURE 1-1 Woodcuts from Johannes de Ketham’s Fasciculus Medicinae, the first illustrated medical text ever printed, show methods of information access and exchange in medical practice during the early Renaissance. Initially published in 1491 for use by medical students and practitioners, Fasciculus Medicinae appeared in six editions over the next 25 years. Left: Petrus de Montagnana, a well-known physician and teacher at the University of Padua and author of an anthology of instructive case studies, consults medical texts dating from antiquity up to the early Renaissance.
Right: A patient with plague is attended by a physician and his attendants. (Courtesy, U.S. National Library of Medicine.)

Given the many highly sensitive diagnostic tests now available (particularly imaging techniques), it may be tempting to place less emphasis on the physical examination. Indeed, many patients are seen by consultants after a series of diagnostic tests have been performed and the results are known. This fact should not deter the physician from performing a thorough physical examination since important clinical findings may have escaped detection by the barrage of prior diagnostic tests. The act of examining (touching) the patient also offers an opportunity for communication and may have reassuring effects that foster the patient-physician relationship. Diagnostic Studies Physicians rely increasingly on a wide array of laboratory tests to solve clinical problems. However, accumulated laboratory data do not relieve the physician from the responsibility of carefully observing, examining, and studying the patient. It is also essential to appreciate the limitations of diagnostic tests. By virtue of their impersonal quality, complexity, and apparent precision, they often gain an aura of certainty regardless of the fallibility of the tests themselves, the instruments used in the tests, and the individuals performing or interpreting the tests. Physicians must weigh the expense involved in laboratory procedures against the value of the information these procedures are likely to provide. Single laboratory tests are rarely ordered. Instead, physicians generally request “batteries” of multiple tests, which often prove useful. For example, abnormalities of hepatic function may provide the clue to nonspecific symptoms such as generalized weakness and increased fatigability, suggesting a diagnosis of chronic liver disease.
Sometimes a single abnormality, such as an elevated serum calcium level, points to a particular disease, such as hyperparathyroidism or an underlying malignancy. The thoughtful use of screening tests (e.g., measurement of low-density lipoprotein cholesterol) may be of great value. A group of laboratory values can conveniently be obtained with a single specimen at relatively low cost. Screening tests are most informative when they are directed toward common diseases or disorders and when their results indicate whether other useful—but often costly—tests or interventions are needed. On the one hand, biochemical measurements, together with simple laboratory determinations such as blood count, urinalysis, and erythrocyte sedimentation rate, often provide a major clue to the presence of a pathologic process. On the other hand, the physician must learn to evaluate occasional screening-test abnormalities that do not necessarily connote significant disease. An in-depth workup after the report of an isolated laboratory abnormality in a person who is otherwise well is almost invariably wasteful and unproductive. Because so many tests are performed routinely for screening purposes, it is not unusual for one or two values to be slightly abnormal. Nevertheless, even if there is no reason to suspect an underlying illness, tests yielding abnormal results ordinarily are repeated to rule out laboratory error. If an abnormality is confirmed, it is important to consider its potential significance in the context of the patient’s condition and other test results. The development of technically improved imaging studies with greater sensitivity and specificity proceeds apace. These tests provide remarkably detailed anatomic information that can be a pivotal factor in medical decision-making. Ultrasonography, a variety of isotopic scans, CT, MRI, and positron emission tomography have supplanted older, more invasive approaches and opened new diagnostic vistas. 
In light of their capabilities and the rapidity with which they can lead to a diagnosis, it is tempting to order a battery of imaging studies. All physicians have had experiences in which imaging studies revealed findings that led to an unexpected diagnosis. Nonetheless, patients must endure each of these tests, and the added cost of unnecessary testing is substantial. Furthermore, investigation of an unexpected abnormal finding may be associated with risk and/or expense and may lead to the diagnosis of an irrelevant or incidental problem. A skilled physician must learn to use these powerful diagnostic tools judiciously, always considering whether the results will alter management and benefit the patient. PRINCIPLES OF PATIENT CARE Evidence-Based Medicine Evidence-based medicine refers to the making of clinical decisions that are formally supported by data, preferably data derived from prospectively designed, randomized, controlled clinical trials. This approach is in sharp contrast to anecdotal experience, which is often biased. Unless they are attuned to the importance of using larger, more objective studies for making decisions, even the most experienced physicians can be influenced to an undue extent by recent encounters with selected patients. Evidence-based medicine has become an increasingly important part of routine medical practice and has led to the publication of many practice guidelines. Practice Guidelines Many professional organizations and government agencies have developed formal clinical-practice guidelines to aid physicians and other caregivers in making diagnostic and therapeutic decisions that are evidence-based, cost-effective, and most appropriate to a particular patient and clinical situation. As the evidence base of medicine increases, guidelines can provide a useful framework for managing patients with particular diagnoses or symptoms.
Clinical guidelines can protect patients—particularly those with inadequate health care benefits—from receiving substandard care. These guidelines also can protect conscientious caregivers from inappropriate charges of malpractice and society from the excessive costs associated with the overuse of medical resources. There are, however, caveats associated with clinical-practice guidelines since they tend to oversimplify the complexities of medicine. Furthermore, groups with different perspectives may develop divergent recommendations regarding issues as basic as the need for screening of women in their forties by mammography or of men over age 50 by serum prostate-specific antigen (PSA) assay. Finally, guidelines, as the term implies, do not—and cannot be expected to—account for the uniqueness of each individual and his or her illness. The physician’s challenge is to integrate into clinical practice the useful recommendations offered by experts without accepting them blindly or being inappropriately constrained by them. Medical Decision-Making Medical decision-making is an important responsibility of the physician and occurs at each stage of the diagnostic and therapeutic process. The decision-making process involves the ordering of additional tests, requests for consultations, and decisions about treatment and predictions concerning prognosis. This process requires an in-depth understanding of the pathophysiology and natural history of disease. As discussed above, medical decision-making should be evidence-based so that patients derive full benefit from the available scientific knowledge. Formulating a differential diagnosis requires not only a broad knowledge base but also the ability to assess the relative probabilities of various diseases. Application of the scientific method, including hypothesis formulation and data collection, is essential to the process of accepting or rejecting a particular diagnosis.
Analysis of the differential diagnosis is an iterative process. As new information or test results are acquired, the group of disease processes being considered can be contracted or expanded appropriately. Despite the importance of evidence-based medicine, much medical decision-making relies on good clinical judgment, an attribute that is difficult to quantify or even to assess qualitatively. Physicians must use their knowledge and experience as a basis for weighing known factors, along with the inevitable uncertainties, and then making a sound judgment; this synthesis of information is particularly important when a relevant evidence base is not available. Several quantitative tools may be invaluable in synthesizing the available information, including diagnostic tests, Bayes’ theorem, and multivariate statistical models. Diagnostic tests serve to reduce uncertainty about an individual’s diagnosis or prognosis and help the physician decide how best to manage that individual’s condition. The battery of diagnostic tests complements the history and the physical examination. The accuracy of a particular test is ascertained by determining its sensitivity (true-positive rate) and specificity (true-negative rate) as well as the predictive value of a positive and a negative result. Bayes’ theorem uses information on a test’s sensitivity and specificity, in conjunction with the pretest probability of a diagnosis, to determine mathematically the posttest probability of the diagnosis. More complex clinical problems can be approached with multivariate statistical models, which generate highly accurate information even when multiple factors are acting individually or together to affect disease risk, progression, or response to treatment. Studies comparing the performance of statistical models with that of expert clinicians have documented equivalent accuracy, although the models tend to be more consistent. 
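The Bayes’ theorem calculation described above can be sketched briefly. The helper function and the numbers below are purely illustrative (a hypothetical test with 90% sensitivity and 95% specificity, applied at a 10% pretest probability), not figures drawn from this text:

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Posttest probability of disease given a test result (Bayes' theorem).

    pretest: prior probability of disease, P(D)
    sensitivity: true-positive rate, P(+|D)
    specificity: true-negative rate, P(-|no D)
    """
    if positive:
        # P(D|+) = sens*p / (sens*p + (1 - spec)*(1 - p))
        num = sensitivity * pretest
        den = num + (1 - specificity) * (1 - pretest)
    else:
        # P(D|-) = (1 - sens)*p / ((1 - sens)*p + spec*(1 - p))
        num = (1 - sensitivity) * pretest
        den = num + specificity * (1 - pretest)
    return num / den

# Hypothetical example: 10% pretest probability, 90% sensitive, 95% specific.
p_pos = posttest_probability(0.10, 0.90, 0.95, positive=True)   # ~0.67
p_neg = posttest_probability(0.10, 0.90, 0.95, positive=False)  # ~0.01
print(f"after positive result: {p_pos:.2f}")
print(f"after negative result: {p_neg:.2f}")
```

The point of the sketch is the asymmetry it makes visible: even a quite accurate test, applied at a modest pretest probability, leaves real residual uncertainty after a positive result, which is why pretest probability matters as much as the test’s intrinsic accuracy.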
Thus, multivariate statistical models may be particularly helpful to less experienced clinicians. See Chap. 3 for a more thorough discussion of decision-making in clinical medicine. Electronic Medical Records Both the growing reliance on computers and the strength of information technology now play central roles in medicine. Laboratory data are accessed almost universally through computers. Many medical centers now have electronic medical records, computerized order entry, and bar-coded tracking of medications. Some of these systems are interactive, sending reminders or warning of potential medical errors. Electronic medical records offer rapid access to information that is invaluable in enhancing health care quality and patient safety, including relevant data, historical and clinical information, imaging studies, laboratory results, and medication records. These data can be used to monitor and reduce unnecessary variations in care and to provide real-time information about processes of care and clinical outcomes. Ideally, patient records are easily transferred across the health care system. However, technologic limitations and concerns about privacy and cost continue to limit broad-based use of electronic health records in many clinical settings. As valuable as it is, information technology is merely a tool and can never replace the clinical decisions that are best made by the physician. Clinical knowledge and an understanding of a patient’s needs, supplemented by quantitative tools, still represent the best approach to decision-making in the practice of medicine. Evaluation of Outcomes Clinicians generally use objective and readily measurable parameters to judge the outcome of a therapeutic intervention. These measures may oversimplify the complexity of a clinical condition as patients often present with a major clinical problem in the context of multiple complicating background illnesses.
For example, a patient may present with chest pain and cardiac ischemia, but with a background of chronic obstructive pulmonary disease and renal insufficiency. For this reason, outcome measures such as mortality, length of hospital stay, or readmission rates are typically risk-adjusted. An important point is that patients usually seek medical attention for subjective reasons; they wish to obtain relief from pain, to preserve or regain function, and to enjoy life. The components of a patient’s health status or quality of life can include bodily comfort, capacity for physical activity, personal and professional function, sexual function, cognitive function, and overall perception of health. Each of these important areas can be assessed through structured interviews or specially designed questionnaires. Such assessments provide useful parameters by which a physician can judge patients’ subjective views of their disabilities and responses to treatment, particularly in chronic illness. The practice of medicine requires consideration and integration of both objective and subjective outcomes. Women’s Health and Disease Although past epidemiologic studies and clinical trials have often focused predominantly on men, more recent studies have included more women, and some, like the Women’s Health Initiative, have exclusively addressed women’s health issues. Significant sex-based differences exist in diseases that afflict both men and women. Much is still to be learned in this arena, and ongoing studies should enhance physicians’ understanding of the mechanisms underlying these differences in the course and outcome of certain diseases. For a more complete discussion of women’s health, see Chap. 6e. Care of the Elderly The relative proportion of elderly individuals in the populations of developed nations has grown considerably over the past few decades and will continue to grow.
The practice of medicine is greatly influenced by the health care needs of this growing demographic group. The physician must understand and appreciate the decline in physiologic reserve associated with aging; the differences in appropriate doses, clearance, and responses to medications; the diminished responses of the elderly to vaccinations such as those against influenza; the different manifestations of common diseases among the elderly; and the disorders that occur commonly with aging, such as depression, dementia, frailty, urinary incontinence, and fractures. For a more complete discussion of medical care for the elderly, see Chap. 11 and Part 5, Chaps. 93e and 94e. Errors in the Delivery of Health Care A 1999 report from the Institute of Medicine called for an ambitious agenda to reduce medical error rates and improve patient safety by designing and implementing fundamental changes in health care systems. Adverse drug reactions occur in at least 5% of hospitalized patients, and the incidence increases with the use of a large number of drugs. Whatever the clinical situation, it is the physician’s responsibility to use powerful therapeutic measures wisely, with due regard for their beneficial actions, potential dangers, and cost. It is the responsibility of hospitals and health care organizations to develop systems to reduce risk and ensure patient safety. Medication errors can be reduced through the use of ordering systems that rely on electronic processes or, when electronic options are not available, that eliminate misreading of handwriting. Implementation of infection control systems, enforcement of hand-washing protocols, and careful oversight of antibiotic use can minimize the complications of nosocomial infections. Central-line infection rates have been dramatically reduced at many centers by careful adherence of trained personnel to standardized protocols for introducing and maintaining central lines.
Rates of surgical infection and wrong-site surgery can likewise be reduced by the use of standardized protocols and checklists. Falls by patients can be minimized by judicious use of sedatives and appropriate assistance with bed-to-chair and bed-to-bathroom transitions. Taken together, these and other measures are saving thousands of lives each year. The Physician’s Role in Informed Consent The fundamental principles of medical ethics require physicians to act in the patient’s best interest and to respect the patient’s autonomy. These requirements are particularly relevant to the issue of informed consent. Patients are required to sign a consent form for essentially any diagnostic or therapeutic procedure. Most patients possess only limited medical knowledge and must rely on their physicians for advice. Communicating in a clear and understandable manner, physicians must fully discuss the alternatives for care and explain the risks, benefits, and likely consequences of each alternative. In every case, the physician is responsible for ensuring that the patient thoroughly understands these risks and benefits; encouraging questions is an important part of this process. This is the very definition of informed consent. Full, clear explanation and discussion of the proposed procedures and treatment can greatly mitigate the fear of the unknown that commonly accompanies hospitalization. Excellent communication can also help alleviate misunderstandings in situations where complications of intervention occur. Often the patient’s understanding is enhanced by repeatedly discussing the issues in an unthreatening and supportive way, answering new questions that occur to the patient as they arise. Special care should be taken to ensure that a physician seeking a patient’s informed consent has no real or apparent conflict of interest involving personal gain. 
The Approach to Grave Prognoses and Death No circumstance is more distressing than the diagnosis of an incurable disease, particularly when premature death is inevitable. What should the patient and family be told? What measures should be taken to maintain life? What can be done to maintain the quality of life? Honesty is absolutely essential in the face of a terminal illness. The patient must be given an opportunity to talk with the physician and ask questions. A wise and insightful physician uses such open communication as the basis for assessing what the patient wants to know and when he or she wants to know it. On the basis of the patient’s responses, the physician can assess the right tempo for sharing information. Ultimately, the patient must understand the expected course of the disease so that appropriate plans and preparations can be made. The patient should participate in decision-making with an understanding of the goal of treatment (palliation) and its likely effects. The patient’s religious beliefs must be taken into consideration. Some patients may find it easier to share their feelings about death with their physician, who is likely to be more objective and less emotional, than with family members. The physician should provide or arrange for emotional, physical, and spiritual support and must be compassionate, unhurried, and open. In many instances, there is much to be gained by the laying on of hands. Pain should be controlled adequately, human dignity maintained, and isolation from family and close friends avoided. These aspects of care tend to be overlooked in hospitals, where the intrusion of life-sustaining equipment can detract from attention to the whole person and encourage concentration instead on the life-threatening disease, against which the battle ultimately will be lost in any case. In the face of terminal illness, the goal of medicine must shift from cure to care in the broadest sense of the term.
Primum succurrere, first hasten to help, is a guiding principle. In offering care to a dying patient, a physician must be prepared to provide information to family members and deal with their grief and sometimes their feelings of guilt or even anger. It is important for the doctor to assure the family that everything reasonable has been done. A substantial problem in these discussions is that the physician often does not know how to gauge the prognosis. In addition, various members of the health care team may offer different opinions. Good communication among providers is essential so that consistent information is provided to patients. This is especially important when the best path forward is uncertain. Advice from experts in palliative and terminal care should be sought whenever necessary to ensure that clinicians are not providing patients with unrealistic expectations. For a more complete discussion of end-of-life care, see Chap. 10. The significance of the intimate personal relationship between physician and patient cannot be too strongly emphasized, for in an extraordinarily large number of cases both the diagnosis and treatment are directly dependent on it. One of the essential qualities of the clinician is interest in humanity, for the secret of the care of the patient is in caring for the patient. —Francis W. Peabody, October 21, 1925, Lecture at Harvard Medical School Physicians must never forget that patients are individual human beings with problems that all too often transcend their physical complaints. They are not “cases” or “admissions” or “diseases.” Patients do not fail treatments; treatments fail to benefit patients. This point is particularly important in this era of high technology in clinical medicine. Most patients are anxious and fearful. Physicians should instill confidence and offer reassurance but must never come across as arrogant or patronizing.
A professional attitude, coupled with warmth and openness, can do much to alleviate anxiety and to encourage patients to share all aspects of their medical history. Empathy and compassion are the essential features of a caring physician. The physician needs to consider the setting in which an illness occurs—in terms not only of patients themselves but also of their familial, social, and cultural backgrounds. The ideal patient-physician relationship is based on thorough knowledge of the patient, mutual trust, and the ability to communicate. The Dichotomy of Inpatient and Outpatient Internal Medicine The hospital environment has changed dramatically over the last few decades. Emergency departments and critical care units have evolved to identify and manage critically ill patients, allowing them to survive formerly fatal diseases. At the same time, there is increasing pressure to reduce the length of stay in the hospital and to manage complex disorders in the outpatient setting. This transition has been driven not only by efforts to reduce costs but also by the availability of new outpatient technologies, such as imaging and percutaneous infusion catheters for long-term antibiotics or nutrition, minimally invasive surgical procedures, and evidence that outcomes often are improved by minimizing inpatient hospitalization. In these circumstances, two important issues arise as physicians cope with the complexities of providing care for hospitalized patients. On the one hand, highly specialized health professionals are essential to the provision of optimal acute care in the hospital; on the other, these professionals—with their diverse training, skills, responsibilities, experiences, languages, and “cultures”—need to work as a team. In addition to traditional medical beds, hospitals now encompass multiple distinct levels of care, such as the emergency department, procedure rooms, overnight observation units, critical care units, and palliative care units.
A consequence of this differentiation has been the emergence of new trends, including specialties (e.g., emergency medicine and end-of-life care) and the provision of in-hospital care by hospitalists and intensivists. Most hospitalists are board-certified internists who bear primary responsibility for the care of hospitalized patients and whose work is limited entirely to the hospital setting. The shortened length of hospital stay that is now standard means that most patients receive only acute care while hospitalized; the increased complexities of inpatient medicine make the presence of a generalist with specific training, skills, and experience in the hospital environment extremely beneficial. Intensivists are board-certified physicians who are further certified in critical care medicine and who direct and provide care for very ill patients in critical care units. Clearly, then, an important challenge in internal medicine today is to ensure the continuity of communication and information flow between a patient’s primary care doctor and these physicians who are in charge of the patient’s hospital care. Maintaining these channels of communication is frequently complicated by patient “handoffs”—i.e., from the outpatient to the inpatient environment, from the critical care unit to a general medicine floor, and from the hospital to the outpatient environment. The involvement of many care providers in conjunction with these transitions can threaten the traditional one-to-one relationship between patient and primary care physician. Of course, patients can benefit greatly from effective collaboration among a number of health care professionals; however, it is the duty of the patient’s principal or primary physician to provide cohesive guidance through an illness. To meet this challenge, primary care physicians must be familiar with the techniques, skills, and objectives of specialist physicians and allied health professionals who care for their patients in the hospital. 
In addition, primary care doctors must ensure that their patients will benefit from scientific advances and from the expertise of specialists when they are needed both in and out of the hospital. Primary care physicians can also explain the role of these specialists to reassure patients that they are in the hands of the physicians best trained to manage an acute illness. However, the primary care physician should retain ultimate responsibility for making major decisions about diagnosis and treatment and should assure patients and their families that decisions are being made in consultation with these specialists by a physician who has an overall and complete perspective on the case. A key factor in mitigating the problems associated with multiple care providers is a commitment to interprofessional teamwork. Despite the diversity in training, skills, and responsibilities among health care professionals, common values need to be reinforced if patient care is not to be adversely affected. This component of effective medical care is widely recognized, and several medical schools have integrated interprofessional teamwork into their curricula. The evolving concept of the “medical home” incorporates team-based primary care with linked subspecialty care in a cohesive environment that ensures smooth transitions of care cost-effectively. Appreciation of the Patient’s Hospital Experience The hospital is an intimidating environment for most individuals. Hospitalized patients find themselves surrounded by air jets, buttons, and glaring lights; invaded by tubes and wires; and beset by the numerous members of the health care team—hospitalists, specialists, nurses, nurses’ aides, physicians’ assistants, social workers, technologists, physical therapists, medical students, house officers, attending and consulting physicians, and many others. 
They may be transported to special laboratories and imaging facilities replete with blinking lights, strange sounds, and unfamiliar personnel; they may be left unattended at times; and they may be obligated to share a room with other patients who have their own health problems. It is little wonder that a patient’s sense of reality may be compromised. Physicians who appreciate the hospital experience from the patient’s perspective and who make an effort to develop a strong relationship within which they can guide the patient through this experience may make a stressful situation more tolerable. Trends in the Delivery of Health Care: A Challenge to the Humane Physician Many trends in the delivery of health care tend to make medical care impersonal. These trends, some of which have been mentioned already, include (1) vigorous efforts to reduce the escalating costs of health care; (2) the growing number of managed-care programs, which are intended to reduce costs but in which the patient may have little choice in selecting a physician or in seeing that physician consistently; (3) increasing reliance on technological advances and computerization for many aspects of diagnosis and treatment; and (4) the need for numerous physicians to be involved in the care of most patients who are seriously ill. In light of these changes in the medical care system, it is a major challenge for physicians to maintain the humane aspects of medical care. The American Board of Internal Medicine, working together with the American College of Physicians–American Society of Internal Medicine and the European Federation of Internal Medicine, has published a Charter on Medical Professionalism that underscores three main principles in physicians’ contract with society: (1) the primacy of patient welfare, (2) patient autonomy, and (3) social justice. 
While medical schools appropriately place substantial emphasis on professionalism, a physician’s personal attributes, including integrity, respect, and compassion, also are extremely important. Availability to the patient, expression of sincere concern, willingness to take the time to explain all aspects of the illness, and a nonjudgmental attitude when dealing with patients whose cultures, lifestyles, attitudes, and values differ from those of the physician are just a few of the characteristics of a humane physician. Every physician will, at times, be challenged by patients who evoke strongly negative or positive emotional responses. Physicians should be alert to their own reactions to such patients and situations and should consciously monitor and control their behavior so that the patient’s best interest remains the principal motivation for their actions at all times. An important aspect of patient care involves an appreciation of the patient’s “quality of life,” a subjective assessment of what each patient values most. This assessment requires detailed, sometimes intimate knowledge of the patient, which usually can be obtained only through deliberate, unhurried, and often repeated conversations. Time pressures will always threaten these interactions, but they should not diminish the importance of understanding and seeking to fulfill the priorities of the patient. EXPANDING FRONTIERS IN MEDICAL PRACTICE The Era of “Omics”: Genomics, Epigenomics, Proteomics, Microbiomics, Metagenomics, Metabolomics, Exposomics . . . In the spring of 2003, announcement of the complete sequencing of the human genome officially ushered in the genomic era. However, even before that landmark accomplishment, the practice of medicine had been evolving as a result of the insights into both the human genome and the genomes of a wide variety of microbes. 
The clinical implications of these insights are illustrated by the complete genome sequencing of H1N1 influenza virus in 2009 and the rapid identification of H1N1 influenza as a potentially fatal pandemic illness, with swift development and dissemination of an effective protective vaccine. Today, gene expression profiles are being used to guide therapy and inform prognosis for a number of diseases, the use of genotyping is providing a new means to assess the risk of certain diseases as well as variations in response to a number of drugs, and physicians are better understanding the role of certain genes in the causality of common conditions such as obesity and allergies. Despite these advances, the use of complex genomics in the diagnosis, prevention, and treatment of disease is still in its early stages. The task of physicians is complicated by the fact that phenotypes generally are determined not by genes alone but by the interplay of genetic and environmental factors. Indeed, researchers have just begun to scratch the surface of the potential applications of genomics in the practice of medicine. Rapid progress also is being made in other areas of molecular medicine. Epigenomics is the study of alterations in chromatin and histone proteins and methylation of DNA sequences that influence gene expression. Every cell of the body has identical DNA sequences; the diverse phenotypes a person’s cells manifest are the result of epigenetic regulation of gene expression. Epigenetic alterations are associated with a number of cancers and other diseases. Proteomics, the study of the entire library of proteins made in a cell or organ and its complex relationship to disease, is enhancing the repertoire of the 23,000 genes in the human genome through alternative splicing, posttranslational processing, and posttranslational modifications that often have unique functional consequences. 
The presence or absence of particular proteins in the circulation or in cells is being explored for diagnostic and disease-screening applications. Microbiomics is the study of the resident microbes in humans and other mammals. The human haploid genome has ~20,000 genes, while the microbes residing on and in the human body comprise over 3–4 million genes; the contributions of these resident microbes are likely to be of great significance with regard to health status. In fact, research is demonstrating that the microbes inhabiting human mucosal and skin surfaces play a critical role in maturation of the immune system, in metabolic balance, and in disease susceptibility. A variety of environmental factors, including the use and overuse of antibiotics, have been tied experimentally to substantial increases in disorders such as obesity, metabolic syndrome, atherosclerosis, and immune-mediated diseases in both adults and children. Metagenomics, of which microbiomics is a part, is the genomic study of environmental species that have the potential to influence human biology directly or indirectly. An example is the study of exposures to microorganisms in farm environments that may be responsible for the lower incidence of asthma among children raised on farms. Metabolomics is the study of the range of metabolites in cells or organs and the ways they are altered in disease states. The aging process itself may leave telltale metabolic footprints that allow the prediction (and possibly the prevention) of organ dysfunction and disease. It seems likely that disease-associated patterns will be sought in lipids, carbohydrates, membranes, mitochondria, and other vital components of cells and tissues. Finally, exposomics refers to efforts to catalogue and capture environmental exposures such as smoking, sunlight, diet, exercise, education, and violence, which together have an enormous impact on health. 
All of this new information represents a challenge to the traditional reductionist approach to medical thinking. The variability of results in different patients, together with the large number of variables that can be assessed, creates difficulties in identifying preclinical disease and defining disease states unequivocally. Accordingly, the tools of systems biology and network medicine are being applied to the enormous body of information now obtainable for every patient and may eventually provide new approaches to classifying disease. For a more complete discussion of a complex systems approach to human disease, see Chap. 87e. The rapidity of these advances may seem overwhelming to practicing physicians. However, physicians have an important role to play in ensuring that these powerful technologies and sources of new information are applied with sensitivity and intelligence to the patient. Since “omics” are evolving so rapidly, physicians and other health care professionals must continue to educate themselves so that they can apply this new knowledge to the benefit of their patients’ health and well-being. Genetic testing requires wise counsel based on an understanding of the value and limitations of the tests as well as the implications of their results for specific individuals. For a more complete discussion of genetic testing, see Chap. 84. The Globalization of Medicine Physicians should be cognizant of diseases and health care services beyond local boundaries. Global travel has implications for disease spread, and it is not uncommon for diseases endemic to certain regions to be seen in other regions after a patient has traveled to and returned from those regions. In addition, factors such as wars, the migration of refugees, and climate change are contributing to changing disease profiles worldwide. 
Patients have broader access to unique expertise or clinical trials at distant medical centers, and the cost of travel may be offset by the quality of care at those distant locations. As much as any other factor influencing global aspects of medicine, the Internet has transformed the transfer of medical information throughout the world. This change has been accompanied by the transfer of technological skills through telemedicine and international consultation—for example, regarding radiologic images and pathologic specimens. For a complete discussion of global issues, see Chap. 2. Medicine on the Internet On the whole, the Internet has had a very positive effect on the practice of medicine; through personal computers, a wide range of information is available to physicians and patients almost instantaneously at any time and from anywhere in the world. This medium holds enormous potential for the delivery of current information, practice guidelines, state-of-the-art conferences, journal content, textbooks (including this text), and direct communications with other physicians and specialists, expanding the depth and breadth of information available to the physician regarding the diagnosis and care of patients. Medical journals are now accessible online, providing rapid sources of new information. By bringing them into direct and timely contact with the latest developments in medical care, this medium also serves to lessen the information gap that has hampered physicians and health care providers in remote areas. Patients, too, are turning to the Internet in increasing numbers to acquire information about their illnesses and therapies and to join Internet-based support groups. Patients often arrive at a clinic visit with sophisticated information about their illnesses. 
In this regard, physicians are challenged in a positive way to keep abreast of the latest relevant information while serving as an “editor” as patients navigate this seemingly endless source of information, the accuracy and validity of which are not uniform. A critically important caveat is that virtually anything can be published on the Internet, with easy circumvention of the peer-review process that is an essential feature of academic publications. Both physicians and patients who search the Internet for medical information must be aware of this danger. Notwithstanding this limitation, appropriate use of the Internet is revolutionizing information access for physicians and patients and in this regard represents a remarkable resource that was not available to practitioners a generation ago. Public Expectations and Accountability The general public’s level of knowledge and sophistication regarding health issues has grown rapidly over the last few decades. As a result, expectations of the health care system in general and of physicians in particular have risen. Physicians are expected to master rapidly advancing fields (the science of medicine) while considering their patients’ unique needs (the art of medicine). Thus, physicians are held accountable not only for the technical aspects of the care they provide but also for their patients’ satisfaction with the delivery and costs of care. In many parts of the world, physicians increasingly are expected to account for the way in which they practice medicine by meeting certain standards prescribed by federal and local governments. The hospitalization of patients whose health care costs are reimbursed by the government and other third parties is subjected to utilization review. Thus, a physician must defend the cause for and duration of a patient’s hospitalization if it falls outside certain “average” standards. 
Authorization for reimbursement increasingly is based on documentation of the nature and complexity of an illness, as reflected by recorded elements of the history and physical examination. A growing “pay-for-performance” movement seeks to link reimbursement to quality of care. The goal of this movement is to improve standards of health care and contain spiraling health care costs. In many parts of the United States, managed (capitated) care contracts with insurers have replaced traditional fee-for-service care, placing the onus of managing the cost of all care directly on the providers and increasing the emphasis on preventive strategies. In addition, physicians are expected to give evidence of their current competence through mandatory continuing education, patient record audits, maintenance of certification, and relicensing. Medical Ethics and New Technologies The rapid pace of technological advance has profound implications for medical applications that go far beyond the traditional goals of disease prevention, treatment, and cure. Cloning, genetic engineering, gene therapy, human–computer interfaces, nanotechnology, and use of designer drugs have the potential to modify inherited predispositions to disease, select desired characteristics in embryos, augment “normal” human performance, replace failing tissues, and substantially prolong life span. Given their unique training, physicians have a responsibility to help shape the debate on the appropriate uses of and limits placed on these new techniques and to consider carefully the ethical issues associated with the implementation of such interventions. The Physician as Perpetual Student From the time doctors graduate from medical school, it becomes all too apparent that their lot is that of the “perpetual student” and that the mosaic of their knowledge and experiences is eternally unfinished. This realization is at the same time exhilarating and anxiety-provoking. 
It is exhilarating because doctors can apply constantly expanding knowledge to the treatment of their patients; it is anxiety-provoking because doctors realize that they will never know as much as they want or need to know. Ideally, doctors will translate the latter feeling into energy through which they can continue to improve themselves and reach their potential as physicians. It is the physician’s responsibility to pursue new knowledge continually by reading, attending conferences and courses, and consulting colleagues and the Internet. This is often a difficult task for a busy practitioner; however, a commitment to continued learning is an integral part of being a physician and must be given the highest priority. The Physician as Citizen Being a physician is a privilege. The capacity to apply one’s skills for the benefit of one’s fellow human beings is a noble calling. The doctor–patient relationship is inherently unbalanced in the distribution of power. In light of their influence, physicians must always be aware of the potential impact of what they do and say and must always strive to strip away individual biases and preferences to find what is best for the patient. To the extent possible, physicians should also act within their communities to promote health and alleviate suffering. Meeting these goals begins by setting a healthy example and continues in taking action to deliver needed care even when personal financial compensation may not be available. A goal for medicine and its practitioners is to strive to provide the means by which the poor can cease to be unwell. Learning Medicine It has been a century since the publication of the Flexner Report, a seminal study that transformed medical education and emphasized the scientific foundations of medicine as well as the acquisition of clinical skills. 
In an era of burgeoning information and access to medical simulation and informatics, many schools are implementing new curricula that emphasize lifelong learning and the acquisition of competencies in teamwork, communication skills, system-based practice, and professionalism. These and other features of the medical school curriculum provide the foundation for many of the themes highlighted in this chapter and are expected to allow physicians to progress, with experience and learning over time, from competency to proficiency to mastery. At a time when the amount of information that must be mastered to practice medicine continues to expand, increasing pressures both within and outside of medicine have led to the implementation of restrictions on the amount of time a physician-in-training can spend in the hospital. Because the benefits associated with continuity of medical care and observation of a patient’s progress over time were thought to be outstripped by the stresses imposed on trainees by long hours and by the fatigue-related errors they made in caring for patients, strict limits were set on the number of patients that trainees could be responsible for at one time, the number of new patients they could evaluate in a day on call, and the number of hours they could spend in the hospital. In 1980, residents in medicine worked in the hospital more than 90 hours per week on average. In 1989, their hours were restricted to no more than 80 per week. Resident physicians’ hours further decreased by ~10% between 1996 and 2008, and in 2010 the Accreditation Council for Graduate Medical Education further restricted (i.e., to 16 hours per shift) consecutive in-hospital duty hours for first-year residents. The impact of these changes is still being assessed, but the evidence that medical errors have decreased as a consequence is sparse. 
An unavoidable by-product of fewer hours at work is an increase in the number of “handoffs” of patient responsibility from one physician to another. These transfers often involve a transition from a physician who knows the patient well, having evaluated that individual on admission, to a physician who knows the patient less well. It is imperative that these transitions of responsibility be handled with care and thoroughness, with all relevant information exchanged and acknowledged. Research, Teaching, and the Practice of Medicine The word doctor is derived from the Latin docere, “to teach.” As teachers, physicians should share information and medical knowledge with colleagues, students of medicine and related professions, and their patients. The practice of medicine is dependent on the sum total of medical knowledge, which in turn is based on an unending chain of scientific discovery, clinical observation, analysis, and interpretation. Advances in medicine depend on the acquisition of new information through research, and improved medical care requires the transmission of that information. As part of their broader societal responsibilities, physicians should encourage patients to participate in ethical and properly approved clinical investigations if these studies do not impose undue hazard, discomfort, or inconvenience. However, physicians engaged in clinical research must be alert to potential conflicts of interest between their research goals and their obligations to individual patients. The best interests of the patient must always take priority. To wrest from nature the secrets which have perplexed philosophers in all ages, to track to their sources the causes of disease, to correlate the vast stores of knowledge, that they may be quickly available for the prevention and cure of disease—these are our ambitions. —William Osler, 1849–1919 Paul Farmer, Joseph Rhatigan WHY GLOBAL HEALTH? Global health is not a discipline; it is, rather, a collection of problems. 
Some scholars have defined global health as the field of study and practice concerned with improving the health of all people and achieving health equity worldwide, with an emphasis on addressing transnational problems. No single review can do much more than identify the leading problems in applying evidence-based medicine in settings of great poverty or across national boundaries. However, this is a moment of opportunity: only recently, persistent epidemics, improved metrics, and growing interest have been matched by an unprecedented investment in addressing the health problems of poor people in the developing world. To ensure that this opportunity is not wasted, the facts need to be laid out for specialists and laypeople alike. This chapter introduces the major international bodies that address health problems; identifies the more significant barriers to improving the health of people who to date have not, by and large, had access to modern medicine; and summarizes population-based data on the most common health problems faced by people living in poverty. Examining specific problems—notably HIV/AIDS (Chap. 226) but also tuberculosis (TB, Chap. 202), malaria (Chap. 248), and key “noncommunicable” chronic diseases (NCDs)—helps sharpen the discussion of barriers to prevention, diagnosis, and care as well as the means of overcoming them. This chapter closes by discussing global health equity, drawing on notions of social justice that once were central to international public health but had fallen out of favor during the last decades of the twentieth century. Concern about health across national boundaries dates back many centuries, predating the Black Plague and other pandemics. One of the first organizations founded explicitly to tackle cross-border health issues was the Pan American Sanitary Bureau, which was formed in 1902 by 11 countries in the Americas. 
The primary goal of what later became the Pan American Health Organization was the control of infectious diseases across the Americas. Of special concern was yellow fever, which had been running a deadly course through much of South and Central America and halted the construction of the Panama Canal. In 1948, the United Nations formed the first truly global health institution: the World Health Organization (WHO). In 1958, under the aegis of the WHO and in line with a long-standing focus on communicable diseases that cross borders, leaders in global health initiated the effort that led to what some see as the greatest success in international health: the eradication of smallpox. Naysayers were surprised when the smallpox eradication campaign, which engaged public health officials throughout the world, proved successful in 1979 despite the ongoing Cold War. At the International Conference on Primary Health Care in Alma-Ata (in what is now Kazakhstan) in 1978, public health officials from around the world agreed on a commitment to “Health for All by the Year 2000,” a goal to be achieved by providing universal access to primary health care worldwide. Critics argued that the attainment of this goal by the proposed date was impossible. In the ensuing years, a strategy for the provision of selective primary health care emerged that included four inexpensive interventions collectively known as GOBI: growth monitoring, oral rehydration, breast-feeding, and immunizations for diphtheria, whooping cough, tetanus, polio, TB, and measles. GOBI later was expanded to GOBI-FFF, which also included female education, food, and family planning. Some public health figures saw GOBI-FFF as an interim strategy to achieve “health for all,” but others criticized it as a retreat from the bolder commitments of Alma-Ata. The influence of the WHO waned during the 1980s. 
In the early 1990s, many observers argued that, with its vastly superior financial resources and its close—if unequal—relationships with the governments of poor countries, the World Bank had eclipsed the WHO as the most important multilateral institution working in the area of health. One of the stated goals of the World Bank was to help poor countries identify “cost-effective” interventions worthy of public funding and international support. At the same time, the World Bank encouraged many of those nations to reduce public expenditures in health and education in order to stimulate economic growth as part of (later discredited) structural adjustment programs whose restrictions were imposed as a condition for access to credit and assistance through international financial institutions such as the World Bank and the International Monetary Fund. There was a resurgence of many diseases, including malaria, trypanosomiasis, and schistosomiasis, in Africa. TB, an eminently curable disease, remained the world’s leading infectious killer of adults. Half a million women per year died in childbirth during the last decade of the twentieth century, and few of the world’s largest philanthropic or funding institutions focused on global health equity. HIV/AIDS, first described in 1981, precipitated a change. In the United States, the advent of this newly described infectious killer marked the culmination of a series of events that discredited talk of “closing the book” on infectious diseases. In Africa, which would emerge as the global epicenter of the pandemic, HIV disease strained TB control programs, and malaria continued to take as many lives as ever. At the dawn of the twenty-first century, these three diseases alone killed nearly 6 million people each year. New research, new policies, and new funding mechanisms were called for. 
The past decade has seen the rise of important multilateral global health financing institutions such as the Global Fund to Fight AIDS, Tuberculosis, and Malaria; bilateral efforts such as the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR); and private philanthropic organizations such as the Bill & Melinda Gates Foundation. With its 193 member states and 147 country offices, the WHO remains important in matters relating to the cross-border spread of infectious diseases and other health threats. In the aftermath of the epidemic of severe acute respiratory syndrome in 2003, the WHO’s International Health Regulations—which provide a legal foundation for that organization’s direct investigation into a wide range of global health problems, including pandemic influenza, in any member state—were strengthened and brought into force in May 2007. Even as attention to and resources for health problems in poor countries grow, the lack of coherence in and among global health institutions may undermine efforts to forge a more comprehensive and effective response. The WHO remains underfunded despite the ever-growing need to engage a wider and more complex range of health issues. In another instance of the paradoxical impact of success, the rapid growth of the Gates Foundation, which is one of the most important developments in the history of global health, has led some foundations to question the wisdom of continuing to invest their more modest resources in this field. This indeed may be what some have called “the golden age of global health,” but leaders of major organizations such as the WHO, the Global Fund, the United Nations Children’s Fund (UNICEF), the Joint United Nations Programme on HIV/AIDS (UNAIDS), PEPFAR, and the Gates Foundation must work together to design an effective architecture that will make the most of opportunities to link new resources for and commitments to global health equity with the emerging understanding of disease burden and unmet need. 
To this end, new and old players in global health must invest heavily in discovery (relevant basic science), development of new tools (preventive, diagnostic, and therapeutic), and modes of delivery that will ensure the equitable provision of health products and services to all who need them. Political and economic concerns have often guided global health interventions. As mentioned, early efforts to control yellow fever were tied to the completion of the Panama Canal. However, the precise nature of the link between economics and health remains a matter for debate. Some economists and demographers argue that improving the health status of populations must begin with economic development; others maintain that addressing ill health is the starting point for development in poor countries. In either case, investment in health care, especially the control of communicable diseases, should lead to increased productivity. The question is where to find the necessary resources to start the predicted “virtuous cycle.” During the past two decades, spending on health in poor countries has increased dramatically. According to a study from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington, total development assistance for health worldwide grew to $28.2 billion in 2010—up from $5.6 billion in 1990. In 2010, the leading contributors included U.S. bilateral agencies such as PEPFAR, the Global Fund, nongovernmental organizations (NGOs), the WHO, the World Bank, and the Gates Foundation. It appears, however, that total development assistance for health plateaued in 2010, and it is unclear whether growth will continue in the upcoming decade. To reach the United Nations Millennium Development Goals, which include targets for poverty reduction, universal primary education, and gender equality, spending in the health sector must be increased above the 2010 levels. 
To determine by how much and for how long, it is imperative to improve the ability to assess the global burden of disease and to plan interventions that more precisely match need. Refining metrics is an important task for global health: only recently have there been solid assessments of the global burden of disease. The first study to look seriously at this issue, conducted in 1990, laid the foundation for the first report on Disease Control Priorities in Developing Countries and for the World Bank’s 1993 World Development Report Investing in Health. Those efforts represented a major advance in the understanding of health status in developing countries. Investing in Health has been especially influential: it familiarized a broad audience with cost-effectiveness analysis for specific health interventions and with the notion of disability-adjusted life years (DALYs). The DALY, which has become a standard measure of the impact of a specific health condition on a population, combines absolute years of life lost and years lost due to disability for incident cases of a condition. (See Fig. 2-1 and Table 2-1 for an analysis of the global disease burden by DALYs.) In 2012, the IHME and partner institutions began publishing results from the Global Burden of Diseases, Injuries, and Risk Factors Study 2010 (GBD 2010). GBD 2010 is the most comprehensive effort to date to produce longitudinal, globally complete, and comparable estimates of the burden of diseases, injuries, and risk factors. This report reflects the expansion of the available data on health in the poorest countries and of the capacity to quantify the impact of specific conditions on a population. It measures current levels and recent trends in all major diseases, injuries, and risk factors among 21 regions and for 20 age groups and both sexes. 
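The DALY arithmetic described above can be sketched in a few lines of code. This is an illustrative simplification with hypothetical figures: the reference life expectancy, case counts, and disability weight below are assumptions, and the GBD study's actual estimation involves far more elaborate modeling.

```python
# Illustrative DALY calculation (hypothetical numbers, simplified model).
# DALYs = years of life lost to premature death (YLL)
#       + years lost due to disability (YLD).

REFERENCE_LIFE_EXPECTANCY = 86.0  # assumed standard life table value

def years_of_life_lost(deaths, mean_age_at_death):
    """YLL: each death contributes the remaining standard life expectancy."""
    return deaths * max(REFERENCE_LIFE_EXPECTANCY - mean_age_at_death, 0.0)

def years_lived_with_disability(incident_cases, disability_weight, mean_duration):
    """YLD: incident cases weighted by condition severity and duration."""
    return incident_cases * disability_weight * mean_duration

def dalys(deaths, mean_age_at_death, incident_cases, disability_weight, mean_duration):
    return (years_of_life_lost(deaths, mean_age_at_death)
            + years_lived_with_disability(incident_cases, disability_weight, mean_duration))

# Hypothetical condition in one population for one year:
# 1,000 deaths at a mean age of 60, plus 10,000 nonfatal incident cases
# with disability weight 0.2 lasting an average of 5 years.
burden = dalys(1000, 60.0, 10000, 0.2, 5.0)
# YLL = 1,000 x 26 = 26,000; YLD = 10,000 x 0.2 x 5 = 10,000; total 36,000 DALYs
```

On this simplified accounting, a condition that kills the young or causes long-lasting disability can dominate the burden ranking even when its death count is modest, which is why largely nonfatal conditions such as low back pain appear among the leading DALY causes.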
The GBD 2010 team revised and improved the health-state severity weight system, collated published data, and used household surveys to enhance the breadth and accuracy of disease burden data. As analytic methods and data quality improve, important trends can be identified in a comparison of global disease burden estimates from 1990 to 2010. Of the 52.8 million deaths worldwide in 2010, 24.6% (13 million) were due to communicable diseases, maternal and perinatal conditions, and nutritional deficiencies—a marked decrease compared with figures for 1990, when these conditions accounted for 34% of global mortality. Among the fraction of all deaths related to communicable diseases, maternal and perinatal conditions, and nutritional deficiencies, 76% occurred in sub-Saharan Africa and southern Asia. While the proportion of deaths due to these conditions has decreased significantly in the past decade, there has been a dramatic rise in the number of deaths from NCDs, which constituted the top five causes of death in 2010. The leading cause of death among adults in 2010 was ischemic heart disease, accounting for 7.3 million deaths (13.8% of total deaths) worldwide. In high-income countries ischemic heart disease accounted for 17.9% of total deaths, and in developing (low- and middle-income) countries it accounted for 10.1%. It is noteworthy that ischemic heart disease was responsible for just 2.6% of total deaths in sub-Saharan Africa (Table 2-2). In second place—causing 11.1% of global mortality—was cerebrovascular disease, which accounted for 9.9% of deaths in high-income countries, 10.5% in developing countries, and 4.0% in sub-Saharan Africa. Although the third leading cause of death in high-income countries was lung cancer (accounting for 5.6% of all deaths), this condition did not figure among the top 10 causes in low- and middle-income countries. 
Among the 10 leading causes of death in sub-Saharan Africa, 6 were infectious diseases, with malaria and HIV/AIDS ranking as the dominant contributors to disease burden. In high-income countries, however, only one infectious disease—lower respiratory infection—ranked among the top 10 causes of death. The GBD 2010 found that the worldwide mortality figure among children <5 years of age dropped from 16.39 million in 1970 to 11.9 million in 1990 and to 6.8 million in 2010—a decrease that surpassed predictions. Of childhood deaths in 2010, 3.1 million (40%) occurred in the neonatal period. About one-third of deaths among children <5 years old occurred in southern Asia and almost one-half in sub-Saharan Africa; <1% occurred in high-income countries. The global burden of death due to HIV/AIDS and malaria was on an upward slope until 2004; significant improvements have since been documented. Global deaths from HIV infection fell from 1.7 million in 2006 to 1.5 million in 2010, while malaria deaths dropped from 1.2 million to 0.98 million over the same period. Despite these improvements, malaria and HIV/AIDS continue to be major burdens in particular regions, with global implications. Although it has a minor impact on mortality outside sub-Saharan Africa and Southeast Asia, malaria is the eleventh leading cause of death worldwide. HIV infection ranked thirty-third in global DALYs in 1990 but was the fifth leading cause of disease burden in 2010, with sub-Saharan Africa bearing the vast majority of this burden (Fig. 2-1). The world’s population is living longer: global life expectancy has increased significantly over the past 40 years from 58.8 years in 1970 to 70.4 years in 2010. This demographic change, accompanied by the fact that the prevalence of NCDs increases with age, is dramatically shifting the burden of disease toward NCDs, which have surpassed communicable, maternal, nutritional, and neonatal causes. 
By 2010, 65.5% of total deaths at all ages and 54% of all DALYs were due to NCDs. Increasingly, the global burden of disease comprises conditions and injuries that cause disability rather than death. Worldwide, although both life expectancy and years of life lived in good health have risen, years of life lived with disability have also increased. Despite the higher prevalence of diseases common in older populations (e.g., dementia and musculoskeletal disease) in developed and high-income countries, best estimates from 2010 reveal that disability resulting from cardiovascular diseases, chronic respiratory diseases, and the long-term impact of communicable diseases was greater in low- and middle-income countries. In most developing countries, people lived shorter lives and experienced disability and poor health for a greater proportion of their lives. Indeed, 50% of the global burden of disease occurred in southern Asia and sub-Saharan Africa, which together account for only 35% of the world’s population. Clear disparities in burden of disease (both communicable and noncommunicable) across country income levels are strong indicators that poverty and health are inherently linked. Poverty remains one of the most important root causes of poor health worldwide, and the global burden of poverty continues to be high. Among the 6.7 billion people alive in 2008, 19% (1.29 billion) lived on less than $1.25 a day—one standard measurement of extreme poverty—and another 1.18 billion lived on $1.25 to $2 a day. Approximately 600 million children—more than 30% of those in low-income countries—lived in extreme poverty in 2005. Comparison of national health indicators with gross domestic product per capita among nations shows a clear relationship between higher gross domestic product and better health, with only a few outliers. Numerous studies have also documented the link between poverty and health within nations as well as across them. 
The GBD 2010 study found that the three leading risk factors for global disease burden in 2010 were (in order of frequency) high blood pressure, tobacco smoking (including secondhand smoke), and alcohol use—a substantial change from 1990, when childhood undernutrition was ranked first. Though ranking eighth in 2010, childhood undernutrition remains the leading risk factor for death worldwide among children <5 years of age. In an era that has seen obesity become a major health concern in many developed countries—and the sixth leading risk factor worldwide—the persistence of undernutrition is surely cause for great consternation. Low body weight is still the dominant risk factor for disease burden in sub-Saharan Africa. Inability to feed the hungry reflects many years of failed development projects and must be addressed as a problem of the highest priority. Indeed, no health care initiative, however generously funded, will be effective without adequate nutrition. In a 2006 publication that examined how specific diseases and injuries are affected by environmental risk, the WHO estimated that roughly one-quarter of the total global burden of disease, one-third of the global disease burden among children, and 23% of all deaths were due to modifiable environmental factors. Many of these factors lead to deaths from infectious diseases; others lead to deaths from malignancies. Etiology and nosology are increasingly difficult to parse. 

Figure 2-1 Global DALY (disability-adjusted life year) ranks for the top causes of disease burden in 1990 and 2010. COPD, chronic obstructive pulmonary disease. (Reproduced with permission from C Murray et al: Disability-adjusted life years [DALYs] for 291 diseases and injuries in 21 regions, 1990–2010: A systematic analysis for the Global Burden of Disease Study 2010. Lancet 380:2197–2223, 2012.)
As much as 94% of diarrheal disease, which is linked to unsafe drinking water and poor sanitation, can be attributed to environmental factors. Risk factors such as indoor air pollution due to use of solid fuels, exposure to secondhand tobacco smoke, and outdoor air pollution account for 20% of lower respiratory infections in developed countries and for as many as 42% of such infections in developing countries. Various forms of unintentional injury and malaria top the list of health problems to which environmental factors contribute. Some 4 million children die every year from causes related to unhealthy environments, and the number of infant deaths due to environmental factors in developing countries is 12 times that in developed countries. The second edition of Disease Control Priorities in Developing Countries, published in 2006, is a document of great breadth and ambition, providing cost-effectiveness analyses for more than 100 interventions and including 21 chapters focused on strategies for strengthening health systems. Cost-effectiveness analyses that compare relatively equivalent interventions and facilitate the best choices under constraint are necessary; however, these analyses are often based on an incomplete knowledge of cost and evolving evidence of effectiveness. As both resources and objectives for global health grow, cost-effectiveness analyses (particularly those based on older evidence) must not hobble the increased worldwide commitment to providing resources and accessible health care services to all who need them. This is why we use the term global health equity. 
Table 2-1 Leading Causes of Disease Burden, 2010

Worldwide
  Rank  Disease or Injury                 DALYs (Millions)  Percent of Total DALYs
  1     Ischemic heart disease            129.8             5.2
  2     Lower respiratory infections      115.2             4.7
  3     Cerebrovascular disease           102.2             4.1
  4     Diarrheal diseases                89.5              3.6
  5     HIV/AIDS                          81.5              3.3
  6     Malaria                           82.7              3.3
  7     Low back pain                     80.7              3.2
  8     Preterm birth complications       77.0              3.1
  9     COPD                              76.8              3.1
  10    Road injury                       75.5              3.1

Developing Countries (a)
  1     Lower respiratory infections      109.0             5.2
  2     Diarrheal diseases                88.0              4.2
  3     Ischemic heart disease            85.5              4.1
  4     Malaria                           82.7              3.9
  5     Cerebrovascular disease           79.4              3.8
  6     HIV/AIDS                          77.0              3.7
  7     Preterm birth complications       74.4              3.5
  8     Road injury                       66.2              3.2
  9     COPD                              65.6              3.1
  10    Low back pain                     58.4              2.8

High-Income Countries (b)
  1     Ischemic heart disease            21.8              8.2
  2     Low back pain                     17.0              6.4
  3     Cerebrovascular disease           11.3              4.2
  4     Major depressive disorder         9.7               3.7
  5     Lung cancer                       9.2               3.5
  6     COPD                              8.6               3.2
  7     Other musculoskeletal disorders   8.2               3.1
  8     Diabetes mellitus                 7.3               2.8
  9     Neck pain                         7.2               2.7
  10    Falls                             6.8               2.5

Sub-Saharan Africa
  1     Malaria                           76.6              13.3
  2     HIV/AIDS                          57.8              10.1
  3     Lower respiratory infections      43.5              7.6
  4     Diarrheal diseases                39.2              6.8
  5     Protein-energy malnutrition       22.3              3.9
  6     Preterm birth complications       20.0              3.5
  7     Neonatal sepsis                   18.9              3.3
  8     Meningitis                        16.3              2.8
  9     Neonatal encephalopathy           14.9              2.6
  10    Road injury                       13.9              2.5

(a) The term developing countries refers to low- and middle-income economies. See data.worldbank.org/about/country-classifications.
(b) The World Bank classifies high-income countries as those whose gross national income per capita is $12,476 or more. See data.worldbank.org/about/country-classifications.
Abbreviations: COPD, chronic obstructive pulmonary disease; DALYs, disability-adjusted life years.
Source: Institute for Health Metrics and Evaluation, University of Washington (2013). Data are available through www.healthmetricsandevaluation.org/gbd/visualizations/country.

Table 2-2 Leading Causes of Death Worldwide, 2010

Worldwide
  Rank  Disease or Injury                             Deaths (Millions)  Percent of Total Deaths
  1     Ischemic heart disease                        7.3                13.3
  2     Cerebrovascular disease                       5.9                11.1
  3     COPD                                          2.9                5.5
  4     Lower respiratory infections                  2.8                5.3
  5     Lung cancer                                   1.5                2.9
  6     HIV/AIDS                                      1.5                2.8
  7     Diarrheal diseases                            1.4                2.7
  8     Road injury                                   1.3                2.5
  9     Diabetes                                      1.3                2.4
  10    Tuberculosis                                  1.2                2.3

Developing Countries (a)
  1     Cerebrovascular disease                       4.2                10.5
  2     Ischemic heart disease                        4.0                10.1
  3     COPD                                          2.4                6.1
  4     Lower respiratory infections                  2.3                5.9
  5     Diarrheal diseases                            1.4                3.6
  6     HIV/AIDS                                      1.4                3.4
  7     Malaria                                       1.2                2.9
  8     Road injury                                   1.2                2.9
  9     Tuberculosis                                  1.1                2.9
  10    Diabetes                                      1.0                2.6

High-Income Countries (b)
  1     Ischemic heart disease                        1.6                17.9
  2     Cerebrovascular disease                       0.9                9.9
  3     Lung cancer                                   0.5                5.6
  4     Lower respiratory infections                  0.4                4.7
  5     COPD                                          0.4                4.5
  6     Alzheimer’s and other dementias               0.4                4.0
  7     Colon and rectum cancers                      0.3                3.3
  8     Diabetes                                      0.2                2.6
  9     Other cardiovascular and circulatory diseases 0.2                2.5
  10    Chronic kidney disease                        0.2                2.0

Sub-Saharan Africa
  1     Malaria                                       1.1                12.7
  2     HIV/AIDS                                      1.0                12.0
  3     Lower respiratory infections                  0.8                9.3
  4     Diarrheal diseases                            0.5                6.6
  5     Cerebrovascular disease                       0.3                4.0
  6     Protein-energy malnutrition                   0.3                4.0
  7     Tuberculosis                                  0.3                3.6
  8     Road injury                                   0.2                2.8
  9     Preterm birth complications                   0.2                2.8
  10    Meningitis                                    0.2                2.6

(a) The term developing countries refers to low- and middle-income economies. See data.worldbank.org/about/country-classifications.
(b) The World Bank classifies high-income countries as those whose gross national income per capita is $12,476 or more. See data.worldbank.org/about/country-classifications.
Abbreviation: COPD, chronic obstructive pulmonary disease.
Source: Institute for Health Metrics and Evaluation, University of Washington (2013). Data available through www.healthmetricsandevaluation.org/gbd/visualizations/country.

To illustrate these points, it is instructive to look to HIV/AIDS, which in the course of the last three decades has become the world’s leading infectious cause of adult death. 
Chapter 226 provides an overview of the HIV epidemic in the world today. Here the discussion will be limited to HIV/AIDS in the developing world. Lessons learned from tackling HIV/AIDS in resource-constrained settings are highly relevant to discussions of other chronic diseases, including NCDs, for which effective therapies have been developed. Approximately 34 million people in all countries worldwide were living with HIV infection in 2011; more than 8 million of those in low- and middle-income countries were receiving antiretroviral therapy (ART)—a number representing a 20-fold increase over the corresponding figure for 2003. By the end of 2011, 54% of people eligible for treatment were receiving ART. (It remains to be seen how many of these people are receiving ART regularly and with the requisite social support.) In the United States, the availability of ART has transformed HIV/AIDS from an inescapably fatal destruction of cell-mediated immunity into a manageable chronic illness. In high-income countries, improved ART has prolonged life by an estimated average of 35 years per patient—up from 6.8 years in 1993 and 24 years in 2006. This success rate exceeds that obtained with almost any treatment for adulthood cancer or for complications of coronary artery disease. In developing countries, treatment has been offered broadly only since 2003, and only in 2009 did the number of patients receiving treatment exceed 40% of the number who needed it. Before 2003, many arguments were raised to justify not moving forward rapidly with ART programs for people living with HIV/AIDS in resource-limited settings. The standard litany included the price of therapy compared with the poverty of the patient, the complexity of the intervention, the lack of infrastructure for laboratory monitoring, and the lack of trained health care providers. Narrow cost-effectiveness arguments that created false dichotomies—prevention or treatment rather than both—too often went unchallenged. 
As a cumulative result of these delays in the face of health disparities, old and new, there were millions of premature deaths. Disparities in access to HIV treatment gave rise to widespread moral indignation and a new type of health activism. In several middle-income countries, including Brazil, public programs have helped bridge the access gap. Other innovative projects pioneered by international NGOs in diverse settings such as Haiti and Rwanda have established that a simple approach to ART that is based on intensive community engagement and support can achieve remarkable results (Fig. 2-2). During the past decade, the availability of ART has increased sharply in the low- and middle-income countries that have borne the greatest burden of the HIV/AIDS pandemic. In 2000, very few people living with HIV/AIDS in these nations had access to ART, whereas by 2011, as stated above, 8 million people, a majority of those deemed eligible, in these countries were receiving ART. This scale-up was made possible by a number of developments: a staggering drop in the cost of ART, the development of a standardized approach to treatment, substantial investments by funders, and the political commitment of governments to make ART available. Civil-society AIDS activists spurred many of these efforts. Starting in the early 2000s, a combination of factors, including work by the Clinton Foundation HIV/AIDS Initiative and Médecins Sans Frontières, led to the availability of generic ART medications. While first-line ART cost more than $10,000 per patient per year in 2000, first-line regimens in low- and middle-income countries are now available for less than $100 per year. At the same time, fixed-dose combination drugs that are easier to administer have become more widely available. Also around this time, the WHO began advocating a public health approach to the treatment of people with AIDS in resource-limited settings. 
This approach, derived from models of care pioneered by the NGO Partners In Health and other groups, proposed standard first-line treatment regimens based on a simple five-drug formulary, with a more complex (and more expensive) set of second-line options in reserve. Clinical protocols were standardized, and intensive training packages for health professionals and community health workers were developed and implemented in many countries. These efforts were supported by new funding from the World Bank, the Global Fund, and PEPFAR. In 2003, lack of access to ART was declared a global public health emergency by the WHO and UNAIDS, and those two agencies launched the “3 by 5 initiative,” setting an ambitious target: to have 3 million people in developing countries on treatment by the end of 2005. Worldwide funding for HIV/AIDS treatment increased dramatically during this period, rising from $300 million in 1996 to over $15 billion in 2010. Many countries set corresponding national targets and have worked to integrate ART into their national AIDS programs and health systems and to harness the synergies between HIV/AIDS treatment and prevention activities. Further lessons with implications for policy and action have come from efforts now under way among lower-income countries. Rwanda provides an example: Over the past decade, mortality from HIV disease has fallen by >78% as the country—despite its relatively low gross national income (Fig. 2-3)—has provided almost universal access to ART. The reasons for this success include strong national leadership, evidence-based policy, cross-sector collaboration, community-based care, and a deliberate focus on a health system approach that embeds HIV/AIDS treatment and prevention in the primary health care service delivery platform. As we will discuss later in this chapter, these principles can be applied to other conditions, including NCDs. Chapter 202 provides a concise overview of the pathophysiology and treatment of TB. 
In 2011, an estimated 12 million people were living with active TB, and 1.4 million died from it. The disease is closely linked to HIV infection in much of the world: of the 8.7 million estimated new cases of TB in 2011, 1.2 million occurred among people living with HIV. Indeed, a substantial proportion of the TB resurgence registered in southern Africa is attributed to HIV co-infection. Even before the advent of HIV, however, it was estimated that fewer than one-half of all cases of TB in developing countries were ever diagnosed, much less treated. Primarily because of the common failure to diagnose and treat TB, international authorities devised a single strategy to reduce the burden of disease. In the early 1990s, the World Bank, the WHO, and other international bodies promoted the DOTS strategy (directly observed therapy using short-course isoniazid- and rifampin-based regimens) as highly cost-effective. Passive case-finding of smear-positive patients was central to the strategy, and an uninterrupted drug supply was, of course, deemed necessary for cure. DOTS was clearly effective for most uncomplicated cases of drug-susceptible TB, but a number of shortcomings were soon identified. 

Figure 2-2 An HIV/TB-co-infected patient in Rwanda before (left) and after (right) 6 months of treatment.

Figure 2-3 Antiretroviral therapy (ART) coverage in sub-Saharan Africa, 2009. (Estimated ART coverage plotted against gross domestic product per capita, 2009, on a log scale, with the United Nations 2010 target indicated; country labels omitted.)
First, the diagnosis of TB based solely on sputum smear microscopy— a method dating from the late nineteenth century—is not sensitive. Many cases of pulmonary TB and all cases of exclusively extrapulmonary TB are missed by smear microscopy, as are most cases of active disease in children. Second, passive case-finding relies on the availability of health care services, which is uneven in the settings where TB is most prevalent. Third, patients with multidrug-resistant TB (MDR-TB) are by definition infected with strains of Mycobacterium tuberculosis resistant to isoniazid and rifampin; thus exclusive reliance on these drugs is unwarranted in settings in which drug resistance is an established problem. The crisis of antibiotic resistance registered in U.S. hospitals is not confined to the industrialized world or to common bacterial infections. The great majority of patients sick with and dying from TB are afflicted with strains susceptible to all first-line drugs. In some settings, however, a substantial minority of patients with TB are infected with M. tuberculosis strains resistant to at least one first-line anti-TB drug. A 2012 article in a leading journal reported that, in China, 10% of all patients with TB and 26% of all previously treated patients were sick with MDR strains of M. tuberculosis. Most of these cases were the result of primary transmission. To improve DOTS-based responses to MDR-TB, global health authorities adopted DOTS-Plus, which adds the diagnostics and drugs necessary to manage drug-resistant disease. Even as DOTS-Plus was being piloted in resource-constrained settings, however, new strains of extensively drug-resistant (XDR) M. tuberculosis (resistant to isoniazid and rifampin, any fluoroquinolone, and at least one injectable second-line drug) had already threatened the success of TB control programs in beleaguered South Africa, for example, where high rates of HIV infection have led to a doubling of TB incidence over the last decade. 
Despite the poor capacity for detection of MDR- and XDR-TB in most resource-limited settings, an estimated 630,000 cases of MDR-TB were thought to occur in 2011. Approximately 9% of these drug-resistant cases were caused by XDR strains. It is clear that poor infection control in hospitals and clinics is associated with explosive and lethal epidemics due to these strains and that patients may be infected with multiple strains. 

TUBERCULOSIS AND AIDS AS CHRONIC DISEASES: LESSONS LEARNED

Strategies effective against MDR-TB have implications for the management of drug-resistant HIV infection and even drug-resistant malaria, which, through repeated infections and a lack of effective therapy, has become a chronic disease in parts of Africa (see “Malaria,” below). As new therapies, whether for TB or for hepatitis C infection, become available, many of the problems encountered in the past will recur. Indeed, examining AIDS and TB as chronic diseases—instead of simply communicable diseases—makes it possible to draw a number of conclusions, many of them pertinent to global health in general. First, the chronic infections discussed here are best treated with multidrug regimens to which the infecting strains are susceptible. This is true of chronic infections due to many bacteria, fungi, parasites, or viruses; even acute infections such as those caused by Plasmodium species are not reliably treated with a single drug. Second, charging fees for AIDS prevention and care poses insurmountable problems for people living in poverty, many of whom are unable to pay even modest amounts for services or medications. Like efforts to battle airborne TB, such services might best be seen as a public good promoting public health. Initially, a subsidy approach will require sustained donor contributions, but many African countries have set targets for increased national investments in health—a pledge that could render ambitious programs sustainable in the long run, as the Rwanda experience suggests. Meanwhile, as local investments increase, the price of AIDS care is decreasing. 
The development of generic medications means that ART can now cost <$0.25 per day; costs continue to decrease. Third, the effective scale-up of pilot projects requires strengthening and sometimes rebuilding of health care systems, including those charged with delivering primary care. In the past, the lack of health care infrastructure has been cited as a barrier to providing ART in the world’s poorest regions; however, AIDS resources, which are at last considerable, may be marshaled to rebuild public health systems in sub-Saharan Africa and other HIV-burdened regions—precisely the settings in which TB is resurgent. Fourth, the lack of trained health care personnel, most notably doctors and nurses, in resource-poor settings must be addressed. This personnel deficiency is invoked as a reason for the failure to treat AIDS in poor countries. In what is termed the brain drain, many physicians and nurses emigrate from their home countries to pursue opportunities abroad, leaving behind health systems that are understaffed and ill equipped to deal with the epidemic diseases that ravage local populations. The WHO recommends a minimum of 20 physicians and 100 nurses per 100,000 persons, but recent reports from that organization and others confirm that many countries, especially in sub-Saharan Africa, fall far short of those target numbers. Specifically, more than one-half of those countries register fewer than 10 physicians per 100,000 population. In contrast, the United States and Cuba register 279 and 596 doctors per 100,000 population, respectively. Similarly, the majority of sub-Saharan African countries do not have even half of the WHO-recommended minimum number of nurses. Further inequalities in health care staffing exist within countries. Rural–urban disparities in health care personnel mirror disparities of both wealth and health. 
For instance, nearly 90% of Malawi’s population lives in rural areas, but more than 95% of clinical officers work at urban facilities, and 47% of nurses work at tertiary care facilities. Even community health workers trained to provide first-line services to rural populations often transfer to urban districts. One reason doctors and nurses leave sub-Saharan Africa and other resource-poor areas is that they lack the tools to practice their trade there. Funding for “vertical” (disease-specific) programs can be used not only to strengthen health systems but also to recruit and train physicians and nurses in underserved regions, where they, in turn, can help to train and then work with community health workers in supervising care for patients with AIDS and many other diseases within their communities. Such training should be undertaken even where physicians are abundant, since close community-based supervision represents the highest standard of care for chronic disease, whether in developing or developed countries. The United States has much to learn from Rwanda. Fifth, the barriers to adequate health care and patient adherence that are raised by extreme poverty can be removed only with the deployment of “wrap-around services”: food supplements for the hungry, help with transportation to clinics, child care, and housing. Extreme poverty makes it difficult for many patients to comply with therapy for chronic diseases, whether communicable or not. Indeed, poverty in its many dimensions is far and away the greatest barrier to the scale-up of treatment and prevention programs. In many rural regions of Africa, hunger is the major coexisting condition in patients with AIDS or TB, and those consumptive diseases cannot be treated effectively without adequate caloric intake. Finally, there is a need for a renewed basic-science commitment to the discovery and development of vaccines; more reliable, less expensive diagnostic tools; and new classes of therapeutic agents. 
This need applies not only to the three leading infectious killers—against none of which is there an effective vaccine—but also to most other neglected diseases of poverty. Chapter 248 reviews the etiology, pathogenesis, and clinical treatment of malaria, the world’s third-ranking infectious killer. Malaria’s human cost is enormous, with the highest toll among children—especially African children—living in poverty. In 2010, there were ~219 million cases of malaria, and the disease is thought to have killed 660,000 people; 86% of these deaths (~568,000) occurred among children <5 years old. The poor disproportionately experience the burden of malaria: more than 80% of estimated malaria deaths occur in just 14 countries, and mortality rates are highest in sub-Saharan Africa. The Democratic Republic of the Congo and Nigeria account for more than 40% of total estimated malaria deaths globally. Microeconomic analyses focusing on direct and indirect costs estimate that malaria may consume >10% of a household’s annual income. A study in rural Kenya shows that mean direct-cost burdens vary between the wet and dry seasons (7.1% and 5.9% of total household expenditure, respectively) and that this proportion is >10% in the poorest households in both seasons. A Ghanaian study that categorized the population by income group highlighted the regressive nature of this cost: responding to malaria consumed only 1% of a wealthy family’s income but 34% of a poor household’s income. Macroeconomic analyses estimate that malaria may reduce the per capita gross national product of a disease-endemic country by 50% relative to that of a non-malaria-endemic country. The causes of this drag include impaired cognitive development of children, decreased schooling, decreased savings, decreased foreign investment, and restriction of worker mobility. 
In light of this enormous cost, it is little wonder that an important review by the economists Sachs and Malaney concludes that "where malaria prospers most, human societies have prospered least." Rolling Back Malaria In part because of differences in vector distribution and climate, resource-rich countries offer few blueprints for malaria control and treatment that are applicable in tropical (and resource-poor) settings. In 2001, African heads of state endorsed the WHO Roll Back Malaria (RBM) campaign, which prescribes strategies appropriate for sub-Saharan African countries. In 2008, the RBM partnership launched the Global Malaria Action Plan (GMAP). This strategy integrates prevention and care and calls for an avoidance of single-dose regimens and an awareness of existing drug resistance. The GMAP recommends a number of key tools to reduce malaria-related morbidity and mortality rates: the use of insecticide-treated bed nets (ITNs), indoor residual spraying, and artemisinin-based combination therapy (ACT) as well as intermittent preventive treatment during pregnancy, prompt diagnosis, and other vector control measures such as larviciding and environmental management. Insecticide-Treated Bed Nets ITNs are an efficacious and cost-effective public health intervention. A meta-analysis of controlled trials in seven sub-Saharan African countries indicates that parasitemia prevalence is reduced by 24% among children <5 years of age who sleep under ITNs compared with that among those who do not. Even untreated nets reduce malaria incidence by one-quarter. On an individual level, the utility of ITNs extends beyond protection from malaria. Several studies suggest that ITNs reduce all-cause mortality among children under age 5 to a greater degree than can be attributed to the reduction in malarial disease alone. 
Morbidity (specifically that due to anemia), which predisposes children to diarrheal and respiratory illnesses and pregnant women to the delivery of low-birth-weight infants, also is reduced in populations using ITNs. In some areas, ITNs offer a supplemental benefit by preventing transmission of lymphatic filariasis, cutaneous leishmaniasis, Chagas' disease, and tick-borne relapsing fever. At the community level, investigators suggest that the use of an ITN in just one household may reduce the number of mosquito bites in households up to a hundred meters away by reducing mosquito density. The cost of ITNs per DALY saved—estimated at $29—makes ITNs a good-value public health investment. The WHO recommends that all individuals living in malaria-endemic areas sleep under protective ITNs. About 140 million long-lasting ITNs were distributed in high-burden African countries in 2006–2008, and rates of household ownership of ITNs in high-burden countries increased to 31%. Although the RBM partnership has seen modest success, the WHO's 2009 World Malaria Report states that the percentage of children <5 years of age using an ITN (24%) remains well below the World Health Assembly's target of 80%. Limited success in scaling up ITN coverage reflects the inadequately acknowledged economic barriers that prevent the destitute sick from gaining access to critical preventive technologies and the challenges faced in designing and implementing effective delivery platforms for these products. In other words, this is a delivery failure rather than a lack of knowledge of how best to reduce malaria deaths. Indoor Residual Spraying Indoor residual spraying is one of the most common interventions for preventing the transmission of malaria in endemic areas. Vector control using insecticides approved by the WHO, including DDT, can effectively reduce or even interrupt malaria transmission. 
However, studies have indicated that spraying is effective in controlling malaria transmission only if most (~80%) of the structures in the targeted community are treated. Moreover, since a successful program depends on well-trained spraying teams as well as on effective monitoring and planning, indoor residual spraying is difficult to employ and is often reliant on health systems with a strong infrastructure. Regardless of the limitations of indoor residual spraying, the WHO recommends its use in combination with ITNs. Neither intervention alone is sufficient to prevent transmission of malaria entirely. Artemisinin-Based Combination Therapy The emergence and spread of chloroquine resistance have increased the need for antimalarial combination therapy. To limit the spread of resistance, the WHO now recommends that only ACT (as opposed to artemisinin monotherapy) be used for uncomplicated falciparum malaria. Like that of other antimalarial interventions, the use of ACT has increased in the last few years, but coverage rates remain very low in several countries in sub-Saharan Africa. The RBM partnership has invested significantly in measures to enhance access to ACT by facilitating its delivery through the public health sector and developing innovative funding mechanisms (e.g., the Affordable Medicines Facility—malaria) that reduce its cost significantly so that ineffective monotherapies can be eliminated from the market. In the last several years, resistance to antimalarial medicines and insecticides has become an even larger problem than in the past. In 2009, confirmation of artemisinin resistance was reported. Although the WHO has called for an end to the use of artemisinin monotherapy, the marketing of such therapies continues in many countries. Ongoing use of artemisinin monotherapy increases the likelihood of drug resistance, a deadly prospect that will make malaria far more difficult to treat. 
Between 2001 and 2011, global malaria deaths were reduced by an estimated 38%, with reductions of ≥50% in 10 African countries as well as in most endemic countries in other regions. Again the experience in Rwanda is instructive: from 2005 to 2011, malaria deaths dropped by >85% for the same reasons mentioned earlier in recounting that nation’s successes in battling HIV. Meeting the challenge of malaria control will continue to require careful study of appropriate preventive and therapeutic strategies in the context of an increasingly sophisticated molecular understanding of pathogen, vector, and host. However, an appreciation of the economic and social devastation wrought by malaria—like that inflicted by diarrhea, AIDS, and TB—on the most vulnerable populations should heighten the level of commitment to critical analysis of ways to implement proven strategies for prevention and treatment. Funding from the Global Fund, the Gates Foundation, the World Bank’s International Development Association, and the U.S. President’s Malaria Initiative, along with leadership from public health authorities, is critical to sustain the benefits of prevention and treatment. Building on the growing momentum of the last decade with adequate financial support, innovative strategies, and effective tools for prevention, diagnosis, and treatment, we may one day achieve the goal of a world free of malaria. Although the burden of communicable diseases—especially HIV infection, TB, and malaria—still accounts for the majority of deaths in resource-poor regions such as sub-Saharan Africa, 63% of all deaths worldwide in 2008 were held to be due to NCDs. Although we will use this term to describe cardiovascular diseases, cancers, diabetes, and chronic lung diseases, this usage masks important distinctions. 
For instance, two significant NCDs in low-income countries, rheumatic heart disease (RHD) and cervical cancer, represent the chronic sequelae of infections with group A Streptococcus and human papillomavirus, respectively. It is in these countries that the burden of disease due to NCDs is rising most rapidly. Close to 80% of deaths attributable to NCDs occur in low- and middle-income countries, where 86% of the global population lives. The WHO reports that ~25% of global NCD-related deaths take place before the age of 60—a figure representing ~5.7 million people and exceeding the total number of deaths due to AIDS, TB, and malaria combined. In almost all high-income countries, the WHO reported that NCD deaths accounted for ~70% of total deaths in 2008. By 2020, NCDs will account for 80% of the global burden of disease and for 7 of every 10 deaths in developing countries. The recent increase in resources for and attention to communicable diseases is both welcome and long overdue, but developing countries are already carrying a "double burden" of communicable and noncommunicable diseases. Diabetes, Cardiovascular Disease, and Cancer: A Global Perspective In contrast to TB, HIV infection, and malaria—diseases caused by single pathogens that damage multiple organs—cardiovascular diseases reflect injury to a single organ system downstream of a variety of insults, both infectious and noninfectious. Some of these insults result from rapid changes in diet and labor conditions. Other insults are of a less recent vintage. The burden of cardiovascular disease in low-income countries represents one consequence of decades of neglect of health systems. Furthermore, cardiovascular research and investment have long focused on the ischemic conditions that are increasingly common in high- and middle-income countries. 
Meanwhile, despite awareness of its health impact in the early twentieth century, cardiovascular damage in response to infection and malnutrition had fallen out of view until recently. The misperception of cardiovascular diseases as a problem primarily of elderly populations in middle- and high-income countries has contributed to the neglect of these diseases by global health institutions. Even in Eastern Europe and Central Asia, where the collapse of the Soviet Union was followed by a catastrophic surge in cardiovascular disease deaths (mortality rates from ischemic heart disease nearly doubled between 1991 and 1994 in Russia, for example), the modest flow of overseas development assistance to the health sector focused on the communicable causes that accounted for <1 in 20 excess deaths during that period. Diabetes The International Diabetes Federation reports that the number of diabetic patients in the world is expected to increase from 366 million in 2011 to 552 million by 2030. Already, a significant proportion of diabetic patients live in developing countries where, because those affected are far more frequently between ages 40 and 59, the complications of micro- and macrovascular disease take a far greater toll. Globally, these complications are a major cause of disability and reduced quality of life. A high fasting plasma glucose level alone ranks seventh among risks for disability and is the sixth leading risk factor for global mortality. The GBD 2010 estimates that diabetes accounted for 1.28 million deaths in 2010, with almost 80% of those deaths occurring in low- and middle-income countries. Predictions of an imminent rise in the share of deaths and disabilities due to NCDs in developing countries have led to calls for preventive policies to improve diet, increase exercise, and restrict tobacco use, along with the prescription of multidrug regimens for persons at high-level vascular risk. 
Although this agenda could do much to prevent pandemic NCD, it will do little to help persons with established heart disease stemming from nonatherogenic pathologies. Cardiovascular Disease Because systematic investigation of the causes of stroke and heart failure in sub-Saharan Africa has begun only recently, little is known about the impact of elevated blood pressure in this portion of the continent. Modestly elevated blood pressure in the absence of tobacco use in populations with low rates of obesity may confer little risk of adverse events in the short run. In contrast, persistently elevated blood pressure above 180/110 goes largely undetected, untreated, and uncontrolled in this part of the world. In the cohort of men assessed in the Framingham Heart Study, the prevalence of blood pressures above 210/120—severe hypertension—declined from 1.8% in the 1950s to 0.1% by the 1960s with the introduction of effective antihypertensive agents. Although debate continues about appropriate screening strategies and treatment thresholds, rural health centers staffed largely by nurses must quickly gain access to essential antihypertensive medications. The epidemiology of heart failure reflects inequalities in risk factor prevalence and in treatment. The reported burden of this condition has remained unchanged since the 1950s, but the causes of heart failure and the age of the people affected vary across the globe. Heart failure as a consequence of pericardial, myocardial, endocardial, or valvular injury accounts for as many as 5% of all medical admissions to hospitals around the world. In high-income countries, coronary artery disease and hypertension among the elderly account for most cases of heart failure. For example, in the United States, coronary artery disease is present in 60% of patients with heart failure and hypertension in 70%. 
Among the world’s poorest 1 billion people, however, heart failure reflects poverty-driven exposure of children and young adults to rheumatogenic strains of streptococci and cardiotropic microorganisms (e.g., HIV, Trypanosoma cruzi, enteroviruses, M. tuberculosis), untreated high blood pressure, and nutrient deficiencies. The mechanisms underlying other causes of heart failure common in these populations—such as idiopathic dilated cardiomyopathy, peripartum cardiomyopathy, and endomyocardial fibrosis—remain unclear. In stark contrast to the extraordinary lengths to which clinicians in wealthy countries will go to treat ischemic cardiomyopathy, little attention has been paid to young patients with nonischemic cardiomyopathies in resource-poor settings. Nonischemic cardiomyopathies, such as those due to hypertension, RHD, and chronic lung disease, account for >90% of cases of cardiac failure in sub-Saharan Africa and include poorly understood entities such as peripartum cardiomyopathy (which has an incidence in rural Haiti of 1 per 300 live births) and HIV-associated cardiomyopathy. Multidrug regimens that include beta blockers, angiotensin-converting enzyme inhibitors, and other agents can dramatically reduce mortality risk and improve quality of life for these patients. Lessons learned in the scale-up of chronic care for HIV infection and TB may be illustrative as progress is made in establishing the means to deliver heart-failure therapies. Some of the lessons learned from the chronic infections discussed above are, of course, relevant to cardiovascular disease, especially those classified as NCDs but caused by infectious pathogens. Integration of prevention and care remains as important today as in 1960 when Paul Dudley White and his colleagues found little evidence of myocardial infarction in the region near the Albert Schweitzer Hospital in Lambaréné, Gabon, but reported that “the high prevalence of mitral stenosis is astonishing…. 
We believe strongly that it is a duty to help bring to these sufferers the benefits of better penicillin prophylaxis and of cardiac surgery when indicated. The same responsibility exists for those with correctable congenital cardiovascular defects.” RHD affects more than 15 million people worldwide, with more than 470,000 new cases each year. Among the 2.4 million annual cases of pediatric RHD, an estimated 42% occur in sub-Saharan Africa. This disease, which may cause endocarditis or stroke, leads to more than 345,000 deaths per year—almost all occurring in developing countries. Researchers in Ethiopia have reported annual death rates as high as 12.5% in rural areas. In part because the prevention of RHD has not advanced since the disease’s disappearance in wealthy countries, no part of sub-Saharan Africa has eradicated RHD despite examples of success in Costa Rica, Cuba, and some Caribbean nations. A survey of acute heart failure among adults in sub-Saharan Africa showed that ~14.3% of these cases were due to RHD. Strategies to eliminate rheumatic heart disease may depend on active case-finding, with confirmation by echocardiography, among high-risk groups as well as on efforts to expand access to surgical interventions among children with advanced valvular damage. Partnerships between established surgical programs and areas with limited or nonexistent facilities may help expand the capacity to provide life-saving interventions to patients who otherwise would die early and painfully. A long-term goal is the establishment of regional centers of excellence equipped to provide consistent, accessible, high-quality services. Clinicians from tertiary care centers in sub-Saharan Africa and elsewhere have continued to call for prevention and treatment of the cardiovascular conditions of the poor. 
The reconstruction of health services in response to pandemic infectious disease offers an opportunity to identify and treat patients with organ damage and to undertake the prevention of cardiovascular and other chronic conditions of poverty. Cancer Cancers account for ~5% of the global burden of disease. Low- and middle-income countries accounted for more than two-thirds of the 12.6 million cases and 7.6 million deaths due to cancer in 2008. By 2030, annual mortality from cancer will increase by 4 million—with developing countries experiencing a sharper increase than developed nations. "Western" lifestyle changes will be responsible for the increased incidence of cancers of the breast, colon, and prostate among populations in low- and middle-income countries, but historic realities, sociocultural and behavioral factors, genetics, and poverty itself also will have a profound impact on cancer-related mortality and morbidity rates. At least 2 million cancer cases per year—18% of the global cancer burden—are attributable to infectious causes, which are responsible for <10% of cancers in developed countries but account for up to 20% of all malignancies in low- and middle-income countries. Infectious causes of cancer such as human papillomavirus, hepatitis B virus, and Helicobacter pylori will continue to have a much larger impact in developing countries. Environmental and dietary factors, such as indoor air pollution and high-salt diets, also contribute to increased rates of certain cancers (e.g., lung and gastric cancers). Tobacco use (both smoking and chewing) is the most important source of increased mortality rates from lung and oral cancers. In contrast to decreasing tobacco use in many developed countries, the number of smokers is growing in developing countries, especially among women and young persons. For many reasons, outcomes of malignancies are far worse in developing countries than in developed nations. 
As currently funded, overstretched health systems in poor countries are not capable of early detection; the majority of patients already have incurable malignancies at diagnosis. Treatment of cancers is available for only a very small number of mostly wealthy citizens in the majority of poor countries, and, even when treatment is available, the range and quality of services are often substandard. Yet this need not be the future. Only a decade ago, MDR-TB and HIV infection were considered untreatable in settings of great poverty. The feasibility of creating innovative programs that reduce technical and financial barriers to the provision of care for treatable malignancies among the world's poorest populations is now clear (Fig. 2-4). Several middle-income countries, including Mexico, have expanded publicly funded cancer care to reach poorer populations. This commitment of resources has dramatically improved outcomes for cancers, from childhood leukemia to cervical cancer. Prevention of Noncommunicable Diseases False debates, including those pitting prevention against care, continue in global health and reflect, in part, outmoded paradigms or a partial understanding of disease burden and etiology as well as the dramatic variations in risk within a single nation. Moreover, debates are sometimes politicized as a result of vested interests. For example, in 2004, the WHO released its Global Strategy on Diet, Physical Activity, and Health, which focused on the population-wide promotion of healthy diet and regular physical activity in an effort to reduce the growing global problem of obesity. Passing this strategy at the World Health Assembly proved difficult because of strong opposition from the food industry and from a number of WHO member states, including the United States. 
Although globalization has had many positive effects, one negative effect has been the growth in both developed and developing countries of well-financed lobbies that have aggressively promoted unhealthy dietary changes and increased consumption of alcohol and tobacco. Foreign direct investment in tobacco, beverage, and food products in developing countries reached $90.3 billion in 2010—a figure nearly 490 times greater than the $185 million spent during that year to address NCDs by bilateral funding agencies, the WHO, the World Bank, and all other sources of development assistance for health combined. Investment in curbing NCDs remains disproportionately low despite the WHO's 2008–2013 Action Plan for the Global Strategy for the Prevention and Control of Noncommunicable Diseases. Figure 2-4 An 11-year-old Rwandan patient with embryonal rhabdomyosarcoma before (left) and after (right) 48 weeks of chemotherapy plus surgery. Five years later, she is healthy with no evidence of disease. The WHO estimates that 80% of all cases of cardiovascular disease and type 2 diabetes as well as 40% of all cancers can be prevented through healthier diets, increased physical activity, and avoidance of tobacco. These estimates mask large local variations. Although some evidence indicates that population-based measures can have some impact on these behaviors, it is sobering to note that increasing obesity levels have not been reversed in any population. Tobacco avoidance may be the most important and most difficult behavioral modification of all. In the twentieth century, 100 million people worldwide died of tobacco-related diseases; it is projected that more than 1 billion people will die of these diseases in the twenty-first century, with the vast majority of those deaths in developing countries. The WHO's 2003 Framework Convention on Tobacco Control represented a major advance, committing all of its signatories to a set of policy measures shown to reduce tobacco consumption. 
Today, ~80% of the world’s 1 billion smokers live in low- and middle-income countries. If trends continue, tobacco-related deaths will increase to 8 million per year by 2030, with 80% of those deaths in low- and middle-income countries. The WHO reports that some 450 million people worldwide are affected by mental, neurologic, or behavioral problems at any given time and that ~877,000 people die by suicide every year. Major depression is the leading cause of years lost to disability in the world today. One in four patients visiting a health service has at least one mental, neurologic, or behavioral disorder, but most of these disorders are neither diagnosed nor treated. Most low- and middle-income countries devote <1% of their health expenditures to mental health. Increasingly effective therapies exist for many of the major causes of mental disorders. Effective treatments for many neurologic diseases, including seizure disorders, have long been available. One of the greatest barriers to delivery of such therapies is the paucity of skilled personnel. Most sub-Saharan African countries have only a handful of psychiatrists, for example; most of them practice in cities and are unavailable within the public sector or to patients living in poverty. Among the few patients who are fortunate enough to see a psychiatrist or neurologist, fewer still are able to adhere to treatment regimens: several surveys of already diagnosed patients ostensibly receiving daily therapy have revealed that, among the poor, multiple barriers prevent patients from taking their medications as prescribed. In one study from Kenya, no patients being seen in an epilepsy clinic had therapeutic blood levels of anti-seizure medications, even though all had had these drugs prescribed. Moreover, many patients had no detectable blood levels of these agents. 
The same barriers that prevent the poor from having reliable access to insulin or ART prevent them from benefiting from antidepressant, antipsychotic, and antiepileptic agents. To alleviate this problem, some authorities are proposing the training of health workers to provide community-based adherence support, counseling services, and referrals for patients in need of mental health services. One such program instituted in Goa, India, used "lay" counselors and resulted in a significant reduction in symptoms of common mental disorders among the target population. World Mental Health: Problems and Priorities in Low-Income Countries still offers a comprehensive analysis of the burden of mental, behavioral, and social problems in low-income countries and relates the mental health consequences of social forces such as violence, dislocation, poverty, and disenfranchisement of women to current economic, political, and environmental concerns. In the years since this report was published, however, a number of pilot projects designed to deliver community-based care to patients with chronic mental illness have been launched in settings as diverse as Goa, India; Banda Aceh, Indonesia; rural China; post-earthquake Haiti; and Fiji. Some of these programs have been school-based and have sought to link prevention to care. CONCLUSION: TOWARD A SCIENCE OF IMPLEMENTATION Public health strategies draw largely on quantitative methods—epidemiology, biostatistics, and economics. Clinical practice, including the practice of internal medicine, draws on a rapidly expanding knowledge base but remains focused on individual patient care; clinical interventions are rarely population-based. But global health equity depends on avoiding the false debates of the past: neither public health nor clinical approaches alone are adequate to address the problems of global health. There is a long way to go before evidence-based internal medicine is applied effectively among the world's poor. 
Complex infectious diseases such as HIV/AIDS and TB have proved difficult but not impossible to manage; drug resistance and lack of effective health systems have complicated such work. Beyond what is usually termed “communicable diseases”—i.e., in the arena of chronic diseases such as cardiovascular disease and mental illness—global health is a nascent endeavor. Efforts to address any one of these problems in settings of great scarcity need to be integrated into broader efforts to strengthen failing health systems and alleviate the growing personnel crisis within these systems. Such efforts must include the building of “platforms” for care delivery that are robust enough to incorporate new preventive, diagnostic, and therapeutic technologies rapidly in response to changes both in the burden of disease and in the needs not met by dominant paradigms and systems of health delivery. Academic medical centers have tried to address this “know–do” gap as new technologies are introduced and assessed through clinical trials, but the reach of these institutions into settings of poverty is limited in rich and poor countries alike. When such centers link their capacities effectively to the public institutions charged with the delivery of health care to the poor, great progress can be made. For these reasons, scholarly work and practice in the field once known as “international health” and now often designated “global health equity” are changing rapidly. That work is still informed by the tension between clinical practice and population-based interventions, between analysis and action, and between prevention and care. Once metrics are refined, how might they inform efforts to lessen premature morbidity and mortality rates among the world’s poor? As in the nineteenth century, human rights perspectives have proved helpful in turning attention to the problems of the destitute sick; such perspectives may also inform strategies for delivering care equitably. 
A number of university hospitals are developing training programs for physicians with an interest in global health. In medical schools across the United States and in other wealthy countries, interest in global health has exploded. One study has shown that more than 25% of medical students take part in at least one global health experience prior to graduation. Half a century or even a decade ago, such high levels of interest would have been unimaginable. An estimated 12 million people die each year simply because they live in poverty. An absolute majority of these premature deaths occur in Africa, with the poorer regions of Asia not far behind. Most of these deaths occur because the world's poorest do not have access to the fruits of science. They include deaths from vaccine-preventable illness, deaths during childbirth, deaths from infectious diseases that might be cured with access to antibiotics and other essential medicines, deaths from malaria that would have been prevented by bed nets and access to therapy, and deaths from waterborne illnesses. Other excess mortality is attributable to the inadequacy of efforts to develop new preventive, diagnostic, and therapeutic tools. Those funding the discovery and development of new tools typically neglect the concurrent need for strategies to make them available to the poor. Indeed, some would argue that the biggest challenge facing those who seek to address this outcome gap is the lack of practical means of distribution in the most heavily affected regions. The development of tools must be followed quickly by their equitable distribution. When new preventive and therapeutic tools are developed without concurrent attention to delivery or implementation, one encounters what are sometimes termed perverse effects: even as new tools are developed, inequalities of outcome—lower morbidity and mortality rates among those who can afford access, with sustained high morbidity and mortality among those who cannot—will grow in the absence of an equity plan to deliver the tools to those most at risk. Preventing such a future is the most important goal of global health.

Decision-Making in Clinical Medicine
Daniel B. Mark, John B. Wong

INTRODUCTION To a medical student who requires hours to collect a patient's history, perform a physical examination, and organize that information into a coherent presentation, an experienced clinician's ability to decide on a diagnosis and management plan in minutes may seem extraordinary. What separates the master clinician from the novice is an elusive quality called "expertise." The first part of this chapter provides an overview of our current understanding of expertise in clinical reasoning, what it is, and how it can be developed. The proper use of diagnostic tests and the integration of the results into the patient's clinical assessment may be equally bewildering to students. Hoping to hit the unknown diagnostic target, novice medical practitioners typically apply a "shotgun" approach to testing. The expert, in contrast, usually focuses her testing strategy on specific diagnostic hypotheses. The second part of the chapter reviews basic statistical concepts useful for interpreting diagnostic tests and quantitative tools useful for clinical decision-making. Evidence-based medicine (EBM) constitutes the integration of the best available research evidence with clinical judgment as applied to the care of individual patients. The third part of the chapter provides an overview of the tools of EBM. BRIEF INTRODUCTION TO CLINICAL REASONING Clinical Expertise Defining "clinical expertise" remains surprisingly difficult. Chess has an objective ranking system based on skill and performance criteria. Athletics, similarly, have ranking systems to distinguish novices from Olympians. 
But in medicine, after physicians complete training and pass the boards, no further tests or benchmarks identify those who have attained the highest levels of clinical performance. Of course, physicians often consult a few "elite" clinicians for their "special problem-solving prowess" when particularly difficult or obscure cases have baffled everyone else. Yet despite their skill, even master clinicians typically cannot explain their exact processes and methods, thereby limiting the acquisition and dissemination of the expertise used to achieve their impressive results. Furthermore, clinical virtuosity appears not to be generalizable; e.g., an expert on hypertrophic cardiomyopathy may be no better (and possibly worse) than a first-year medical resident at diagnosing and managing a patient with neutropenia, fever, and hypotension. Broadly construed, clinical expertise includes not only cognitive dimensions and the integration of verbal and visual cues but also the complex fine-motor skills necessary for invasive and noninvasive procedures and tests. In addition, "the complete package" of expertise in medicine includes the ability to communicate effectively with patients and to work well with members of the medical team. Research on medical expertise remains relatively sparse, with most of the work focused on diagnostic reasoning and much less on treatment decisions or the technical skills involved in performing procedures. Thus, in this chapter, we focus primarily on the cognitive elements of clinical reasoning. Because clinical reasoning takes place in the heads of doctors, it is not directly observable and is therefore difficult to study. One research method asks doctors to "think out loud" as they receive increments of clinical information in a manner meant to simulate a clinical encounter.
Another research approach has focused on how doctors should reason diagnostically rather than on how they actually do reason. Much of what is known about clinical reasoning comes from empirical studies of nonmedical problem-solving behavior. Because of the diverse perspectives contributing to this area, including cognitive psychology, sociology, medical education, economics, informatics, and the decision sciences, no single integrated model of clinical reasoning exists, and not infrequently, different terms and models describe similar phenomena. Intuitive versus Analytic Reasoning Dual-process theory, a contemporary model of reasoning, distinguishes two general systems of cognitive processes. Intuition (System 1) provides rapid, effortless judgments from memorized associations using pattern recognition and other simplifying "rules of thumb" (i.e., heuristics). For example, a very simple pattern that could be useful in certain situations is "African-American women plus hilar adenopathy equals sarcoid." Because no effort is involved in recalling the pattern, the clinician typically is unable to say how those judgments were formulated. In contrast, analysis (System 2), the other form of reasoning in the dual-process model, is slow, methodical, deliberative, and effortful. These are, of course, idealized extremes of the cognitive continuum. How these systems interact in different decision problems, how experts use them differently from novices, and when their use can lead to errors in judgment remain the subject of considerable study and debate. Pattern recognition is a complex cognitive process that appears largely effortless. One can recognize people's faces, the breed of a dog, or an automobile model without necessarily being able to say what specific features prompted the recognition. Analogously, experienced clinicians often recognize familiar diagnostic patterns quickly.
In the absence of an extensive stored repertoire of diagnostic patterns, students (as well as more experienced clinicians operating outside their area of expertise) often use the more laborious System 2 analytic approach along with more intensive and comprehensive data collection to reach the diagnosis. The following three brief scenarios of a patient with hemoptysis demonstrate three distinct patterns: A 46-year-old man presents to his internist with a chief complaint of hemoptysis. An otherwise healthy nonsmoker, he is recovering from an apparent viral bronchitis. This presentation pattern suggests that the small amount of blood-streaked sputum is due to acute bronchitis, so that a chest x-ray provides sufficient reassurance that a more serious disorder is absent. In the second scenario, a 46-year-old patient who has the same chief complaint but with a 100-pack-year smoking history, a productive morning cough, and episodes of blood-streaked sputum fits the pattern of carcinoma of the lung. Consequently, along with the chest x-ray, the physician obtains a sputum cytology examination and refers this patient for a chest computed tomography (CT) scan. In the third scenario, a 46-year-old patient with hemoptysis who immigrated from a developing country has an echocardiogram as well, because the physician hears a soft diastolic rumbling murmur at the apex on cardiac auscultation, suggesting rheumatic mitral stenosis and possibly pulmonary hypertension. Although rapid, pattern recognition used without sufficient reflection can result in premature closure: mistakenly concluding that one already knows the correct diagnosis and therefore failing to complete the data collection that would demonstrate the lack of fit of the initial pattern selected. For example, a 45-year-old man presents with a 3-week history of a “flulike” upper respiratory infection (URI) including symptoms of dyspnea and a productive cough. 
On the basis of the presenting complaints, the clinician uses a “URI assessment form” to improve the quality and efficiency of care by standardizing the information gathered. After quickly acquiring the requisite structured examination components and noting in particular the absence of fever and a clear chest examination, the physician prescribes medication for acute bronchitis and sends the patient home with the reassurance that his illness was not serious. Following a sleepless night with significant dyspnea, the patient develops nausea and vomiting and collapses. He presents to the emergency department in cardiac arrest and is unable to be resuscitated. His autopsy shows a posterior wall myocardial infarction and a fresh thrombus in an atherosclerotic right coronary artery. What went wrong? The clinician had decided, based on the patient’s appearance, even before starting the history, that the patient’s complaints were not serious. Therefore, he felt confident that he could perform an abbreviated and focused examination by using the URI assessment protocol rather than considering the broader range of possibilities and performing appropriate tests to confirm or refute his initial hypotheses. In particular, by concentrating on the URI, the clinician failed to elicit the full dyspnea history, which would have suggested a far more serious disorder, and he neglected to search for other symptoms that could have directed him to the correct diagnosis. Heuristics, also referred to as cognitive shortcuts or rules of thumb, are simplifying decision strategies that ignore part of the data available so as to provide an efficient path to the desired judgment. They are generally part of the intuitive system tools. Two major research programs have come to different conclusions about the value of heuristics in clinical judgment. 
The "heuristics and biases" program focused on understanding how heuristics in problem solving could be biased, testing the numerical intuition of psychology undergraduates against the rules of statistics. In contrast, the "fast and frugal heuristics" research program explored how and when decision makers' reliance on simple heuristics can produce good decisions. Although many heuristics have relevance to clinical reasoning, only four will be mentioned here. When assessing a particular patient, clinicians often weigh the similarity of that patient's symptoms, signs, and risk factors against those of their mental representations of the diagnostic hypotheses being considered. In other words, among the diagnostic possibilities, clinicians identify the diagnosis for which the patient appears to be a representative example. Analogous to pattern recognition, this cognitive shortcut is called the representativeness heuristic. However, physicians using the representativeness heuristic can reach erroneous conclusions if they fail to consider the underlying prevalence (i.e., the prior, or pretest, probabilities) of the competing diagnoses that could explain the patient's symptoms. Consider a patient with hypertension accompanied by headache, palpitations, and diaphoresis. Inexperienced clinicians might judge pheochromocytoma to be quite likely on the basis of the representativeness heuristic, because this classic symptom triad suggests the diagnosis. Doing so would be incorrect given that other causes of hypertension are much more common than pheochromocytoma and that this triad of symptoms can occur in patients who do not have pheochromocytoma. Less experience with a particular diagnosis and with the breadth of its presentations (e.g., diseases that affect multiple organ systems, such as sarcoid) may also lead to errors. A second commonly used cognitive shortcut, the availability heuristic, involves judgments based on how easily prior similar cases or outcomes can be brought to mind.
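The base-rate point behind the pheochromocytoma example can be made concrete with Bayes' rule, which is developed more fully later in this chapter. The numbers below are purely illustrative assumptions, not published estimates of how often the triad occurs:

```python
# Hypothetical illustration of base-rate neglect: even a "classic" symptom
# triad leaves a rare diagnosis unlikely when its prior probability is low.
def posttest_probability(pretest, sensitivity, specificity):
    """Bayes' rule for the probability of disease after a positive finding."""
    true_pos = pretest * sensitivity
    false_pos = (1.0 - pretest) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Assumed numbers: the triad occurs in 90% of pheochromocytoma patients,
# in 5% of hypertensive patients without it, and pheochromocytoma accounts
# for roughly 0.2% of hypertension.
p = posttest_probability(pretest=0.002, sensitivity=0.90, specificity=0.95)
print(round(p, 3))  # ~0.035: still an unlikely diagnosis despite the triad
```

Even with a highly "representative" presentation, the low prior keeps the posttest probability under 4% in this sketch, which is the quantitative content of the base-rate caution above.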
For example, an experienced clinician may recall 20 elderly patients seen over the last few years who presented with painless dyspnea of acute onset and were found to have acute myocardial infarction (MI). A novice clinician may spend valuable time seeking a pulmonary cause for the symptoms before considering and then confirming the cardiac diagnosis. In this situation, the patient’s clinical pattern does not fit the most common pattern of acute MI, but experience with this atypical presentation, along with the ability to recall it, directs the physician to the diagnosis. Errors with the availability heuristic arise from several sources of recall bias. Rare catastrophes are likely to be remembered with a clarity and force disproportionate to their likelihood for future diagnosis— for example, a patient with a sore throat eventually found to have leukemia or a young athlete with leg pain eventually found to have a sarcoma—and those publicized in the media or that are recent experiences are, of course, easier to recall and therefore more influential on clinical judgments. The third commonly used cognitive shortcut, the anchoring heuristic (also called conservatism or stickiness), involves estimating a probability of disease (the anchor) and then insufficiently adjusting that probability up or down (compared with Bayes’ rule) when interpreting new data about the patient, i.e., sticking to their initial diagnosis. For example, a clinician may still judge the probability of coronary artery disease (CAD) to be high after a negative exercise thallium test and proceed to cardiac catheterization (see “Measures of Disease Probability and Bayes’ Rule,” below). The fourth heuristic states that clinicians should use the simplest explanation possible that will account adequately for the patient’s symptoms or findings (Occam’s razor or, alternatively, the simplicity heuristic). 
Although this is an attractive and often used principle, it is important to remember that no biologic basis for it exists. Errors from the simplicity heuristic include premature closure leading to the neglect of unexplained significant symptoms or findings. Even experienced physicians use analytic reasoning processes (System 2) when the problem they face is recognized to be complex or to involve important unfamiliar elements or features. In such situations, clinicians proceed much more methodically in what has been referred to as the hypothetico-deductive model of reasoning. From the outset, expert clinicians working analytically generate, refine, and discard diagnostic hypotheses. The hypotheses drive questions asked during history taking and may change based on the working hypotheses of the moment. Even the physical examination is focused by the working hypotheses. Is the spleen enlarged? How big is the liver? Is it tender? Are there any palpable masses or nodules? Each question must be answered (with the exclusion of all other inputs) before the examiner can move on to the next specific question. Each diagnostic hypothesis provides testable predictions and sets a context for the next question or step to follow. For example, if the enlarged and quite tender liver felt on physical examination is due to acute hepatitis (the hypothesis), certain specific liver function tests should be markedly elevated (the prediction). If the tests come back normal, the hypothesis may have to be discarded or substantially modified. Negative findings often are neglected but are as important as positive ones because they often reduce the likelihood of the diagnostic hypotheses under consideration. Chest discomfort that is not provoked or worsened by exertion in an active patient reduces the likelihood that chronic ischemic heart disease is the underlying cause. 
The absence of a resting tachycardia and thyroid gland enlargement reduces the likelihood of hyperthyroidism in a patient with paroxysmal atrial fibrillation. The acuity of a patient’s illness may override considerations of prevalence and the other issues described above. “Diagnostic imperatives” recognize the significance of relatively rare but potentially catastrophic diagnoses if undiagnosed and untreated. For example, clinicians are taught to consider aortic dissection routinely as a possible cause of acute severe chest discomfort. Even though the typical history of dissection differs from that of MI, dissection is far less prevalent, so diagnosing dissection remains challenging unless it is explicitly and routinely considered as a diagnostic imperative (Chap. 301). If the clinician fails to elicit any of the characteristic features of dissection by history and finds equivalent blood pressures in both arms and no pulse deficits, he may feel comfortable discarding the aortic dissection hypothesis. If, however, the chest x-ray shows a possible widened mediastinum, the hypothesis may be reinstated and an appropriate imaging test ordered (e.g., thoracic CT scan, transesophageal echocardiogram) to evaluate more fully. In nonacute situations, the prevalence of potential alternative diagnoses should play a much more prominent role in diagnostic hypothesis generation. Cognitive scientists studying the thought processes of expert clinicians have observed that clinicians group data into packets, or “chunks,” that are stored in short-term or “working memory” and manipulated to generate diagnostic hypotheses. Because short-term memory can typically retain only 5–9 items at a time, the number of packets that can be actively integrated into hypothesis-generating activities is similarly limited. 
For this reason, the cognitive shortcuts discussed above play a key role in the generation of diagnostic hypotheses, many of which are discarded as rapidly as they are formed (thereby demonstrating that the distinction between analytic and intuitive reasoning is an arbitrary and simplistic, but nonetheless useful, representation of cognition). Research into the hypothetico-deductive model of reasoning has had surprising difficulty identifying the elements of the reasoning process that distinguish experts from novices. This has led to a shift from examining the problem-solving process of experts to analyzing the organization of their knowledge. For example, diagnosis may be based on the resemblance of a new case to prior individual instances (exemplars). Experts have a much larger store of memorized cases, for example, visual long-term memory in radiology. However, clinicians do not simply rely on literal recall of specific cases but have constructed elaborate conceptual networks of memorized information or models of disease to aid in arriving at their conclusions. That is, expertise involves an increased ability to connect symptoms, signs, and risk factors to one another in meaningful ways; relate those findings to possible diagnoses; and identify the additional information necessary to confirm the diagnosis. No single theory accounts for all the key features of expertise in medical diagnosis. Experts have more knowledge about more things and a larger repertoire of cognitive tools to employ in problem solving than do novices. One definition of expertise highlights the ability to make powerful distinctions. In this sense, expertise involves a working knowledge of the diagnostic possibilities and what features distinguish one disease from another. Memorization alone is insufficient. Memorizing a medical textbook would not make one an expert. But having access to detailed and specific relevant information is critically important. 
Clinicians of the past primarily accessed their own remembered experience. Clinicians of the future will be able to access the experience of large numbers of clinicians using electronic tools, but, as with the memorized textbook, the data alone will not create an instant expert. The expert adds these data to an extensive internalized database of knowledge and experience not available to the novice (and nonexpert). Despite all the work that has been done to understand expertise, in medicine and other disciplines, it remains uncertain whether there is any didactic program that can accelerate the progression from novice to expert or from experienced clinician to master clinician. Deliberate effortful practice (over an extended period of time, sometimes said to be 10 years or 10,000 practice hours) and personal coaching are two strategies that are often used outside medicine (e.g., music, athletics, chess) to promote expertise. Their use in developing medical expertise and maintaining or enhancing it has not yet been adequately explored. The modern ideal of medical therapeutic decision making is to “personalize” the recommendation. In the abstract, personalizing treatment involves combining the best available evidence about what works with an individual patient’s unique features (e.g., risk factors) and his or her preferences and health goals to craft an optimal treatment recommendation with the patient. Operationally, there are two different and complementary levels of personalization possible: individualizing the evidence for the specific patient based on relevant clinical and other characteristics, and personalizing the patient interaction by incorporating their values, often referred to as shared decision-making, which is critically important, but falls outside the scope of this chapter. Individualizing the evidence about therapy does not mean relying on physician impressions of what works based on personal experience. 
Because of small sample sizes and rare events, the chance of drawing erroneous causal inferences from one’s own clinical experience is very high. For most chronic diseases, therapeutic effectiveness is only demonstrable statistically in patient populations. It would be incorrect to infer with any certainty, for example, that treating a hypertensive patient with angiotensin-converting enzyme (ACE) inhibitors necessarily prevented a stroke from occurring during treatment, or that an untreated patient would definitely have avoided a stroke had he or she been treated. For many chronic diseases, a majority of patients will remain event free regardless of treatment choices; some will have events regardless of which treatment is selected; and those who avoided having an event through treatment cannot be individually identified. Blood pressure lowering, a readily observable surrogate endpoint, does not have a tightly coupled relationship with strokes prevented. Consequently, demonstrating therapeutic effectiveness cannot rely simply on observing the outcome of an individual patient but should instead be based on large groups of patients carefully studied and properly analyzed. Therapeutic decision making, therefore, should be based on the best available evidence from clinical trials and well-done outcome studies. Authoritative, well-done clinical practice guidelines that synthesize such evidence offer readily available, reliable, and trustworthy information relevant to many treatment decisions clinicians face. However, all guidelines recognize that their “one size fits all” recommendations may not apply to individual patients. Increased attention is now being paid to understand how best to adjust group-level clinical evidence of treatment harms and benefits to account for the absolute level of risks faced by subgroups and even individual patients, using, for example, validated clinical risk scores. 
More than a decade of research on variations in clinician practice patterns has shed much light on the forces that shape clinical decisions. These factors can be grouped conceptually into three overlapping categories: (1) factors related to physicians’ personal characteristics and practice style, (2) factors related to the practice setting, and (3) factors related to economic incentives. Factors Related to Practice Style To ensure that necessary care is provided at a high level of quality, physicians fulfill a key role in medical care by serving as the patient’s agent. Factors that influence performance in this role include the physician’s knowledge, training, and experience. Clearly, physicians cannot practice EBM (described later in the chapter) if they are unfamiliar with the evidence. As would be expected, specialists generally know the evidence in their field better than do generalists. Beyond published evidence and practice guidelines, a major set of influences on physician practice can be subsumed under the general concept of “practice style.” The practice style serves to define norms of clinical behavior. Beliefs about effectiveness of different therapies and preferred patterns of diagnostic test use are examples of different facets of a practice style. The physician beliefs that drive these different practice styles may be based on personal experience, recollection, and interpretation of the available medical evidence. For example, heart failure specialists are much more likely than generalists to achieve target doses of ACE inhibitor therapy in their heart failure patients because they are more familiar with what the targets are (as defined by large clinical trials), have more familiarity with the specific drugs (including adverse effects), and are less likely to overreact to foreseeable problems in therapy such as a rise in creatinine levels or asymptomatic hypotension. 
Beyond the patient’s welfare, physician perceptions about the risk of a malpractice suit resulting from either an erroneous decision or a bad outcome may drive clinical decisions and create a practice referred to as defensive medicine. This practice involves using tests and therapies with very small marginal benefit, ostensibly to preclude future criticism should an adverse outcome occur. Without any conscious awareness of a connection to the risk of litigation, however, over time such patterns of care may become accepted as part of the practice norm, thereby perpetuating their overuse, e.g., annual cardiac exercise testing in asymptomatic patients. Practice Setting Factors Factors in this category relate to the physical resources available to the physician’s practice and the practice environment. Physician-induced demand is a term that refers to the repeated observation that once medical facilities and technologies are made available to physicians, they will use them. Other environmental factors that can influence decision-making include the local availability of specialists for consultations and procedures; “high-tech” advanced imaging or procedure facilities such as MRI machines and proton beam therapy centers; and fragmentation of care. Economic Incentives Economic incentives are closely related to the other two categories of practice-modifying factors. Financial issues can exert both stimulatory and inhibitory influences on clinical practice. In general, physicians are paid on a fee-for-service, capitation, or salary basis. In fee-for-service, physicians who do more get paid more, thereby encouraging overuse, consciously or unconsciously. When fees are reduced (discounted reimbursement), doctors tend to increase the number of services provided to maintain revenue. 
Capitation, in contrast, provides a fixed payment per patient per year to encourage physicians to consider a global population budget in managing individual patients and, ideally, to reduce the use of interventions with small marginal benefit. In contrast to inexpensive preventive services, however, this type of incentive is more likely to affect expensive interventions. To discourage volume-based excessive utilization, fixed salary compensation plans pay physicians the same regardless of the clinical effort expended, but may provide an incentive to see fewer patients. Despite the great technological advances in medicine over the last century, uncertainty remains a key challenge in all aspects of medical decision-making. Compounding this challenge is the massive information overload that characterizes modern medicine. Today's clinician needs access to close to 2 million pieces of information to practice medicine. According to one estimate, doctors subscribe to an average of seven journals, representing over 2500 new articles each year. Of course, to be useful, this information must be sifted for applicability to and then integrated with patient-specific data. Although computers appear to offer an obvious solution both for information management and for quantification of medical care uncertainties, many practical problems must be solved before computerized decision support can be routinely incorporated into the clinical reasoning process in a way that demonstrably improves the quality of care. For the present, understanding the nature of diagnostic test information can help clinicians become more efficient users of such data. The next section reviews important concepts related to diagnostic testing. DIAGNOSTIC TESTING: MEASURES OF TEST ACCURACY The purpose of performing a test on a patient is to reduce uncertainty about the patient's diagnosis or prognosis in order to facilitate optimal management.
Although diagnostic tests commonly are thought of as laboratory tests (e.g., blood count) or procedures (e.g., colonoscopy or bronchoscopy), any technology that changes a physician’s understanding of the patient’s problem qualifies as a diagnostic test. Thus, even the history and physical examination can be considered a form of diagnostic test. In clinical medicine, it is common to reduce the results of a test to a dichotomous outcome, such as positive or negative, normal or abnormal. Although this simplification ignores useful information (such as the degree of abnormality), such simplification does make it easier to demonstrate the fundamental principles of test interpretation discussed below. The accuracy of diagnostic tests is defined in relation to an accepted “gold standard,” which defines the presumably true state of the patient (Table 3-1). Characterizing the diagnostic performance of a new test requires identifying an appropriate population (ideally, patients in whom the new test would be used) and applying both the new and the gold standard tests to all subjects. Biased estimates of test performance may occur from using an inappropriate population or from incompletely applying the gold standard test. By comparing the two tests, the characteristics of the new test are determined. The sensitivity or true-positive rate of the new test is the proportion of patients with disease (defined by the gold standard) who have a positive (new) test. This measure reflects how well the new test identifies patients with disease. The proportion of patients with disease who have a negative test is the false-negative rate and is calculated as 1 – sensitivity. Among patients without disease, the proportion who have a negative test is the specificity, or true-negative rate. This measure reflects how well the new test correctly identifies patients without disease. 
Among patients without disease, the proportion who have a positive test is the false-positive rate, calculated as 1 – specificity. A perfect test would have a sensitivity of 100% and a specificity of 100% and would completely distinguish patients with disease from those without it. Calculating sensitivity and specificity requires selection of a threshold value or cut point above which the test is considered “positive.” Making the cut point “stricter” (e.g., raising it) lowers sensitivity but improves specificity, whereas making it “laxer” (e.g., lowering it) raises sensitivity but lowers specificity. This dynamic trade-off between more accurate identification of subjects with disease versus those without disease is often displayed graphically as a receiver operating characteristic (ROC) curve (Fig. 3-1) by plotting sensitivity (y axis) versus 1 – specificity (x axis). Each point on the curve represents a potential cut point with an associated sensitivity and specificity value. The area under the ROC curve often is used as a quantitative measure of the information content of a test. Values range from 0.5 (no diagnostic information from testing at all; the test is equivalent to flipping a coin) to 1.0 (perfect test). The choice of cut point should depend on the relative harms and benefits of treatment for those without versus those with disease. For example, if treatment was safe with substantial benefit, then choosing a high-sensitivity cut point (upper right of the ROC curve) for a low-risk test may be appropriate (e.g., phenylketonuria in newborns), but if treatment had substantial risk for harm, then choosing a high-specificity cut point (lower left of the ROC curve) may be appropriate (e.g., amniocentesis that may lead to therapeutic abortion of a normal fetus). 
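The cut-point sweep that generates an ROC curve, and the area under it, can be sketched in a few lines. The test scores below are invented for illustration; the trapezoidal rule is one standard way to compute the area:

```python
# ROC sketch: each cut point yields one (1 - specificity, sensitivity)
# point; the area under the curve is computed with the trapezoidal rule.
def roc_points(diseased_scores, healthy_scores):
    thresholds = sorted(set(diseased_scores + healthy_scores), reverse=True)
    points = [(0.0, 0.0)]  # strictest cut point: nothing called positive
    for t in thresholds:   # progressively "laxer" cut points
        sens = sum(s >= t for s in diseased_scores) / len(diseased_scores)
        fpr = sum(s >= t for s in healthy_scores) / len(healthy_scores)
        points.append((fpr, sens))
    points.append((1.0, 1.0))  # laxest cut point: everything positive
    return points

def auc(points):
    # Trapezoidal rule over successive (1 - specificity, sensitivity) points.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

diseased = [7, 8, 9, 6, 8]  # hypothetical test values in disease
healthy = [3, 5, 4, 6, 2]   # hypothetical test values without disease
print(round(auc(roc_points(diseased, healthy)), 2))  # 0.98
```

A test whose score distributions overlap completely would trace the 45° diagonal and give an area of 0.5; the near-complete separation of the invented scores above gives an area close to 1.0.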
The choice of cut point may also depend on the likelihood of disease, with low likelihoods placing a greater emphasis on the harms of treating false-positive tests and higher likelihoods placing a greater emphasis on missed benefit by not treating false-negative tests. Unfortunately, there are no perfect tests. After every test is completed, the true disease state of the patient remains uncertain. Quantifying this residual uncertainty can be done with Bayes' rule, which provides a simple way to calculate the likelihood of disease after a test result or posttest probability from three parameters: the pretest probability of disease, the test sensitivity, and the test specificity. The pretest probability is a quantitative estimate of the likelihood of the diagnosis before the test is performed and is usually the prevalence of the disease in the underlying population, although occasionally it can be the disease incidence. For some common conditions, such as CAD, nomograms and statistical models generate estimates of pretest probability that account for history, physical examination, and test findings. The posttest probability (also called the predictive value of the test) is a revised statement of the likelihood of the diagnosis, accounting for both pretest probability and test results. 

FIGURE 3-1 Each receiver operating characteristic (ROC) curve illustrates a trade-off that occurs between improved test sensitivity (accurate detection of patients with disease) and improved test specificity (accurate detection of patients without disease), because the test value defining when the test turns from "negative" to "positive" is varied. A 45° line would indicate a test with no predictive value (sensitivity = specificity at every test value). The area under each ROC curve is a measure of the information content of the test. Thus, a larger ROC area signifies increased diagnostic accuracy. 
For the likelihood of disease following a positive test (i.e., positive predictive value), Bayes' rule is calculated as:

Posttest probability = (pretest probability × sensitivity) / [pretest probability × sensitivity + (1 − pretest probability) × (1 − specificity)]

For example, with a pretest probability of 0.50 and a "positive" diagnostic test result (test sensitivity = 0.90 and specificity = 0.90):

Posttest probability = (0.50 × 0.90) / [0.50 × 0.90 + (1 − 0.50) × 0.10] = 0.90

The term predictive value often is used as a synonym for the posttest probability. Unfortunately, clinicians commonly misinterpret reported predictive values as intrinsic measures of test accuracy. Studies of diagnostic tests compound the confusion by calculating predictive values on the same sample used to measure sensitivity and specificity. Since all posttest probabilities are a function of the prevalence of disease in the tested population, such calculations may be misleading unless the test is applied subsequently to populations with the same disease prevalence. For these reasons, the term predictive value is best avoided in favor of the more informative posttest probability following a positive or a negative test result. The nomogram version of Bayes' rule (Fig. 3-2) helps us to conceptually understand how it estimates the posttest probability of disease. In this nomogram, the impact of the diagnostic test result is summarized by the likelihood ratio, which is defined as the ratio of the probability of a given test result (e.g., "positive" or "negative") in a patient with disease to the probability of that result in a patient without disease, thereby providing a measure of how well the test distinguishes those with from those without disease. For a positive test, the likelihood ratio positive is calculated as the ratio of the true-positive rate to the false-positive rate (or sensitivity/[1 – specificity]). For example, a test with a sensitivity of 0.90 and a specificity of 0.90 has a likelihood ratio of 0.90/(1 – 0.90), or 9. 
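A minimal Python sketch of this calculation, reproducing the worked example above (pretest probability 0.50, sensitivity and specificity both 0.90):

```python
def posttest_positive(pretest, sensitivity, specificity):
    """Bayes' rule for the probability of disease after a positive test."""
    true_pos = pretest * sensitivity               # P(D) * P(+|D)
    false_pos = (1 - pretest) * (1 - specificity)  # P(not D) * P(+|not D)
    return true_pos / (true_pos + false_pos)

def lr_positive(sensitivity, specificity):
    """Likelihood ratio positive: sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

print(round(posttest_positive(0.50, 0.90, 0.90), 2))   # 0.9
print(round(lr_positive(0.90, 0.90), 1))               # 9.0
```

The same function can be reused with any pretest probability, which is how the prevalence-dependence of predictive values discussed above can be explored numerically.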
Thus, for this hypothetical test, a "positive" result is nine times more likely in a patient with the disease than in a patient without it. Most tests in medicine have likelihood ratios for a positive result between 1.5 and 20. Higher values are associated with tests that more substantially increase the posttest likelihood of disease. A very high likelihood ratio positive (exceeding 10) usually implies high specificity, so a positive high-specificity test helps "rule in" disease. If sensitivity is excellent but specificity is less so, the likelihood ratio will be reduced substantially (e.g., with a 90% sensitivity but a 55% specificity, the likelihood ratio is 2.0). For a negative test, the corresponding likelihood ratio negative is the ratio of the false-negative rate to the true-negative rate (or [1 – sensitivity]/specificity). Lower likelihood ratio values more substantially lower the posttest likelihood of disease. A very low likelihood ratio negative (falling below 0.10) usually implies high sensitivity, so a negative high-sensitivity test helps "rule out" disease. The hypothetical test considered above with a sensitivity of 0.9 and a specificity of 0.9 would have a likelihood ratio for a negative test result of (1 – 0.9)/0.9, or 0.11, meaning that a negative result is about one-tenth as likely in patients with disease as in those without disease (or 10 times more likely in those without disease than in those with disease). Consider two tests commonly used in the diagnosis of CAD: an exercise treadmill test and an exercise single-photon emission CT (SPECT) myocardial perfusion imaging test (Chap. 270e). Meta-analysis has shown that a positive treadmill ST-segment response has an average sensitivity of 66% and an average specificity of 84%, yielding a likelihood ratio of 4.1 (0.66/[1 – 0.84]) (consistent with small discriminatory ability because it falls between 2 and 5). 
For a patient with a 10% pretest probability of CAD, the posttest probability of disease after a positive result rises to only about 30%. If a patient with a pretest probability of CAD of 80% has a positive test result, the posttest probability of disease is about 95%. In contrast, the exercise SPECT myocardial perfusion test is more accurate for CAD. For simplicity, assume that the finding of a reversible exercise-induced perfusion defect has both a sensitivity and a specificity of 90%, yielding a likelihood ratio for a positive test of 9.0 (0.90/[1 – 0.90]) (consistent with moderate discriminatory ability because it falls between 5 and 10). For the same 10% pretest probability patient, a positive test raises the probability of CAD to 50% (Fig. 3-2). However, despite the differences in posttest probabilities between these two tests (30% versus 50%), the more accurate test may not improve diagnostic likelihood enough to change patient management (e.g., the decision to refer for cardiac catheterization) because the more accurate test has only moved the physician from being fairly certain that the patient did not have CAD to a 50:50 chance of disease. In a patient with a pretest probability of 80%, the exercise SPECT test raises the posttest probability to 97% (compared with 95% for the exercise treadmill). Again, the more accurate test does not provide enough improvement in posttest confidence to alter management, and neither test has improved much on what was known from clinical data alone. In general, positive results with an accurate test (e.g., likelihood ratio positive >10) when the pretest probability is low (e.g., 20%) do not move the posttest probability to a range high enough to rule in disease (e.g., 80%). In screening situations, pretest probabilities are often particularly low because patients are asymptomatic. In such cases, specificity becomes particularly important. 
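The treadmill and SPECT figures above can be reproduced with the odds form of Bayes' rule (posttest odds = pretest odds × likelihood ratio). Small rounding differences from the text's quoted percentages are expected.

```python
def posttest_prob(pretest_prob, likelihood_ratio):
    """Posttest probability via odds: posttest odds = pretest odds x LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

for pretest in (0.10, 0.80):
    for name, lr in (("treadmill", 4.1), ("SPECT", 9.0)):
        print(f"pretest {pretest:.0%}, {name} positive: "
              f"posttest {posttest_prob(pretest, lr):.0%}")
```

Running this shows the pattern discussed in the text: at a 10% pretest probability, neither test rules disease in, and at 80%, both tests leave the clinician roughly where clinical assessment alone had already placed the probability.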
For example, in screening first-time female blood donors without risk factors for HIV, a positive test raised the likelihood of HIV to only 67% despite a specificity of 99.995% because the prevalence was 0.01%. Conversely, with a high pretest probability, a negative test may not rule out disease adequately if it is not sufficiently sensitive. Thus, the largest change in diagnostic likelihood following a test result occurs when the clinician is most uncertain (i.e., pretest probability between 30% and 70%). For example, if a patient has a pretest probability for CAD of 50%, a positive exercise treadmill test will move the posttest probability to 80% and a positive exercise SPECT perfusion test will move it to 90% (Fig. 3-2). As presented above, Bayes' rule employs a number of important simplifications that should be considered. 

FIGURE 3-2 Nomogram version of Bayes' rule used to predict the posttest probability of disease (right-hand scale) using the pretest probability of disease (left-hand scale) and the likelihood ratio for a positive test (middle scale). See text for information on calculation of likelihood ratios. To use, place a straight edge connecting the pretest probability and the likelihood ratio and read off the posttest probability. The right-hand part of the figure illustrates the value of a positive exercise treadmill test (likelihood ratio 4, green line) and a positive exercise thallium single-photon emission computed tomography perfusion study (likelihood ratio 9, broken yellow line) in a patient with a pretest probability of coronary artery disease of 50%. (Adapted from Centre for Evidence-Based Medicine: Likelihood ratios. Available at http://www.cebm.net/index.aspx?o=1043.) 
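The blood-donor example shows how prevalence dominates the posttest probability at screening levels of disease. This sketch assumes a sensitivity of 100% for simplicity, since the text does not state the test's sensitivity.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Prevalence 0.01%, specificity 99.995%, sensitivity assumed 100%
print(round(positive_predictive_value(0.0001, 1.0, 0.99995), 2))   # 0.67
```

Even a nearly perfect specificity produces false positives at a rate comparable to the true-positive rate when only 1 in 10,000 screened individuals is infected, which is why the posttest probability stalls at about two-thirds.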
First, few tests have only positive or negative results, and many tests provide multiple outcomes (e.g., ST-segment depression and exercise duration with exercise testing). Although Bayes’ rule can be adapted to this more detailed test result format, it is computationally more complex to do so. Similarly, when multiple tests are performed, the posttest probability may be used as the pretest probability to interpret the second test. However, this simplification assumes conditional independence—that is, that the results of the first test do not affect the likelihood of the second test result—and this is often not true. Finally, it has long been asserted that sensitivity and specificity are prevalence-independent parameters of test accuracy, and many texts still make this statement. This statistically useful assumption, however, is clinically simplistic. A treadmill exercise test, for example, has a sensitivity in a population of patients with one-vessel CAD of around 30%, whereas its sensitivity in patients with severe three-vessel CAD approaches 80%. Thus, the best estimate of sensitivity to use in a particular decision may vary, depending on the severity of disease in the local population. A hospitalized, symptomatic, or referral population typically has a higher prevalence of disease and, in particular, a higher prevalence of more advanced disease than does an outpatient population. Consequently, test sensitivity will likely be higher in hospitalized patients, and test specificity higher in outpatients. Bayes’ rule, while illustrative as presented above, provides an unrealistically simple solution to most problems a clinician faces. Predictions based on multivariable statistical models, however, can more accurately address these more complex problems by accounting for specific patient characteristics. 
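Chaining two tests by feeding the first test's posttest probability in as the second test's pretest probability can be sketched as follows. Both tests and their operating characteristics are hypothetical, and as noted above, the calculation is valid only if the tests are conditionally independent.

```python
def update(pretest, sensitivity, specificity):
    """Posttest probability after a positive result (Bayes' rule)."""
    tp = pretest * sensitivity
    fp = (1 - pretest) * (1 - specificity)
    return tp / (tp + fp)

# Hypothetical tests A and B, both positive; B treats A's posttest
# probability as its pretest probability -- valid only under conditional
# independence, which is often not true in practice.
after_a = update(0.20, 0.90, 0.90)
after_b = update(after_a, 0.80, 0.95)
print(round(after_a, 2), round(after_b, 2))   # 0.69 0.97
```

When the tests are correlated (e.g., two imaging tests that fail on the same anatomy), this sequential update overstates the certainty gained from the second test.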
In particular, these models explicitly account for multiple possibly overlapping pieces of patient-specific information and assign a relative weight to each on the basis of its unique contribution to the prediction in question. For example, a logistic regression model to predict the probability of CAD considers all the relevant independent factors from the clinical examination and diagnostic testing and their significance instead of the limited data that clinicians can manage in their heads or with Bayes’ rule. However, despite this strength, prediction models are usually too complex computationally to use without a calculator or computer (although this limitation may be overcome once medicine is practiced from a fully computerized platform). To date, only a handful of prediction models have been validated properly (for example, Wells criteria for pulmonary embolism) (Table 3-2). The importance of independent validation in a population separate from the one used to develop the model cannot be overstated. An unvalidated prediction model should be viewed with the skepticism appropriate for any new drug or medical device that has not had rigorous clinical trial testing. When statistical models have been compared directly with expert clinicians, they have been found to be more consistent, as would be expected, but not significantly more accurate. Their biggest promise, then, may be in helping less-experienced clinicians identify critical discriminating patient characteristics and become more accurate in their predictions. Over the last 40 years, many attempts have been made to develop computer systems to aid clinical decision-making and patient management. Conceptually attractive because computers offer ready access to the vast information available to today’s physicians, they may also support management decisions by making accurate predictions of outcome, simulating the whole decision process, or providing algorithmic guidance. 
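The way a multivariable logistic model assigns each predictor a weight and combines them into one probability can be sketched as follows. The coefficients and predictors are hypothetical, not from any validated CAD model.

```python
import math

def predicted_probability(intercept, coefficients, features):
    """Logistic model: probability = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    logit = intercept + sum(b * x for b, x in zip(coefficients, features))
    return 1 / (1 + math.exp(-logit))

# Hypothetical predictors: age in decades over 40, typical angina (0/1),
# male sex (0/1); coefficients are invented for illustration only.
p = predicted_probability(intercept=-3.0,
                          coefficients=[0.5, 1.8, 0.9],
                          features=[2, 1, 1])
print(round(p, 2))   # 0.67 with these made-up coefficients
```

The clinically hard parts, which this sketch omits entirely, are estimating the coefficients from a large development cohort and validating the model in an independent population, as the text emphasizes.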
Computer-based predictions using Bayesian or statistical regression models inform a clinical decision but do not actually reach a "conclusion" or "recommendation." Artificial intelligence systems attempt to simulate or replace human reasoning with a computer-based analogue. To date, such approaches have achieved only limited success. Reminder or protocol-directed systems do not make predictions but use existing algorithms, such as guidelines, to guide clinical practice. In general, however, decision support systems have had little impact on practice. Reminder systems, although not yet in widespread use, have shown the most promise, particularly in correcting drug dosing and promoting adherence to guidelines. Checklists, as used by pilots for example, have garnered recent support as an approach to avoid or reduce errors. Compared with the decision support methods discussed above, decision analysis represents a prescriptive approach to decision making in the face of uncertainty. It is best suited to problems that involve substantial risk, considerable uncertainty, trade-offs among outcomes that give patient preferences an important role, or an absence of direct evidence owing to idiosyncratic features of the problem. For a public health example, Fig. 3-3 displays a decision tree to evaluate strategies for screening for HIV infection. Infected individuals who are unaware of their illness cause up to 20,000 new cases of HIV infection annually in the United States, and about 40% of HIV-positive patients progress to AIDS within a year of the initial diagnosis because of delayed diagnosis. Early identification offers the opportunity to prevent progression to AIDS through CD4 count and viral load monitoring and combination antiretroviral therapy and to reduce spread by reducing risky injection or sexual behaviors. In 2003, the Centers for Disease Control and Prevention (CDC) proposed that routine universal HIV testing should be incorporated into standard adult medical care and, in part, cited a decision analysis model comparing HIV screening with usual care. 
Assuming a 1% prevalence of unidentified HIV infection in the population, routine screening of a cohort of 43-year-old men and women increased life expectancy by 5.5 days and lifetime costs by $194 per person screened, yielding an incremental cost-effectiveness ratio for screening versus usual care of $15,078 per quality-adjusted life-year (the additional cost to society to increase population health by 1 year of perfect health). Factors that influenced the results included assumptions about the effectiveness of behavior modification on subsequent sexual behavior, the benefits of early therapy for HIV infection, and the prevalence and incidence of HIV infection in the population targeted. This model, which required over 75 separate data points, provided novel insights into a public health problem in the absence of a randomized clinical trial and helped weigh the pros and cons of such a health policy recommendation. Although such models have been developed for selected clinical problems, their benefit and application to individual real-time clinical management have yet to be demonstrated. High-quality medical care begins with accurate diagnosis. Recently, diagnostic errors have been re-envisioned: the old view was that they were caused by a lack of sufficient skill of an individual clinician; the new view is that they represent a quality of care patient-safety problem traceable to breakdowns in the health care system. Whether this conceptual shift will lead to new ways to improve diagnosis is uncertain. An annual rate of diagnostic errors of 10–15%, possibly leading to 40,000 deaths in the United States, is commonly cited, but these figures are imprecise. Solutions to the "diagnostic errors as a system of care problem" have focused on system-level approaches, such as decision support and other tools integrated into electronic medical records. 
The use of checklists has been proposed as a means of reducing some of the cognitive errors discussed earlier in the chapter, such as premature closure. Although checklists have been shown to be useful in certain medical contexts, such as operating rooms and intensive care units, their value in preventing diagnostic errors that lead to patient adverse events remains to be shown. Clinical medicine is defined traditionally as a practice combining medical knowledge (including scientific evidence), intuition, and judgment in the care of patients (Chap. 1). EBM updates this construct by placing much greater emphasis on the processes by which clinicians gain knowledge of the most up-to-date and relevant clinical research to determine for themselves whether medical interventions alter the disease course and improve the length or quality of life. 

FIGURE 3-3 Basic structure of decision model used to evaluate strategies for screening for HIV in the general population. HAART, highly active antiretroviral therapy. (Provided courtesy of G. Sanders, with permission.) 

The meaning of practicing EBM becomes clearer through an examination of its four key steps: 
1. Formulating the management question to be answered 
2. Searching the literature and online databases for applicable research data 
3. Appraising the evidence gathered with regard to its validity and relevance 
4. Integrating this appraisal with knowledge about the unique aspects of the patient (including the patient's preferences about the possible outcomes) 
The process of searching the world's research literature and appraising the quality and relevance of studies thus identified can be quite time-consuming and requires skills and training that most clinicians do not possess. 
Thus, identifying recent systematic overviews of the problem in question (Table 3-3) may offer the best starting point for most EBM searches. Generally, the EBM tools listed in Table 3-3 provide access to research information in one of two forms. The first, primary research reports, is the original peer-reviewed research work that is published in medical journals and accessible through MEDLINE in abstract form. However, without training in using MEDLINE, quickly and efficiently locating reports that are on point in a huge sea of irrelevant or unhelpful citations may be difficult, and important studies could also be missed. The second form, systematic reviews, is the highest level of evidence in the hierarchy because it comprehensively summarizes the available evidence on a particular topic up to a certain date. To avoid the potential biases in review articles, predefined explicit search strategies and inclusion and exclusion criteria are used to find all of the relevant scientific research and grade its quality. The prototype for this kind of resource is the Cochrane Database of Systematic Reviews. When appropriate, a meta-analysis quantitatively summarizes the systematic review findings. The next two sections explicate the major types of clinical research reports available in the literature and the process of aggregating those data into meta-analyses. 

SOURCES OF EVIDENCE: CLINICAL TRIALS AND REGISTRIES 

The notion of learning from observation of patients is as old as medicine itself. Over the last 50 years, physicians' understanding of how best to turn raw observation into useful evidence has evolved considerably. Case reports, personal anecdotal experience, and small single-center case series are now recognized as having severe limitations in validity and generalizability, and although they may generate hypotheses or be the first reports of adverse events, they have no role in formulating modern standards of practice. 
The major tools used to develop reliable evidence consist of the randomized clinical trial and the large observational registry. A registry or database typically is focused on a disease or syndrome (e.g., cancer, CAD, heart failure), a clinical procedure (e.g., bone marrow transplantation, coronary revascularization), or an administrative process (e.g., claims data used for billing and reimbursement). By definition, in observational data, the investigator does not control patient care. Carefully collected prospective observational data, however, can achieve a level of evidence quality approaching that of major clinical trial data. At the other end of the spectrum, data collected retrospectively (e.g., by chart review or from claims files) are limited in form and content to what previous observers recorded, which may not include the specific research data being sought. Advantages of observational data include coverage of a broader population than is typically represented in clinical trials, whose restrictive inclusion and exclusion criteria exclude many patients encountered in practice. In addition, observational data provide primary evidence for research questions when a randomized trial cannot be performed. For example, it would be difficult to randomize patients to test diagnostic or therapeutic strategies that are unproven but widely accepted in practice, and it would be unethical to randomize based on sex, racial/ethnic group, socioeconomic status, or country of residence or to randomize patients to a potentially harmful intervention, such as smoking or deliberately overeating to develop obesity. A well-done prospective observational study of a particular management strategy differs from a well-done randomized clinical trial most importantly in its lack of protection from treatment selection bias. 
The use of observational data to compare diagnostic or therapeutic strategies assumes that sufficient uncertainty exists in clinical practice to ensure that similar patients will be managed differently by different physicians. In short, the analysis assumes that a sufficient element of randomness (in the sense of disorder rather than in the formal statistical sense) exists in clinical management. In such cases, statistical models attempt to adjust for important imbalances to "level the playing field" so that a fair comparison among treatment options can be made. When management is clearly not random (e.g., all eligible left main CAD patients are referred for coronary bypass surgery), the problem may be too confounded (biased) for statistical correction, and observational data may not provide reliable evidence. In general, the use of concurrent controls is vastly preferable to that of historical controls. For example, comparison of current surgical management of left main CAD with left main CAD patients treated medically during the 1970s (the last time these patients were routinely treated with medicine alone) would be extremely misleading because "medical therapy" has substantially improved in the interim. 

TABLE 3-3 (excerpt) MEDLINE: National Library of Medicine database with citations back to 1966. www.nlm.nih.gov. Free via Internet. 

Randomized controlled clinical trials include the careful prospective design features of the best observational data studies but also include the use of random allocation of treatment. This design provides the best protection against measured and unmeasured confounding due to treatment selection bias (a major aspect of internal validity). However, the randomized trial may not have good external validity (generalizability) if the process of recruitment into the trial resulted in the exclusion of many patients seen in clinical practice. 
Consumers of medical evidence need to be aware that randomized trials vary widely in their quality and applicability to practice. The process of designing such a trial often involves many compromises. For example, trials designed to gain U.S. Food and Drug Administration (FDA) approval for an investigational drug or device must fulfill regulatory requirements that may result in a trial population and design that differs substantially from what practicing clinicians would find most useful. The Greek prefix meta signifies something at a later or higher stage of development. Meta-analysis is research that combines and summarizes the available evidence quantitatively. Although occasionally used to examine nonrandomized studies, meta-analysis is used most typically to summarize all randomized trials examining a particular therapy. Ideally, unpublished trials should be identified and included to avoid publication bias (i.e., missing "negative" trials that may not be published). Furthermore, the best meta-analyses obtain and analyze individual patient-level data from all trials rather than working only with the summary data in published reports of each trial. Nonetheless, not all published meta-analyses yield reliable evidence for a particular problem, so their methodology should be scrutinized carefully to ensure proper study design and analysis. The results of a well-done meta-analysis are likely to be most persuasive if they include at least several large-scale, properly performed randomized trials. Meta-analysis can especially help detect benefits when individual trials are inadequately powered (e.g., the benefits of streptokinase thrombolytic therapy in acute MI demonstrated by ISIS-2 in 1988 were evident by the early 1970s through meta-analysis). However, in cases in which the available trials are small or poorly done, meta-analysis should not be viewed as a remedy for the deficiency in primary trial data. 
Meta-analyses typically focus on summary measures of relative treatment benefit, such as odds ratios or relative risks. Clinicians also should examine what absolute risk reduction (ARR) can be expected from the therapy. A useful summary metric of absolute treatment benefit is the number needed to treat (NNT) to prevent one adverse outcome event (e.g., death, stroke). NNT is simply 1/ARR. For example, if a hypothetical therapy reduced mortality rates over a 5-year follow-up by 33% (the relative treatment benefit) from 12% (control arm) to 8% (treatment arm), the ARR would be 12% – 8% = 4%, and the NNT would be 1/0.04, or 25. Thus, it would be necessary to treat 25 patients for 5 years to prevent 1 death. If the hypothetical treatment was applied to a lower-risk population, say, with a 6% 5-year mortality, the 33% relative treatment benefit would reduce absolute mortality by 2% (from 6% to 4%), and the NNT for the same therapy in this lower-risk group of patients would be 50. Although not always made explicit, comparisons of NNT estimates from different studies should account for the duration of follow-up used to create each estimate. According to the 1990 Institute of Medicine definition, clinical practice guidelines are “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.” This definition emphasizes several crucial features of modern guideline development. First, guidelines are created by using the tools of EBM. In particular, the core of the development process is a systematic literature search followed by a review of the relevant peer-reviewed literature. Second, guidelines usually are focused on a clinical disorder (e.g., adult diabetes, stable angina pectoris) or a health care intervention (e.g., cancer screening). 
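The ARR and NNT arithmetic from the hypothetical therapy discussed above (a 33% relative risk reduction applied to 12% and 6% baseline 5-year mortality) can be sketched as:

```python
def number_needed_to_treat(control_risk, treated_risk):
    """NNT = 1 / absolute risk reduction (risks over the same follow-up)."""
    arr = control_risk - treated_risk   # absolute risk reduction
    return 1 / arr

# Hypothetical therapy with a 33% relative risk reduction over 5 years
print(round(number_needed_to_treat(0.12, 0.08)))   # 25 in the higher-risk group
print(round(number_needed_to_treat(0.06, 0.04)))   # 50 in the lower-risk group
```

The identical relative benefit yields very different NNTs depending on baseline risk, which is exactly why the text cautions against comparing NNTs without checking the populations and follow-up durations behind them.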
Third, the primary objective of guidelines is to improve the quality of medical care by identifying care practices that should be routinely implemented, based on high-quality evidence and high benefit-to-harm ratios for the interventions. Guidelines are intended to “assist” decision-making, not to define explicitly what decisions should be made in a particular situation, in part because evidence alone is never sufficient for clinical decision-making (e.g., deciding whether to intubate and administer antibiotics for pneumonia in a terminally ill individual, in an individual with dementia, or in an otherwise healthy 30-year-old mother). Guidelines are narrative documents constructed by expert panels whose composition often is determined by interested professional organizations. These panels vary in the degree to which they represent all relevant stakeholders. The guideline documents consist of a series of specific management recommendations, a summary indication of the quantity and quality of evidence supporting each recommendation, an assessment of the benefit-to-harm ratio for the recommendation, and a narrative discussion of the recommendations. Many recommendations simply reflect the expert consensus of the guideline panel because literature-based evidence is absent. The final step in guideline construction is peer review, followed by a final revision in response to the critiques provided. To improve the reliability and trustworthiness of guidelines, the Institute of Medicine has made methodologic recommendations for guideline development. Guidelines are closely tied to the process of quality improvement in medicine through their identification of evidence-based best practices. Such practices can be used as quality indicators. Examples include the proportion of acute MI patients who receive aspirin upon admission to a hospital and the proportion of heart failure patients with a depressed ejection fraction treated with an ACE inhibitor. 
In this era of EBM, it is tempting to think that all the difficult decisions practitioners face have been or soon will be solved and digested into practice guidelines and computerized reminders. However, EBM provides practitioners with an ideal rather than a finished set of tools with which to manage patients. Moreover, even with such evidence, it is always worth remembering that the response to therapy of the "average" patient represented by the summary clinical trial outcomes may not be what can be expected for the specific patient sitting in front of a physician in the clinic or hospital. In addition, meta-analyses cannot generate evidence when there are no adequate randomized trials, and most of what clinicians confront in practice will never be thoroughly tested in a randomized trial. For the foreseeable future, excellent clinical reasoning skills, experience supplemented by well-designed quantitative tools, and a keen appreciation for the role of individual patient preferences in their health care will continue to be of paramount importance in the practice of clinical medicine. 

Chapter 4 Screening and Prevention of Disease 
Katrina Armstrong, Gary J. Martin 

A primary goal of health care is to prevent disease or detect it early enough that intervention will be more effective. Tremendous progress has been made toward this goal over the last 50 years. Screening tests are available for many common diseases and encompass biochemical (e.g., cholesterol, glucose), physiologic (e.g., blood pressure, growth curves), radiologic (e.g., mammogram, bone densitometry), and cytologic (e.g., Pap smear) approaches. Effective preventive interventions have resulted in dramatic declines in mortality from many diseases, particularly infections. Preventive interventions include counseling about risk behaviors, vaccinations, medications, and, in some relatively uncommon settings, surgery. 
Preventive services (including screening tests, preventive interventions, and counseling) are different from other medical interventions because they are proactively administered to healthy individuals instead of in response to a symptom, sign, or diagnosis. Thus, the decision to recommend a screening test or preventive intervention requires a particularly high bar of evidence that testing and intervention are both practical and effective. Because interventions offered to an entire healthy population must be extremely low risk to have an acceptable benefit-to-harm ratio, the ability to target individuals who are more likely to develop disease could enable the application of a wider set of potential approaches and increase efficiency. Currently, there are many types of data that can predict disease incidence in an asymptomatic individual. Genomic data have received the most attention to date, at least in part because mutations in high-penetrance genes have clear implications for preventive care (Chap. 84). Women with mutations in either BRCA1 or BRCA2, the two major breast cancer susceptibility genes identified to date, have a markedly increased risk (5- to 20-fold) of breast and ovarian cancer. Screening and prevention recommendations include prophylactic oophorectomy and breast magnetic resonance imaging (MRI), both of which are considered to incur too much harm for women at average cancer risk. Some women opt for prophylactic mastectomy to dramatically reduce their breast cancer risk. Although the proportion of common disease explained by high-penetrance genes appears to be relatively small (5–10% of most diseases), mutations in rare, moderate-penetrance genes and variants in low-penetrance genes also contribute to the prediction of disease risk. The advent of affordable whole exome/whole genome sequencing is likely to speed the dissemination of these tests into clinical practice and may transform the delivery of preventive care. 
Other forms of “omic” data also have the potential to provide important predictive information, including proteomics and metabolomics. These fields are at earlier stages of development and have yet to move into clinical practice. Imaging and other clinical data may also be integrated into a risk-stratified paradigm as evidence grows about the predictive ability of these data and the feasibility of their collection. Of course, all of these data may also be helpful in predicting the risk of harms from screening or prevention, such as the risk of a false-positive mammogram. To the degree that this information can be incorporated into personalized screening and prevention strategies, it could also improve delivery and efficiency. In addition to advances in risk prediction, there are several other factors that are likely to promote the importance of screening and prevention in the near term. New imaging modalities are being developed that promise to detect changes at the cellular and subcellular levels, greatly increasing the probability that early detection improves outcomes. The rapidly growing understanding of the biologic pathways underlying initiation and progression of many common diseases has the potential to transform the development of preventive interventions, including chemoprevention. Furthermore, screening and prevention offer the promise of both improving health and sparing the costs of disease treatment, an issue that has gained national attention with the continued growth in health care costs. This chapter will review the basic principles of screening and prevention in the primary care setting. Recommendations for specific disorders such as cardiovascular disease, diabetes, and cancer are provided in the chapters dedicated to those topics. The basic principles of screening populations for disease were published by the World Health Organization in 1968 (Table 4-1). 
In general, screening is most effective when applied to relatively common disorders that carry a large disease burden (Table 4-2). The five leading causes of mortality in the United States are heart diseases, malignant neoplasms, accidents, cerebrovascular diseases, and chronic obstructive pulmonary disease. Thus, many screening strategies are targeted at these conditions. From a global health perspective, these conditions are priorities, but malaria, malnutrition, AIDS, tuberculosis, and violence also carry a heavy disease burden (Chap. 2).

Table 4-1 Principles of Screening
The condition should be an important health problem.
There should be a treatment for the condition.
Facilities for diagnosis and treatment should be available.
There should be a latent stage of the disease.
There should be a test or examination for the condition.
The test should be acceptable to the population.
The natural history of the disease should be adequately understood.
There should be an agreed policy on whom to treat.
The cost of finding a case should be balanced in relation to overall medical expenditure.

Meeting these criteria can be challenging for some common diseases. For example, although Alzheimer’s disease is the sixth leading cause of death in the United States, there are no curative treatments and no evidence that early treatment improves outcomes. Lack of facilities for diagnosis and treatment is a particular challenge for developing countries and may change screening strategies, including the development of “see and treat” approaches such as those currently used for cervical cancer screening in some countries. A long latent or preclinical phase where early treatment increases the chance of cure is a hallmark of many cancers; for example, polypectomy prevents progression to colon cancer. Similarly, early identification of hypertension or hyperlipidemia allows therapeutic interventions that reduce the long-term risk of cardiovascular or cerebrovascular events. 
In contrast, lung cancer screening has historically proven more challenging because most tumors are not curable by the time they can be detected on a chest x-ray. However, the length of the preclinical phase also depends on the level of resolution of the screening test, and this situation changed with the development of chest computed tomography (CT). Low-dose chest CT scanning can detect tumors earlier and was recently demonstrated to reduce lung cancer mortality by 20% in individuals who had at least a 30-pack-year history of smoking. The short interval between the ability to detect disease on a screening test and the development of incurable disease also contributes to the limited effectiveness of mammography screening in reducing breast cancer mortality among premenopausal women. Similarly, the early detection of prostate cancer may not lead to a difference in the mortality rate because the disease is often indolent and competing morbidities, such as coronary artery disease, may ultimately cause mortality (Chap. 100). This uncertainty about the natural history is also reflected in the controversy about treatment of prostate cancer, further contributing to the challenge of screening in this disease. Finally, screening programs can incur significant economic costs that must be considered in the context of the available resources and alternative strategies for improving health outcomes. Because screening and preventive interventions are recommended to asymptomatic individuals, they are held to a high standard for demonstrating a favorable risk-benefit ratio before implementation. In general, the principles of evidence-based medicine apply to demonstrating the efficacy of screening tests and preventive interventions, where randomized controlled trials (RCTs) with mortality outcomes are the gold standard. 
However, because RCTs are often not feasible, observational studies, such as case-control designs, have been used to assess the effectiveness of some interventions such as colorectal cancer screening. For some strategies, such as cervical cancer screening, the only data available are ecologic data demonstrating dramatic declines in mortality. Irrespective of the study design used to assess the effectiveness of screening, it is critical that disease incidence or mortality is the primary endpoint rather than length of disease survival. This is important because lead time bias and length time bias can create the appearance of an improvement in disease survival from a screening test when there is no actual effect. Lead time bias occurs because screening identifies a case before it would have presented clinically, thereby creating the perception that a patient lived longer after diagnosis simply by moving the date of diagnosis earlier rather than the date of death later. Length time bias occurs because screening is more likely to identify slowly progressive disease than rapidly progressive disease. Thus, within a fixed period of time, a screened population will have a greater proportion of these slowly progressive cases and will appear to have better disease survival than an unscreened population. A variety of endpoints are used to assess the potential gain from screening and preventive interventions.

1. The absolute and relative impact of screening on disease incidence or mortality. The absolute difference in disease incidence or mortality between a screened and nonscreened group allows comparison of the size of the benefit across preventive services. A meta-analysis of Swedish mammography trials (ages 40–70) found that ~1.2 fewer women per 1000 would die from breast cancer if they were screened over a 12-year period. 
By comparison, ~3 lives per 1000 would be saved from colon cancer in a population (ages 50–75) screened with annual fecal occult blood testing (FOBT) over a 13-year period. Based on this analysis, colon cancer screening may actually save more women’s lives than does mammography. However, the relative impact of FOBT (30% reduction in colon cancer death) is similar to the relative impact of mammography (14–32% reduction in breast cancer death), emphasizing the importance of both relative and absolute comparisons.

2. The number of subjects screened to prevent disease or death in one individual. The inverse of the absolute difference in mortality is the number of subjects who would need to be screened or receive a preventive intervention to prevent one death. For example, 731 women ages 65–69 would need to be screened by dual-energy x-ray absorptiometry (DEXA) (and treated appropriately) to prevent one hip fracture from osteoporosis.

3. Increase in average life expectancy for a population. Predicted increases in life expectancy for various screening and preventive interventions are listed in Table 4-3. It should be noted, however, that the increase in life expectancy is an average that applies to a population, not to an individual. In reality, the vast majority of the population does not derive any benefit from a screening test or preventive intervention. A small subset of patients, however, will benefit greatly. For example, Pap smears do not benefit the 98% of women who never develop cancer of the cervix. However, for the 2% who would have developed cervical cancer, Pap smears may add as much as 25 years to their lives. Some studies suggest that a 1-month gain of life expectancy is a reasonable goal for a population-based screening or prevention strategy.

Just as with most aspects of medical care, screening and preventive interventions also incur the possibility of adverse outcomes. 
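The arithmetic behind endpoints 1 and 2 is simple inversion: the number needed to screen to prevent one death is the reciprocal of the absolute risk reduction. A quick sketch using the figures quoted above (the helper function is ours, for illustration only):

```python
def number_needed_to_screen(absolute_risk_reduction):
    """Number of people who must be screened (or treated) to prevent
    one death: the inverse of the absolute risk reduction."""
    return 1.0 / absolute_risk_reduction

# Swedish mammography trials: ~1.2 fewer breast cancer deaths per 1000
# women screened over 12 years -> ARR = 0.0012
print(round(number_needed_to_screen(1.2 / 1000)))  # 833 women screened

# Annual FOBT: ~3 fewer colon cancer deaths per 1000 screened over 13 years
print(round(number_needed_to_screen(3 / 1000)))    # 333 people screened
```

Note how a small relative difference in absolute benefit translates into a large difference in the number of people exposed to the test and its downstream harms.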
These adverse outcomes include side effects from preventive medications and vaccinations, false-positive screening tests, overdiagnosis of disease from screening tests, anxiety, radiation exposure from some screening tests, and discomfort from some interventions and screening tests. The risk of side effects from preventive medications is analogous to the use of medications in therapeutic settings and is considered in the Food and Drug Administration (FDA) approval process. Side effects from currently recommended vaccinations are primarily limited to discomfort and minor immune reactions. However, the concern about associations between vaccinations and serious adverse outcomes continues to limit the acceptance of many vaccinations despite the lack of data supporting the causal nature of these associations.

Table 4-3 Predicted Increases in Life Expectancy from Selected Screening and Preventive Interventions
Mammography: women, 40–50 years: 0–5 days
Mammography: women, 50–70 years: 1 month
Pap smears, ages 18–65: 2–3 months
Getting a 35-year-old smoker to quit: 3–5 years
Beginning regular exercise for a 40-year-old man (30 min, 3 times a week): 9 months–2 years

The possibility of a false-positive test occurs with nearly all screening tests, although the definition of what constitutes a false-positive result often varies across settings. For some tests such as screening mammography and screening chest CT, a false-positive result occurs when an abnormality is identified that is not malignant, requiring either a biopsy diagnosis or short-term follow-up. For other tests such as Pap smears, a false-positive result occurs because the test identifies a wide range of potentially premalignant states, only a small percentage of which would ever progress to an invasive cancer. This risk is closely tied to the risk of overdiagnosis in which the screening test identifies disease that would not have presented clinically in the patient’s lifetime. 
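The false-positive burden is driven largely by disease prevalence: even an accurate test applied to a rare condition yields mostly false positives. A minimal sketch using Bayes’ rule (the test characteristics and prevalence below are illustrative assumptions, not figures from the text):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screening result reflects true disease,
    from Bayes' rule applied to test characteristics and prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative: a test with 90% sensitivity and 95% specificity applied
# to a population in which 0.5% actually have the disease
ppv = positive_predictive_value(0.90, 0.95, 0.005)
print(f"{ppv:.1%}")  # 8.3% -- most positive results are false positives
```

This is why the same test can be reasonable in a high-risk group yet generate an unacceptable workup burden when applied to the average-risk population.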
Assessing the degree of overdiagnosis from a screening test is very difficult given the need for long-term follow-up of an unscreened population to determine the true incidence of disease over time. Recent estimates suggest that as much as 15–25% of breast cancers identified by mammography screening and 15–37% of prostate cancers identified by prostate-specific antigen testing may never have presented clinically. Screening tests also have the potential to create unwarranted anxiety, particularly in conjunction with false-positive findings. Although multiple studies have documented increased anxiety through the screening process, there are few data suggesting that this anxiety has long-term adverse consequences, including effects on subsequent screening behavior. Screening tests that involve radiation (e.g., mammography, chest CT) add to the cumulative radiation exposure for the screened individual. The absolute amount of radiation is very small from any of these tests, but the overall impact of repeated exposure from multiple sources is still being determined. Some preventive interventions (e.g., vaccinations) and screening tests (e.g., mammography) may lead to discomfort at the time of administration, but again, there is little evidence of long-term adverse consequences. The decision to implement a population-based screening and prevention strategy requires weighing the benefits and harms, including the economic impact of the strategy. The costs include not only the expense of the intervention but also time away from work, downstream costs from false-positive results or adverse events, and other potential harms. Cost-effectiveness is typically assessed by calculating the cost per year of life saved, with adjustment for the quality-of-life impact of different interventions and disease states (i.e., the quality-adjusted life-year). Typically, strategies that cost <$50,000 to $100,000 per quality-adjusted life-year saved are considered “cost-effective” (Chap. 3). The U.S. 
Preventive Services Task Force (USPSTF) is an independent panel of experts in preventive care that provides evidence-based recommendations for screening and preventive strategies based on an assessment of the benefit-to-harm ratio (Tables 4-4 and 4-5). Because there are multiple advisory organizations providing recommendations for preventive services, the agreement among the organizations varies across the different services. For example, all advisory groups support screening for hyperlipidemia and colorectal cancer, whereas consensus is lower for breast cancer screening among women in their 40s and almost nonexistent for prostate cancer screening. Because the guidelines are only updated periodically, differences across advisory organizations may also reflect the data that were available when the guideline was issued. For example, multiple organizations have recently issued recommendations supporting lung cancer screening among heavy smokers based on the results of the National Lung Screening Trial (NLST) published in 2011, whereas the USPSTF did not review lung cancer screening until 2014.

Table 4-4 Screening Tests Recommended by the U.S. Preventive Services Task Force for Average-Risk Adults
Abbreviations: DEXA, dual-energy x-ray absorptiometry; HCV, hepatitis C virus; HPV, human papillomavirus; PCR, polymerase chain reaction. Source: Adapted from the U.S. Preventive Services Task Force 2013. http://www.uspreventiveservicestaskforce.org/adultrec.htm.

For many screening tests and preventive interventions, the balance of benefits and harms may be uncertain for the average-risk population but more favorable for individuals at higher risk for disease. Although age is the most commonly used risk factor for determining screening and prevention recommendations, the USPSTF also recommends some screening tests in populations with other risk factors for the disease (e.g., syphilis). In addition, being at increased risk for the disease often supports initiating screening at an earlier age than that recommended for the average-risk population. For example, when there is a significant family history of breast or colon cancer, it is prudent to initiate screening 10 years before the age at which the youngest family member was diagnosed with cancer.

Although informed consent is important for all aspects of medical care, shared decision-making may be a particularly important approach to decisions about preventive services when the benefit-to-harm ratio is uncertain for a specific population. For example, many expert groups, including the USPSTF, recommend an individualized discussion about prostate cancer screening, because the decision-making process is complex and relies heavily on personal issues. Some men may decline screening, whereas others may be more willing to accept the risks of an early detection strategy. Recent analysis suggests that many men may be better off not screening for prostate cancer because watchful waiting was the preferred strategy when quality-adjusted life-years were considered. Another example of shared decision-making involves the choice of techniques for colon cancer screening (Chap. 100). In controlled studies, the use of annual FOBT reduces colon cancer deaths by 15–30%. Flexible sigmoidoscopy reduces colon cancer deaths by ~60%. Colonoscopy offers the same benefit as or greater benefit than flexible sigmoidoscopy, but its use incurs additional costs and risks. These screening procedures have not been compared directly in the same population, but the estimated cost to society is similar: $10,000–25,000 per year of life saved. 
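Cost-effectiveness comparisons of this kind reduce to simple division. A minimal sketch (the program cost and QALY total below are hypothetical, chosen only to land inside the commonly cited threshold):

```python
def cost_per_qaly(incremental_cost, qalys_gained):
    """Incremental cost-effectiveness ratio: dollars per
    quality-adjusted life-year (QALY) gained."""
    return incremental_cost / qalys_gained

# Hypothetical program: screening 1000 people costs an extra $600,000
# (tests, workup of false positives, time away from work) and yields an
# estimated 15 quality-adjusted life-years across the population.
icer = cost_per_qaly(600_000, 15)
print(f"${icer:,.0f} per QALY")  # $40,000 per QALY, under the common
                                 # $50,000-$100,000 threshold
```

The QALY denominator is what lets strategies for different diseases, with different mixes of mortality and quality-of-life benefit, be compared on one scale.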
Thus, although one patient may prefer the ease of preparation, less time disruption, and the lower risk of flexible sigmoidoscopy, others may prefer the sedation and thoroughness of colonoscopy. In considering the impact of preventive services, it is important to recognize that tobacco and alcohol use, diet, and exercise constitute the vast majority of factors that influence preventable deaths in developed countries. Perhaps the single greatest preventive health care measure is to help patients quit smoking (Chap. 470). However, efforts in these areas frequently involve behavior changes (e.g., weight loss, exercise, seat belts) or the management of addictive conditions (e.g., tobacco and alcohol use) that are often recalcitrant to intervention. Although these are challenging problems, evidence strongly supports the role of counseling by health care providers (Table 4-6) in effecting health behavior change. Educational campaigns, public policy changes, and community-based interventions have also proven to be important parts of a strategy for addressing these factors in some settings. Although the USPSTF found conclusive evidence to recommend only a relatively small set of counseling activities, counseling in areas such as physical activity and injury prevention (including seat belts and bicycle and motorcycle helmets) has become a routine part of primary care practice. The implementation of disease prevention and screening strategies in practice is challenging. A number of techniques can assist physicians with the delivery of these services. An appropriately configured electronic health record can provide reminder systems that make it easier for physicians to track and meet guidelines. Some systems give patients secure access to their medical records, providing an additional means to enhance adherence to routine screening. 
Systems that provide nurses and other staff with standing orders are effective for smoking prevention and immunizations. The Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention have developed flow sheets and electronic tools as part of their “Put Prevention into Practice” program (http://www.uspreventiveservicestaskforce.org/tools.htm). Many of these tools use age categories to help guide implementation. Age-specific recommendations for screening and counseling are summarized in Table 4-7. Many patients see a physician for ongoing care of chronic illnesses, and this visit provides an opportunity to include a “measure of prevention” for other health problems. For example, a patient seen for management of hypertension or diabetes can have breast cancer screening incorporated into one visit and a discussion about colon cancer screening at the next visit. Other patients may respond more favorably to a clearly defined visit that addresses all relevant screening and prevention interventions. Because of age or comorbidities, it may be appropriate with some patients to abandon certain screening and prevention activities, although there are fewer data about when to “sunset” these services. For many screening tests, the benefit of screening does not accrue until 5 to 10 years of follow-up, and there are generally few data to support continuing screening for most diseases past age 75. In addition, for patients with advanced diseases and limited life expectancy, there is considerable benefit from shifting the focus from screening procedures to the conditions and interventions more likely to affect quality and length of life.

Chapter 5 Principles of Clinical Pharmacology
Dan M. Roden

Drugs are the cornerstone of modern therapeutics. 
Nevertheless, it is well recognized among physicians and in the lay community that the outcome of drug therapy varies widely among individuals. While this variability has been perceived as an unpredictable, and therefore inevitable, accompaniment of drug therapy, this is not the case. The goal of this chapter is to describe the principles of clinical pharmacology that can be used for the safe and optimal use of available and new drugs. Drugs interact with specific target molecules to produce their beneficial and adverse effects. The chain of events between administration of a drug and production of these effects in the body can be divided into two components, both of which contribute to variability in drug actions. The first component comprises the processes that determine drug delivery to, and removal from, molecular targets. The resulting description of the relationship between drug concentration and time is termed pharmacokinetics. The second component of variability in drug action comprises the processes that determine variability in drug actions despite equivalent drug delivery to effector drug sites. This description of the relationship between drug concentration and effect is termed pharmacodynamics. As discussed further below, pharmacodynamic variability can arise as a result of variability in function of the target molecule itself or of variability in the broad biologic context in which the drug-target interaction occurs to achieve drug effects. 
Two important goals of the discipline of clinical pharmacology are (1) to provide a description of conditions under which drug actions vary among human subjects; and (2) to determine mechanisms underlying this variability, with the goal of improving therapy with available drugs as well as pointing to new drug mechanisms that may be effective in the treatment of human disease. The first steps in the discipline were empirical descriptions of the influence of disease on drug actions and of individuals or families with unusual sensitivities to adverse drug effects. These important descriptive findings are now being replaced by an understanding of the molecular mechanisms underlying variability in drug actions. Thus, the effects of disease, drug coadministration, or familial factors in modulating drug action can now be reinterpreted as variability in expression or function of specific genes whose products determine pharmacokinetics and pharmacodynamics. Nevertheless, it is often the personal interaction of the patient with the physician or other health care provider that first identifies unusual variability in drug actions; maintained alertness to unusual drug responses continues to be a key component of improving drug safety. Unusual drug responses, segregating in families, have been recognized for decades and initially defined the field of pharmacogenetics. Now, with an increasing appreciation of common and rare polymorphisms across the human genome, comes the opportunity to reinterpret descriptive mechanisms of variability in drug action as a consequence of specific DNA variants, or sets of variants, among individuals. This approach defines the field of pharmacogenomics, which may hold the opportunity of allowing practitioners to integrate a molecular understanding of the basis of disease with an individual’s genomic makeup to prescribe personalized, highly effective, and safe therapies. Drug therapy is an ancient feature of human culture. 
The first treatments were plant extracts discovered empirically to be effective for indications like fever, pain, or breathlessness. This symptom-based empiric approach to drug development was supplanted in the twentieth century by identification of compounds targeting more fundamental biologic processes such as bacterial growth or elevated blood pressure; the term “magic bullet,” coined by Paul Ehrlich to describe the search for effective compounds for syphilis, captures the essence of the hope that understanding basic biologic processes will lead to highly effective new therapies. An integral step in modern drug development follows identification of a chemical lead with biologic activity with increasingly sophisticated medicinal chemistry-based structural modifications to develop compounds with specificity for the chosen target, lack of “off-target” effects, and pharmacokinetic properties suitable for human use (e.g., consistent bioavailability, long elimination half-life, no high-risk pharmacokinetic features described further below). A common starting point for contemporary drug development is basic biologic discovery that implicates potential target molecules: examples of such target molecules include HMG-CoA reductase or the BRAF V600E mutation in many malignant melanomas. The development of compounds targeting these molecules has not only revolutionized treatment for diseases such as hypercholesterolemia or malignant melanoma, but has also revealed new biologic features of disease. Thus, for example, initial spectacular successes with vemurafenib (which targets BRAF V600E) were followed by near-universal tumor relapse, strongly suggesting that inhibition of this pathway alone would be insufficient for tumor control. This reasoning, in turn, supports a view that many complex diseases will not lend themselves to cure by target ing a single magic bullet, but rather single drugs or combinations will a symptom and those designed to prolong useful life. 
An increasing emphasis on the principles of evidence-based medicine and techniques such as large clinical trials and meta-analyses have defined benefits of drug therapy in broad patient populations. Establishing the balance between risk and benefit is not always simple. An increasing body of evidence supports the idea, with which practitioners are very familiar, that individual patients may display responses that are not expected from large population studies and often have comorbidities that typically exclude them from large clinical trials. In addition, therapies that provide symptomatic benefits but shorten life may be entertained in patients with serious and highly symptomatic diseases such as heart failure or cancer. These considerations illustrate the continuing, highly personal nature of the relationship between the prescriber and the patient. Adverse Effects Some adverse effects are so common and so readily associated with drug therapy that they are identified very early during clinical use of a drug. By contrast, serious adverse effects may be sufficiently uncommon that they escape detection for many years after a drug begins to be widely used. The issue of how to identify rare but serious adverse effects (that can profoundly affect the benefit-risk perception in an individual patient) has not been satisfactorily resolved. Potential approaches range from an increased understanding of the molecular and genetic basis of variability in drug actions to expanded postmarketing surveillance mechanisms. None of these have been completely effective, so practitioners must be continuously vigilant to the possibility that unusual symptoms may be related to specific drugs, or combinations of drugs, that their patients receive. Therapeutic Index Beneficial and adverse reactions to drug therapy can be described by a series of dose-response relations (Fig. 5-1). 
Well-tolerated drugs demonstrate a wide margin, termed the therapeutic ratio, therapeutic index, or therapeutic window, between the doses required to produce a therapeutic effect and those producing toxicity. In cases where there is a similar relationship between plasma drug concentration and effects, monitoring plasma concentrations can be a highly effective aid in managing drug therapy by enabling concentrations to be maintained above the minimum required to produce an effect and below the concentration range likely to produce toxicity. Such monitoring has been widely used to guide therapy with specific agents, such as certain antiarrhythmics, anticonvulsants, and antibiotics. Many of the principles in clinical pharmacology and examples outlined below, which can be applied broadly to therapeutics, have been developed in these arenas.

Therapy may need to attack multiple pathways whose perturbation results in disease. The use of combination therapy in settings such as hypertension, tuberculosis, HIV infection, and many cancers highlights the potential for such a "systems biology" view of drug therapy. It is true across all cultures and diseases that factors such as compliance, genetic variants affecting pharmacokinetics or pharmacodynamics, and drug interactions contribute to drug responses. In addition, culture- or ancestry-specific factors play a role. For example, the frequency of specific genetic variants modulating drug responses often varies by ancestry, as discussed later. Cost issues or cultural factors may determine the likelihood that specific drugs, drug combinations, or over-the-counter (OTC) remedies are prescribed. The broad principles of clinical pharmacology enunciated here can be used to analyze the mechanisms underlying successful or unsuccessful therapy with any drug.

INDICATIONS FOR DRUG THERAPY: RISK VERSUS BENEFIT
It is self-evident that the benefits of drug therapy should outweigh the risks. Benefits fall into two broad categories: those designed to alleviate a symptom and those designed to prolong useful life.

FIGURE 5-1 The concept of a therapeutic ratio. Each panel illustrates the relationship between increasing dose and cumulative probability of a desired or adverse drug effect. Top. A drug with a wide therapeutic ratio, i.e., a wide separation of the two curves. Bottom. A drug with a narrow therapeutic ratio; here, the likelihood of adverse effects at therapeutic doses is increased because the curves are not well separated. Further, a steep dose-response curve for adverse effects is especially undesirable, as it implies that even small dosage increments may sharply increase the likelihood of toxicity. When there is a definable relationship between drug concentration (usually measured in plasma) and desirable and adverse effect curves, concentration may be substituted on the abscissa. Note that not all patients necessarily demonstrate a therapeutic response (or adverse effect) at any dose, and that some effects (notably some adverse effects) may occur in a dose-independent fashion.

The processes of absorption, distribution, metabolism, and excretion—collectively termed drug disposition—determine the concentration of drug delivered to target effector molecules. When a drug is administered orally, subcutaneously, intramuscularly, rectally, sublingually, or directly into desired sites of action, the amount of drug actually entering the systemic circulation may be less than with the intravenous route (Fig. 5-2A). The fraction of drug available to the systemic circulation by other routes is termed bioavailability. Bioavailability may be <100% for two main reasons: (1) absorption is reduced, or (2) the drug undergoes metabolism or elimination prior to entering the systemic circulation. Occasionally, the administered drug formulation is inconsistent or has degraded with time; for example, the anticoagulant dabigatran degrades rapidly (over weeks) once exposed to air, so the amount administered may be less than prescribed.
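Bioavailability is commonly estimated as the ratio of the area under the time-concentration curve (AUC) after an oral dose to the AUC after the same dose given intravenously. The sketch below applies the trapezoidal rule to made-up concentration data; all numbers are illustrative and do not describe any real drug.

```python
# Estimate oral bioavailability F = AUC(oral) / AUC(IV) for the same dose.
# Concentration values are hypothetical, for illustration only.

def auc_trapezoid(times, concs):
    """Area under the time-concentration curve by the trapezoidal rule."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:]))
    )

times = [0, 1, 2, 4, 8]                  # hours after dosing
iv_concs = [10.0, 7.9, 6.3, 4.0, 1.6]    # mg/L after an IV bolus
oral_concs = [0.0, 4.0, 4.5, 2.8, 1.1]   # mg/L after the same oral dose

f = auc_trapezoid(times, oral_concs) / auc_trapezoid(times, iv_concs)
print(f"estimated bioavailability F = {f:.2f}")
```

Note that F < 1 here even though the oral curve is higher than the IV curve at some late time points, mirroring the behavior described for Fig. 5-2A.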
When a drug is administered by a nonintravenous route, the peak concentration occurs later and is lower than after the same dose given by rapid intravenous injection, reflecting absorption from the site of administration (Fig. 5-2). The extent of absorption may be reduced because a drug is incompletely released from its dosage form, undergoes destruction at its site of administration, or has physicochemical properties such as insolubility that prevent complete absorption from its site of administration. Slow absorption rates are deliberately designed into "slow-release" or "sustained-release" drug formulations in order to minimize variation in plasma concentrations during the interval between doses.

"First-Pass" Effect When a drug is administered orally, it must traverse the intestinal epithelium, the portal venous system, and the liver prior to entering the systemic circulation (Fig. 5-3). Once a drug enters the enterocyte, it may undergo metabolism, be transported into the portal vein, or be excreted back into the intestinal lumen. Both excretion into the intestinal lumen and metabolism decrease systemic bioavailability. Once a drug passes this enterocyte barrier, it may also be taken up into the hepatocyte, where bioavailability can be further limited by metabolism or excretion into the bile. This elimination in intestine and liver, which reduces the amount of drug delivered to the systemic circulation, is termed presystemic elimination, presystemic extraction, or first-pass elimination.

FIGURE 5-2 Idealized time-plasma concentration curves after a single dose of drug. A. The time course of drug concentration after an instantaneous IV bolus or an oral dose in the one-compartment model shown. The area under the time-concentration curve is clearly less with the oral drug than the IV, indicating incomplete bioavailability. Note that despite this incomplete bioavailability, concentration after the oral dose can be higher than after the IV dose at some time points. The inset shows that the decline of concentrations over time is linear on a log-linear plot, characteristic of first-order elimination, and that oral and IV drugs have the same elimination (parallel) time course. B. The decline of central compartment concentration when drug is distributed both to and from a peripheral compartment and eliminated from the central compartment. The rapid initial decline of concentration reflects not drug elimination but distribution.

FIGURE 5-3 Mechanism of presystemic clearance. After drug enters the enterocyte, it can undergo metabolism, excretion into the intestinal lumen, or transport into the portal vein. Similarly, the hepatocyte may accomplish metabolism and biliary excretion prior to the entry of drug and metabolites to the systemic circulation. (Adapted by permission from DM Roden, in DP Zipes, J Jalife [eds]: Cardiac Electrophysiology: From Cell to Bedside, 4th ed. Philadelphia, Saunders, 2003. Copyright 2003 with permission from Elsevier.)

Drug movement across the membrane of any cell, including enterocytes and hepatocytes, is a combination of passive diffusion and active transport, mediated by specific drug uptake and efflux molecules. One widely studied drug transport molecule is P-glycoprotein, the product of the MDR1 gene. P-glycoprotein is expressed on the apical aspect of the enterocyte and on the canalicular aspect of the hepatocyte (Fig. 5-3). In both locations, it serves as an efflux pump, limiting availability of drug to the systemic circulation. P-glycoprotein–mediated drug efflux from cerebral capillaries limits drug brain penetration and is an important component of the blood-brain barrier.

Drug metabolism generates compounds that are usually more polar and, hence, more readily excreted than parent drug. Metabolism takes place predominantly in the liver but can occur at other sites such as kidney, intestinal epithelium, lung, and plasma.
"Phase I" metabolism involves chemical modification, most often oxidation accomplished by members of the cytochrome P450 (CYP) monooxygenase superfamily. CYPs that are especially important for drug metabolism are presented in Table 5-1, and each drug may be a substrate for one or more of these enzymes. (Table 5-1 notes: a Inhibitors affect the molecular pathway, and thus may affect substrate. b Clinically important genetic variants described; see Table 5-2. A listing of CYP substrates, inhibitors, and inducers is maintained at http://medicine.iupui.edu/flockhart/table.htm.) "Phase II" metabolism involves conjugation of specific endogenous compounds to drugs or their metabolites. The enzymes that accomplish phase II reactions include glucuronyl-, acetyl-, sulfo-, and methyltransferases. Drug metabolites may exert important pharmacologic activity, as discussed further below.

Clinical Implications of Altered Bioavailability Some drugs undergo near-complete presystemic metabolism and, thus, cannot be administered orally. Nitroglycerin cannot be used orally because it is completely extracted prior to reaching the systemic circulation. The drug is, therefore, used by the sublingual or transdermal routes, which bypass presystemic metabolism. Some drugs with very extensive presystemic metabolism can still be administered by the oral route, using much higher doses than those required intravenously. Thus, a typical intravenous dose of verapamil is 1–5 mg, compared to the usual single oral dose of 40–120 mg. Administration of low-dose aspirin can result in exposure of cyclooxygenase in platelets in the portal vein to the drug, but systemic sparing because of first-pass aspirin deacylation in the liver; this is an example of presystemic metabolism being exploited to therapeutic advantage.

Most pharmacokinetic processes, such as elimination, are first-order; that is, the rate of the process depends on the amount of drug present.
Elimination can occasionally be zero-order (fixed amount eliminated per unit time), and this can be clinically important (see "Principles of Dose Selection"). In the simplest pharmacokinetic model (Fig. 5-2A), a drug bolus (D) is administered instantaneously to a central compartment, from which drug elimination occurs as a first-order process. Occasionally, central and other compartments correspond to physiologic spaces (e.g., plasma volume), whereas in others they are simply mathematical functions used to describe drug disposition. The first-order nature of drug elimination leads directly to the relationship describing drug concentration (C) at any time (t) following the bolus:

C = (D/Vc) · e^(−0.69·t/t1/2)

where Vc is the volume of the compartment into which drug is delivered and t1/2 is the elimination half-life. As a consequence of this relationship, a plot of the logarithm of concentration versus time is a straight line (Fig. 5-2A, inset). Half-life is the time required for 50% of a first-order process to be complete. Thus, 50% of drug elimination is achieved after one drug-elimination half-life, 75% after two, 87.5% after three, etc. In practice, first-order processes such as elimination are near-complete after four to five half-lives.

In some cases, drug is removed from the central compartment not only by elimination but also by distribution into peripheral compartments. In this case, the plot of plasma concentration versus time after a bolus may demonstrate two (or more) exponential components (Fig. 5-2B). In general, the initial rapid drop in drug concentration represents not elimination but drug distribution into and out of peripheral tissues (also first-order processes), while the slower component represents drug elimination; the initial precipitous decline is usually evident with administration by intravenous but not by other routes.
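The one-compartment bolus relationship can be checked numerically. The sketch below uses illustrative values for dose, compartment volume, and half-life (not any particular drug) and confirms the halving behavior that makes the log-concentration plot a straight line.

```python
import math

# One-compartment model after an IV bolus:
#   C(t) = (D / Vc) * exp(-0.693 * t / t_half)
# Dose, volume, and half-life are arbitrary illustrative numbers.

def concentration(dose_mg, vc_liters, t_half_h, t_h):
    return (dose_mg / vc_liters) * math.exp(-0.693 * t_h / t_half_h)

dose, vc, t_half = 100.0, 10.0, 6.0       # mg, L, h

c0 = concentration(dose, vc, t_half, 0.0)          # 10 mg/L at t = 0
c1 = concentration(dose, vc, t_half, t_half)       # ~5 mg/L after one half-life
c4 = concentration(dose, vc, t_half, 4 * t_half)   # >93% of the drug eliminated

print(c0, c1, c4)
```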
Drug concentrations at peripheral sites are determined by a balance between drug distribution to and redistribution from those sites, as well as by elimination. Once distribution is near-complete (four to five distribution half-lives), plasma and tissue concentrations decline in parallel.

Clinical Implications of Half-Life Measurements The elimination half-life not only determines the time required for drug concentrations to fall to near-immeasurable levels after a single bolus; it is also the sole determinant of the time required for steady-state plasma concentrations to be achieved after any change in drug dosing (Fig. 5-4). This applies to the initiation of chronic drug therapy (whether by multiple oral doses or by continuous intravenous infusion), a change in chronic drug dose or dosing interval, or discontinuation of drug. Steady state describes the situation during chronic drug administration when the amount of drug administered per unit time equals drug eliminated per unit time. With a continuous intravenous infusion, plasma concentrations at steady state are stable, while with chronic oral drug administration, plasma concentrations vary during the dosing interval but the time-concentration profile between dosing intervals is stable (Fig. 5-4).

DRUG DISTRIBUTION
In a typical 70-kg human, plasma volume is ∼3 L, blood volume is ∼5.5 L, and extracellular water outside the vasculature is ∼20 L. The volume of distribution of drugs extensively bound to plasma proteins but not to tissue components approaches plasma volume; warfarin is one such example. By contrast, for drugs highly bound to tissues, the volume of distribution can be far greater than any physiologic space. For example, the volume of distribution of digoxin and tricyclic antidepressants is hundreds of liters, obviously exceeding total-body volume. Such drugs are not readily removed by dialysis, an important consideration in overdose.
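The apparent volume of distribution follows directly from the bolus relationship: Vd relates the amount of drug in the body to its plasma concentration, and it also sets the loading dose needed to reach a target concentration immediately. The numbers below are illustrative, though a Vd of hundreds of liters is realistic for a tissue-bound drug such as digoxin, as the text notes.

```python
# Apparent volume of distribution: Vd = dose / C0 (concentration just after
# an IV bolus). A loading dose to reach a target concentration at once is
# then Vd * C_target. All values are illustrative.

def volume_of_distribution(dose_mg, c0_mg_per_l):
    return dose_mg / c0_mg_per_l

def loading_dose(vd_liters, target_mg_per_l):
    return vd_liters * target_mg_per_l

# 0.5 mg producing a concentration of 0.001 mg/L implies Vd = 500 L,
# far exceeding any physiologic space (the drug is tissue-bound).
vd = volume_of_distribution(0.5, 0.001)
ld = loading_dose(vd, 0.0015)
print(vd, ld)
```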
FIGURE 5-4 Drug accumulation to steady state. In this simulation, drug was administered (arrows) at intervals = 50% of the elimination half-life. Steady state is achieved during initiation of therapy after ∼5 elimination half-lives, or 10 doses. A loading dose did not alter the eventual steady state achieved. A doubling of the dose resulted in a doubling of the steady state but the same time course of accumulation. Once steady state is achieved, a change in dose (increase, decrease, or drug discontinuation) results in a new steady state in ∼5 elimination half-lives. (Adapted by permission from DM Roden, in DP Zipes, J Jalife [eds]: Cardiac Electrophysiology: From Cell to Bedside, 4th ed. Philadelphia, Saunders, 2003. Copyright 2003 with permission from Elsevier.)

Clinical Implications of Drug Distribution In some cases, pharmacologic effects require drug distribution to peripheral sites. In this instance, the time course of drug delivery to and removal from these sites determines the time course of drug effects; anesthetic uptake into the central nervous system (CNS) is an example.

Loading Doses For some drugs, the indication may be so urgent that administration of "loading" dosages is required to achieve rapid elevations of drug concentration and therapeutic effects earlier than with chronic maintenance therapy (Fig. 5-4). Nevertheless, the time required for true steady state to be achieved is still determined only by the elimination half-life.

Rate of Intravenous Administration Although the simulations in Fig. 5-2 use a single intravenous bolus, this is usually inappropriate in practice because side effects related to transiently very high concentrations can result. Rather, drugs are more usually administered orally or as a slower intravenous infusion.
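The accumulation behavior simulated in Fig. 5-4 follows from superposition of single-dose exponential decays: with a dosing interval tau, each dose's contribution shrinks by a factor 2^(−tau/t_half) per interval, so the trough concentration after n doses reaches a fraction (1 − r^n) of its steady-state value. A minimal sketch, using the figure's dosing interval of 50% of the half-life:

```python
# Fraction of steady state reached after n doses given every tau hours,
# for a drug with first-order elimination (superposition of decays).

def fraction_of_steady_state(n_doses, tau_over_t_half):
    r = 2.0 ** (-tau_over_t_half)   # fraction of a dose remaining per interval
    return 1.0 - r ** n_doses

# 10 doses at intervals of half an elimination half-life = 5 half-lives:
frac = fraction_of_steady_state(10, 0.5)
print(f"{frac:.3f}")   # ~0.969: essentially at steady state, as in Fig. 5-4
```

This also shows why a loading dose shortens the time to a therapeutic concentration but cannot change how long true steady state takes: that time depends only on r, i.e., on the elimination half-life.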
Some drugs are so predictably lethal when infused too rapidly that special precautions should be taken to prevent accidental boluses. For example, solutions of potassium for intravenous administration >20 mEq/L should be avoided in all but the most exceptional and carefully monitored circumstances. This minimizes the possibility of cardiac arrest due to accidental increases in infusion rates of more concentrated solutions. Transiently high drug concentrations after rapid intravenous administration can occasionally be used to advantage. The use of midazolam for intravenous sedation, for example, depends upon its rapid uptake by the brain during the distribution phase to produce sedation quickly, with subsequent egress from the brain during the redistribution of the drug as equilibrium is achieved. Similarly, adenosine must be administered as a rapid bolus in the treatment of reentrant supraventricular tachycardias (Chap. 276) to prevent elimination by very rapid (t1/2 of seconds) uptake into erythrocytes and endothelial cells before the drug can reach its clinical site of action, the atrioventricular node. Clinical Implications of Altered Protein Binding Many drugs circulate in the plasma partly bound to plasma proteins. Since only unbound (free) drug can distribute to sites of pharmacologic action, drug response is related to the free rather than the total circulating plasma drug concentration. In chronic kidney or liver disease, protein binding may be decreased and thus drug actions increased. In some situations (myocardial infarction, infection, surgery), acute phase reactants transiently increase drug binding and thus decrease efficacy. These changes assume the greatest clinical importance for drugs that are highly protein-bound since even a small change in protein binding can result in large changes in free drug; for example, a decrease in binding from 99% to 98% doubles the free drug concentration from 1% to 2%. 
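The protein-binding arithmetic in the text can be made explicit: for a 99% bound drug, a seemingly small drop in binding to 98% doubles the free (pharmacologically active) concentration even though the total plasma concentration is unchanged. The total concentration below is illustrative.

```python
# Free (unbound) drug concentration from total concentration and the
# bound fraction. Total concentration is an illustrative value.

def free_concentration(total_mg_per_l, bound_fraction):
    return total_mg_per_l * (1.0 - bound_fraction)

total = 50.0                                # mg/L, unchanged in both cases
free_99 = free_concentration(total, 0.99)   # 1% free
free_98 = free_concentration(total, 0.98)   # 2% free

ratio = free_98 / free_99
print(ratio)   # ~2.0: free drug doubles
```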
For some drugs (e.g., phenytoin), monitoring free rather than total drug concentrations can be useful.

Drug elimination reduces the amount of drug in the body over time. An important approach to quantifying this reduction is to consider that drug concentrations at the beginning and end of a time period are unchanged and that a specific volume of the body has been "cleared" of the drug during that time period. This defines clearance as volume/time. Clearance includes both drug metabolism and excretion.

Clinical Implications of Altered Clearance While elimination half-life determines the time required to achieve steady-state plasma concentration (Css), the magnitude of that steady state is determined by clearance (Cl) and dose alone. For a drug administered as an intravenous infusion, this relationship is:

Css = dosing rate/Cl  or  dosing rate = Cl · Css

When drug is administered orally, the average plasma concentration within a dosing interval (Cavg,ss) replaces Css, and the dosage (dose per unit time) must be increased if bioavailability (F) is less than 1:

Dose/time = Cl · Cavg,ss/F

Genetic variants, drug interactions, or diseases that reduce the activity of drug-metabolizing enzymes or excretory mechanisms lead to decreased clearance and, hence, a requirement for downward dose adjustment to avoid toxicity. Conversely, some drug interactions and genetic variants increase the function of drug elimination pathways, and hence, increased drug dosage is necessary to maintain a therapeutic effect.

Metabolites may produce effects similar to, overlapping with, or distinct from those of the parent drug. Accumulation of the major metabolite of procainamide, N-acetylprocainamide (NAPA), likely accounts for marked QT prolongation and torsade de pointes ventricular tachycardia (Chap. 276) during therapy with procainamide. Neurotoxicity during therapy with the opioid analgesic meperidine is likely due to accumulation of normeperidine, especially in renal disease.
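The steady-state dosing relationships (dosing rate = Cl · Css for an infusion, and dose/time = Cl · Cavg,ss/F for oral dosing) can be applied directly. The clearance, target concentration, and bioavailability below are illustrative values, not data for any specific drug.

```python
# Maintenance dosing from clearance at steady state:
#   IV infusion: dosing_rate = Cl * Css
#   Oral:        dose/time   = Cl * Css_avg / F   (F = bioavailability)
# All parameter values are illustrative.

def iv_dosing_rate(cl_l_per_h, css_mg_per_l):
    return cl_l_per_h * css_mg_per_l

def oral_dose_per_time(cl_l_per_h, css_avg_mg_per_l, f):
    return cl_l_per_h * css_avg_mg_per_l / f

cl, css, f = 5.0, 2.0, 0.5
iv_rate = iv_dosing_rate(cl, css)            # 10 mg/h by infusion
oral_rate = oral_dose_per_time(cl, css, f)   # 20 mg/h orally, since F = 0.5
print(iv_rate, oral_rate)
```

The halved bioavailability doubling the required oral dose mirrors, on a small scale, the verapamil example earlier in the chapter.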
Prodrugs are inactive compounds that require metabolism to generate active metabolites that mediate the drug effects. Examples include many angiotensin-converting enzyme (ACE) inhibitors, the angiotensin receptor blocker losartan, the antineoplastic irinotecan, the anti-estrogen tamoxifen, the analgesic codeine (whose active metabolite morphine probably underlies the opioid effect during codeine administration), and the antiplatelet drug clopidogrel. Drug metabolism has also been implicated in bioactivation of procarcinogens and in generation of reactive metabolites that mediate certain adverse drug effects (e.g., acetaminophen hepatotoxicity, discussed below).

When plasma concentrations of active drug depend exclusively on a single metabolic pathway, any condition that inhibits that pathway (be it disease-related, genetic, or due to a drug interaction) can lead to dramatic changes in drug concentrations and marked variability in drug action. This problem of high-risk pharmacokinetics is especially pronounced in two settings. First, variability in bioactivation of a prodrug can lead to striking variability in drug action; examples include decreased CYP2D6 activity, which prevents analgesia by codeine, and decreased CYP2C19 activity, which reduces the antiplatelet effects of clopidogrel. The second setting is drug elimination that relies on a single pathway. In this case, inhibition of the elimination pathway by genetic variants or by administration of inhibiting drugs leads to marked elevation of drug concentration and, for drugs with a narrow therapeutic window, an increased likelihood of dose-related toxicity. Individuals with loss-of-function alleles in CYP2C9, responsible for metabolism of the active S-enantiomer of warfarin, appear to be at increased risk for bleeding.
When drugs undergo elimination by multiple drug-metabolizing or excretory pathways, absence of one pathway (due to a genetic variant or drug interaction) is much less likely to have a large impact on drug concentrations or drug actions.

PRINCIPLES OF PHARMACODYNAMICS
The Onset of Drug Action For drugs used in the urgent treatment of acute symptoms, little or no delay is anticipated (or desired) between the drug-target interaction and the development of a clinical effect. Examples of such acute situations include vascular thrombosis, shock, or status epilepticus. For many conditions, however, the indication for therapy is less urgent, and a delay between the interaction of a drug with its pharmacologic target(s) and a clinical effect is clinically acceptable. Common pharmacokinetic mechanisms that can contribute to such a delay include slow elimination (resulting in slow accumulation to steady state), uptake into peripheral compartments, or accumulation of active metabolites. Another common explanation for such a delay is that the clinical effect develops as a downstream consequence of the initial molecular effect the drug produces. Thus, administration of a proton pump inhibitor or an H2-receptor blocker produces an immediate increase in gastric pH but ulcer healing that is delayed. Cancer chemotherapy similarly produces delayed therapeutic effects.

Drug Effects May Be Disease Specific A drug may produce no action or a different spectrum of actions in unaffected individuals compared to patients with underlying disease. Further, concomitant disease can complicate interpretation of response to drug therapy, especially adverse effects. For example, high doses of anticonvulsants such as phenytoin may cause neurologic symptoms, which may be confused with the underlying neurologic disease. Similarly, increasing dyspnea in a patient with chronic lung disease receiving amiodarone therapy could be due to drug, underlying disease, or an intercurrent cardiopulmonary problem.
Thus, the presence of chronic lung disease may argue against the use of amiodarone. While drugs interact with specific molecular receptors, drug effects may vary over time, even if stable drug and metabolite concentrations are maintained. The drug-receptor interaction occurs in a complex biologic milieu that can vary to modulate the drug effect. For example, ion channel blockade by drugs, an important anticonvulsant and antiarrhythmic effect, is often modulated by membrane potential, itself a function of factors such as extracellular potassium or local ischemia. Receptors may be up- or downregulated by disease or by the drug itself. For example, β-adrenergic blockers upregulate β-receptor density during chronic therapy. While this effect does not usually result in resistance to the therapeutic effect of the drugs, it may produce severe agonist-mediated effects (such as hypertension or tachycardia) if the blocking drug is abruptly withdrawn.

The desired goal of therapy with any drug is to maximize the likelihood of a beneficial effect while minimizing the risk of adverse effects. Previous experience with the drug, in controlled clinical trials or in postmarketing use, defines the relationships between dose or plasma concentration and these dual effects (Fig. 5-1) and has important implications for initiation of drug therapy: 1. The target drug effect should be defined when drug treatment is started. With some drugs, the desired effect may be difficult to measure objectively, or the onset of efficacy can be delayed for weeks or months; drugs used in the treatment of cancer and psychiatric disease are examples. Sometimes a drug is used to treat a symptom, such as pain or palpitations, and here it is the patient who will report whether the selected dose is effective. In yet other settings, such as anticoagulation or hypertension, the desired response can be repeatedly and objectively assessed by simple clinical or laboratory tests. 2.
The nature of anticipated toxicity often dictates the starting dose. If side effects are minor, it may be acceptable to start chronic therapy at a dose highly likely to achieve efficacy and down-titrate if side effects occur. However, this approach is rarely, if ever, justified if the anticipated toxicity is serious or life-threatening; in this circumstance, it is more appropriate to initiate therapy with the lowest dose that may produce a desired effect. In cancer chemotherapy, it is common practice to use maximum-tolerated doses. 3. The above considerations do not apply if these relationships between dose and effects cannot be defined. This is especially relevant to some adverse drug effects (discussed in further detail below) whose development is not readily related to drug dose. 4. If a drug dose does not achieve its desired effect, a dosage increase is justified only if toxicity is absent and the likelihood of serious toxicity is small.

Failure of Efficacy Assuming the diagnosis is correct and the correct drug is prescribed, explanations for failure of efficacy include drug interactions, noncompliance, or unexpectedly low drug dosage due to administration of expired or degraded drug. These are situations in which measurement of plasma drug concentrations, if available, can be especially useful. Noncompliance is an especially frequent problem in the long-term treatment of diseases such as hypertension and epilepsy, occurring in ≥25% of patients in therapeutic environments in which no special effort is made to involve patients in the responsibility for their own health. Multidrug regimens with multiple doses per day are especially prone to noncompliance. Monitoring response to therapy, by physiologic measures or by plasma concentration measurements, requires an understanding of the relationships between plasma concentration and anticipated effects.
For example, measurement of QT interval is used during treatment with sotalol or dofetilide to avoid marked QT prolongation that can herald serious arrhythmias. In this setting, evaluating the electrocardiogram at the time of anticipated peak plasma concentration and effect (e.g., 1–2 h postdose at steady state) is most appropriate. Maintained high vancomycin levels carry a risk of nephrotoxicity, so dosages should be adjusted on the basis of plasma concentrations measured at trough (predose). Similarly, for dose adjustment of other drugs (e.g., anticonvulsants), concentration should be measured at its lowest during the dosing interval, just prior to a dose at steady state (Fig. 5-4), to ensure a maintained therapeutic effect.

Concentration of Drugs in Plasma as a Guide to Therapy Factors such as interactions with other drugs, disease-induced alterations in elimination and distribution, and genetic variation in drug disposition combine to yield a wide range of plasma levels in patients given the same dose. Hence, if a predictable relationship can be established between plasma drug concentration and beneficial or adverse drug effect, measurement of plasma levels can provide a valuable tool to guide selection of an optimal dose, especially when there is a narrow range between the plasma levels yielding therapeutic and adverse effects. Monitoring is commonly used with certain types of drugs including many anticonvulsants, antirejection agents, antiarrhythmics, and antibiotics. By contrast, if no such relationship can be established (e.g., if drug access to important sites of action outside plasma is highly variable), monitoring plasma concentration may not provide an accurate guide to therapy (Fig. 5-5A).

The common situation of first-order elimination implies that average, maximum, and minimum steady-state concentrations are related linearly to the dosing rate.
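This linearity gives a simple rule for adjusting a maintenance dose from a measured steady-state level: scale the dose by the ratio of desired to measured concentration. A minimal sketch with illustrative numbers; as noted in the text, this does not hold for zero-order drugs such as phenytoin.

```python
# Proportional dose adjustment, valid only for first-order elimination,
# where steady-state concentration scales linearly with dosing rate.
# Doses and concentrations are illustrative.

def adjusted_dose(current_dose, measured_css, desired_css):
    return current_dose * desired_css / measured_css

# Measured steady-state level 4 mg/L on 200 mg/day; target 8 mg/L:
new_dose = adjusted_dose(200.0, 4.0, 8.0)
print(new_dose)   # 400.0 mg/day: doubling the dose doubles the level
```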
Accordingly, the maintenance dose may be adjusted on the basis of the ratio between the desired and measured concentrations at steady state; for example, if a doubling of the steady-state plasma concentration is desired, the dose should be doubled. This does not apply to drugs eliminated by zero-order kinetics (fixed amount per unit time), where small dosage increases will produce disproportionate increases in plasma concentration; examples include phenytoin and theophylline.

An increase in dosage is usually best achieved by changing the drug dose but not the dosing interval (e.g., by giving 200 mg every 8 h instead of 100 mg every 8 h). However, this approach is acceptable only if the resulting maximum concentration is not toxic and the trough value does not fall below the minimum effective concentration for an undesirable period of time. Alternatively, the steady state may be changed by altering the frequency of intermittent dosing but not the size of each dose. In this case, the magnitude of the fluctuations around the average steady-state level will change—the shorter the dosing interval, the smaller the difference between peak and trough levels.

RENAL DISEASE
Renal excretion of parent drug and metabolites is generally accomplished by glomerular filtration and by specific drug transporters. If a drug or its metabolites are primarily excreted through the kidneys and increased drug levels are associated with adverse effects, drug dosages must be reduced in patients with renal dysfunction to avoid toxicity. The antiarrhythmics sotalol and dofetilide undergo predominantly renal excretion and carry a risk of QT prolongation and arrhythmias if doses are not reduced in renal disease. In end-stage renal disease, sotalol has been given as 40 mg after dialysis (every second day), compared to the usual daily dose, 80–120 mg every 12 h. The narcotic analgesic meperidine undergoes extensive hepatic metabolism, so that renal failure has little effect on its plasma concentration. However, its metabolite, normeperidine, does undergo renal excretion, accumulates in renal failure, and probably accounts for the signs of CNS excitation, such as irritability, twitching, and seizures, that appear when multiple doses of meperidine are administered to patients with renal disease. Protein binding of some drugs (e.g., phenytoin) may be altered in uremia, so measuring free drug concentration may be desirable. In non-end-stage renal disease, changes in renal drug clearance are generally proportional to those in creatinine clearance, which may be measured directly or estimated from the serum creatinine (Chap. 333e). This estimate, coupled with the knowledge of how much drug is normally excreted renally versus nonrenally, allows an estimate of the dose adjustment required. In practice, most decisions involving dosing adjustment in patients with renal failure use published recommended adjustments in dosage or dosing interval based on the severity of renal dysfunction indicated by creatinine clearance. Any such modification of dose is a first approximation and should be followed by plasma concentration data (if available) and clinical observation to further optimize therapy for the individual patient.

FIGURE 5-5 A. The efflux pump P-glycoprotein excludes drugs from the endothelium of capillaries in the brain and so constitutes a key element of the blood-brain barrier. Thus, reduced P-glycoprotein function (e.g., due to drug interactions or genetically determined variability in gene transcription) increases penetration of substrate drugs into the brain, even when plasma concentrations are unchanged. B. The graph shows an effect of a β1-receptor polymorphism on receptor function in vitro. Patients with the hypofunctional variant (red) may display lesser heart-rate slowing or blood pressure lowering on exposure to a receptor blocking agent.

LIVER DISEASE
Standard tests of liver function are not useful in adjusting doses in diseases like hepatitis or cirrhosis. First-pass metabolism may decrease, leading to increased oral bioavailability as a consequence of disrupted hepatocyte function, altered liver architecture, and portacaval shunts. The oral bioavailability for high first-pass drugs such as morphine, meperidine, midazolam, and nifedipine is almost doubled in patients with cirrhosis, compared to those with normal liver function. Therefore, the size of the oral dose of such drugs should be reduced in this setting.

HEART FAILURE AND SHOCK
Under conditions of decreased tissue perfusion, the cardiac output is redistributed to preserve blood flow to the heart and brain at the expense of other tissues (Chap. 279). As a result, drugs may be distributed into a smaller volume of distribution, higher drug concentrations will be present in the plasma, and the tissues that are best perfused (the brain and heart) will be exposed to these higher concentrations, resulting in increased CNS or cardiac effects. As well, decreased perfusion of the kidney and liver may impair drug clearance. Another consequence of severe heart failure is decreased gut perfusion, which may reduce drug absorption and, thus, lead to reduced or absent effects of orally administered therapies.

In the elderly, multiple pathologies and medications used to treat them result in more drug interactions and adverse effects. Aging also results in changes in organ function, especially of the organs involved in drug disposition. Initial doses should be less than the usual adult dosage and should be increased slowly. The number of medications, and doses per day, should be kept as low as possible. Even in the absence of kidney disease, renal clearance may be reduced by 35–50% in elderly patients. Dosages should be adjusted on the basis of creatinine clearance.
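The renal dose-adjustment logic can be sketched in two steps: estimate creatinine clearance from serum creatinine, then scale only the renally eliminated fraction of the usual dose. The Cockcroft-Gault equation is used here as one common estimation method (an assumption; the chapter simply refers to estimation from serum creatinine, Chap. 333e), and the drug parameters are illustrative.

```python
# First-approximation renal dose adjustment.
# Cockcroft-Gault creatinine clearance (mL/min), an assumed estimator:
#   CrCl = (140 - age) * weight_kg / (72 * SCr_mg_dl), * 0.85 if female.

def cockcroft_gault(age_y, weight_kg, scr_mg_dl, female=False):
    crcl = (140 - age_y) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def dose_fraction(crcl_ml_min, renal_fraction, normal_crcl=120.0):
    """Fraction of the usual dose: the nonrenally cleared part is kept,
    and the renally cleared part is scaled down in proportion to CrCl."""
    return (1.0 - renal_fraction) + renal_fraction * min(crcl_ml_min / normal_crcl, 1.0)

# Illustrative patient: 80 years, 60 kg, serum creatinine 2.0 mg/dL.
crcl = cockcroft_gault(80, 60, 2.0)   # 25 mL/min
df = dose_fraction(crcl, 0.9)         # drug that is 90% renally excreted
print(round(crcl), df)
```

As the text stresses, any such calculation is only a first approximation, to be refined by clinical observation (and plasma levels where available).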
Aging also results in a decrease in the size of, and blood flow to, the liver and possibly in the activity of hepatic drug-metabolizing enzymes; accordingly, the hepatic clearance of some drugs is impaired in the elderly. As with liver disease, these changes are not readily predicted.

Elderly patients may display altered drug sensitivity. Examples include increased analgesic effects of opioids, increased sedation from benzodiazepines and other CNS depressants, and increased risk of bleeding while receiving anticoagulant therapy, even when clotting parameters are well controlled. Exaggerated responses to cardiovascular drugs are also common because of the impaired responsiveness of normal homeostatic mechanisms. Conversely, the elderly display decreased sensitivity to β-adrenergic receptor blockers.

Adverse drug reactions are especially common in the elderly because of altered pharmacokinetics and pharmacodynamics, the frequent use of multidrug regimens, and concomitant disease. For example, use of long half-life benzodiazepines is linked to the occurrence of hip fractures in elderly patients, perhaps reflecting both a risk of falls from these drugs (due to increased sedation) and the increased incidence of osteoporosis in elderly patients. In population surveys of the noninstitutionalized elderly, as many as 10% had at least one adverse drug reaction in the previous year.

While most drugs used to treat disease in children are the same as those in adults, there are few studies that provide solid data to guide dosing. Drug disposition pathways mature at different rates after birth, and disease mechanisms may be different in children. In practice, doses are adjusted for size (weight or body surface area) as a first approximation unless age-specific data are available.

(SEE ALSO CHAPS. 82 AND 84)
The concept that genetically determined variation might be associated with variable drug levels and, hence, effect was advanced at the end of the nineteenth century, and examples of familial clustering of unusual drug responses were noted in the mid-twentieth century. A goal of traditional Mendelian genetics is to identify DNA variants associated with a distinct phenotype in multiple related family members (Chap. 84). However, it is unusual for a drug response phenotype to be accurately measured in more than one family member, let alone across a kindred. Thus, non-family-based approaches are generally used to identify and validate DNA variants contributing to variable drug actions.

FIGURE 5-6 A. CYP2D6 metabolic activity was assessed in 290 subjects by administration of a test dose of a probe substrate and measurement of urinary formation of the CYP2D6-generated metabolite. The heavy arrow indicates a clear antimode, separating poor metabolizer subjects (PMs, red), with two loss-of-function CYP2D6 alleles, indicated by the intron-exon structures below the chart. Individuals with one or two functional alleles are grouped together as extensive metabolizers (EMs, green). Also shown are ultra-rapid metabolizers (UMs), with 2–12 functional copies of the gene (gray), displaying the greatest enzyme activity. (Adapted from M-L Dahl et al: J Pharmacol Exp Ther 274:516, 1995.) B. These simulations show the predicted effects of CYP2D6 genotype on disposition of a substrate drug. With a single dose (left), there is an inverse "gene-dose" relationship between the number of active alleles and the areas under the time-concentration curves (smallest in UM subjects; highest in PM subjects); this indicates that clearance is greatest in UM subjects. In addition, elimination half-life is longest in PM subjects. The right panel shows that these single-dose differences are exaggerated during chronic therapy: steady-state concentration is much higher in PM subjects (decreased clearance), as is the time required to achieve steady state (longer elimination half-life).

Candidate Gene Studies in Pharmacogenetics
Most studies to date have used an understanding of the molecular mechanisms modulating drug action to identify candidate genes in which variants could explain variable drug responses. One very common scenario is that variable drug actions can be attributed to variability in plasma drug concentrations. When plasma drug concentrations vary widely (e.g., more than an order of magnitude), especially if their distribution is non-unimodal as in Fig. 5-6, variants in single genes controlling drug concentrations often contribute. In this case, the most obvious candidate genes are those responsible for drug metabolism and elimination. Other candidate genes are those encoding the target molecules with which drugs interact to produce their effects or molecules modulating that response, including those involved in disease pathogenesis.

Genome-Wide Association Studies in Pharmacogenomics
The field has also had some success with "unbiased" approaches such as genome-wide association (GWA) (Chap. 82), particularly in identifying single variants associated with high risk for certain forms of drug toxicity (Table 5-2). GWA studies have identified variants in the HLA-B locus that are associated with high risk for severe skin rashes during treatment with the anticonvulsant carbamazepine and the antiretroviral abacavir. A GWA study of simvastatin-associated myopathy identified a single non-coding single-nucleotide polymorphism (SNP) in SLCO1B1, encoding OATP1B1, a drug transporter known to modulate simvastatin uptake into the liver, which accounts for 60% of myopathy risk. GWA approaches have also implicated interferon variants in antileukemic responses and in response to therapy in hepatitis C. Ribavirin, used as therapy in hepatitis C, causes hemolytic anemia, and this has been linked to variants in ITPA, encoding inosine triphosphatase.

Chapter 5 Principles of Clinical Pharmacology

TABLE 5-2 (continued)
Gene | Drugs | Effect of Genetic Variants(a)
KRAS mutation | Panitumumab, cetuximab | Lack of efficacy with KRAS mutation
Philadelphia chromosome | Busulfan, dasatinib, nilotinib, imatinib | Decreased efficacy in Philadelphia chromosome–negative chronic myelogenous leukemia
(a) Drug effect in homozygotes unless otherwise specified. Note: EM, extensive metabolizer (normal enzymatic activity); PM, poor metabolizer (homozygote for reduced or loss-of-function allele); UM, ultra-rapid metabolizer (enzymatic activity much greater than normal, e.g., with gene duplication, Fig. 5-6). Further data at U.S. Food and Drug Administration: http://www.fda.gov/Drugs/ScienceResearch/ResearchAreas/Pharmacogenetics/ucm083378.htm; or Pharmacogenetics Research Network/Knowledge Base: http://www.pharmgkb.org.

GENETIC VARIANTS AFFECTING PHARMACOKINETICS
Clinically important genetic variants have been described in multiple molecular pathways of drug disposition (Table 5-2). A distinct multimodal distribution of drug disposition (as shown in Fig. 5-6) argues for a predominant effect of variants in a single gene in the metabolism of that substrate. Individuals with two alleles (variants) encoding nonfunctional protein make up one group, often termed poor metabolizers (PM phenotype); for some genes, many variants can produce such a loss of function, complicating the use of genotyping in clinical practice. Individuals with one functional allele make up a second group (intermediate metabolizers) and may or may not be distinguishable from those with two functional alleles (extensive metabolizers, EMs).
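The steady-state behavior described for Fig. 5-6B can be reproduced with a simple one-compartment simulation of repeated IV bolus dosing. The clearance values below are arbitrary illustrations of PM, EM, and UM phenotypes (lower clearance for fewer functional alleles), not measured data:

```python
import math

def concentration_profile(cl_l_per_h, vd_l=50.0, dose_mg=100.0, tau_h=12.0,
                          n_doses=20, dt_h=1.0):
    """Superposition of repeated IV bolus doses in a one-compartment model.
    Returns (times, concentrations) sampled every dt_h hours (illustrative only)."""
    k = cl_l_per_h / vd_l  # elimination rate constant (1/h); t1/2 = ln2/k
    times = [i * dt_h for i in range(int(n_doses * tau_h / dt_h))]
    conc = [sum((dose_mg / vd_l) * math.exp(-k * (t - n * tau_h))
                for n in range(n_doses) if n * tau_h <= t)
            for t in times]
    return times, conc

# Hypothetical clearances (L/h) for poor, extensive, and ultra-rapid metabolizers:
for label, cl in [("PM", 1.0), ("EM", 4.0), ("UM", 12.0)]:
    _, conc = concentration_profile(cl)
    # Lower clearance -> higher accumulation and a longer climb to steady state
    print(f"{label}: concentration before the final dose = {conc[-1]:.1f} mg/L")
```

Running this reproduces the qualitative picture in the figure: the PM profile accumulates to the highest steady-state level and approaches it most slowly, because clearance is lowest and elimination half-life longest.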
Ultra-rapid metabolizers with especially high enzymatic activity (occasionally due to gene duplication; Fig. 5-6) have also been described for some traits. Many drugs in widespread use can inhibit specific drug disposition pathways (Table 5-1), and so EM individuals receiving such inhibitors can respond like PM patients (phenocopying). Polymorphisms in genes encoding drug uptake or drug efflux transporters may be other contributors to variability in drug delivery to target sites and, hence, in drug effects.

CYP Variants
Members of the CYP3A family (CYP3A4, 3A5) metabolize the greatest number of drugs in therapeutic use. CYP3A4 activity is highly variable (up to an order of magnitude) among individuals, but the underlying mechanisms are not well understood. In whites, but not African Americans, there is a common loss-of-function polymorphism in the closely related CYP3A5 gene. Decreased efficacy of the antirejection agent tacrolimus in African-American subjects has been attributed to more rapid elimination due to relatively greater CYP3A5 activity. A lower risk of vincristine-associated neuropathy has been reported in CYP3A5 "expressers." CYP2D6 is second to CYP3A4 in the number of commonly used drugs that it metabolizes. CYP2D6 activity is polymorphically distributed, with about 7% of European- and African-derived populations (but very few Asians) displaying the PM phenotype (Fig. 5-6). Dozens of loss-of-function variants in the CYP2D6 gene have been described; the PM phenotype arises in individuals with two such alleles. In addition, ultra-rapid metabolizers with multiple functional copies of the CYP2D6 gene have been identified. Codeine is biotransformed by CYP2D6 to the potent active metabolite morphine, so its effects are blunted in PMs and exaggerated in ultra-rapid metabolizers. In the case of drugs with beta-blocking properties metabolized by CYP2D6, greater signs of beta blockade (e.g., bronchospasm, bradycardia) are seen in PM subjects than in EMs.
This can be seen not only with orally administered beta blockers such as metoprolol and carvedilol, but also with ophthalmic timolol and with the sodium channel–blocking antiarrhythmic propafenone, a CYP2D6 substrate with beta-blocking properties. Ultra-rapid metabolizers may require very high dosages of tricyclic antidepressants to achieve a therapeutic effect and, with codeine, may display transient euphoria and nausea due to very rapid generation of morphine. Tamoxifen undergoes CYP2D6-mediated biotransformation to an active metabolite, so its efficacy may be in part related to this polymorphism. In addition, the widespread use of selective serotonin reuptake inhibitors (SSRIs) to treat tamoxifen-related hot flashes may also alter the drug's effects because many SSRIs, notably fluoxetine and paroxetine, are also CYP2D6 inhibitors. The PM phenotype for CYP2C19 is common (20%) among Asians and rarer (2–3%) in European-derived populations. The impact of polymorphic CYP2C19-mediated metabolism has been demonstrated with the proton pump inhibitor omeprazole, where ulcer cure rates with "standard" dosages were much lower in EM patients (29%) than in PMs (100%). Thus, an understanding of this polymorphism would have been important in developing the drug, and knowing a patient's CYP2C19 genotype should improve therapy. CYP2C19 is responsible for bioactivation of the antiplatelet drug clopidogrel, and several large studies have documented decreased efficacy (e.g., increased myocardial infarction after placement of coronary stents) among Caucasian subjects with reduction-of-function alleles. In addition, some studies suggest that omeprazole and possibly other proton pump inhibitors phenocopy this effect. There are common variants of CYP2C9 that encode proteins with loss of catalytic function.
These variant alleles are associated with increased rates of neurologic complications with phenytoin, hypoglycemia with glipizide, and reduced warfarin dose required to maintain stable anticoagulation. The angiotensin-receptor blocker losartan is a prodrug that is bioactivated by CYP2C9; as a result, PMs and those receiving inhibitor drugs may display little response to therapy.

Transferase Variants
One of the most extensively studied phase II polymorphisms is the PM trait for thiopurine S-methyltransferase (TPMT). TPMT bioinactivates the antileukemic drug 6-mercaptopurine. Further, 6-mercaptopurine is itself an active metabolite of the immunosuppressive azathioprine. Homozygotes for alleles encoding the inactive TPMT (1 in 300 individuals) predictably exhibit severe and potentially fatal pancytopenia on standard doses of azathioprine or 6-mercaptopurine. On the other hand, homozygotes for fully functional alleles may display less anti-inflammatory or antileukemic effect with the drugs. N-acetylation is catalyzed by hepatic N-acetyltransferase (NAT), which represents the activity of two genes, NAT-1 and NAT-2. Both enzymes transfer an acetyl group from acetyl coenzyme A to the drug; polymorphisms in NAT-2 are thought to underlie individual differences in the rate at which drugs are acetylated and thus define "rapid acetylators" and "slow acetylators." Slow acetylators make up ∼50% of European- and African-derived populations but are less common among Asians. Slow acetylators have an increased incidence of the drug-induced lupus syndrome during procainamide and hydralazine therapy and of hepatitis with isoniazid. Induction of CYPs (e.g., by rifampin) also increases the risk of isoniazid-related hepatitis, likely reflecting generation of reactive metabolites of acetylhydrazine, itself an isoniazid metabolite.
Individuals homozygous for a common promoter polymorphism that reduces transcription of uridine diphosphate glucuronosyltransferase (UGT1A1) have benign hyperbilirubinemia (Gilbert's syndrome; Chap. 358). This variant has also been associated with diarrhea and increased bone marrow depression with the antineoplastic prodrug irinotecan, whose active metabolite is normally detoxified by UGT1A1-mediated glucuronidation. The antiretroviral atazanavir is a UGT1A1 inhibitor, and individuals with the Gilbert's variant develop higher bilirubin levels during treatment. Multiple polymorphisms identified in the β2-adrenergic receptor appear to be linked to specific phenotypes in asthma and congestive heart failure, diseases in which β2-receptor function might be expected to determine prognosis. Polymorphisms in the β2-receptor gene have also been associated with response to inhaled β2-receptor agonists, while those in the β1-adrenergic receptor gene have been associated with variability in heart rate slowing and blood pressure lowering (Fig. 5-5B). In addition, in heart failure, a common polymorphism in the β1-adrenergic receptor gene has been implicated in variable clinical outcome during therapy with the investigational beta blocker bucindolol. Response to the 5-lipoxygenase inhibitor zileuton in asthma has been linked to polymorphisms that determine the expression level of the 5-lipoxygenase gene. Drugs may also interact with genetic pathways of disease to elicit or exacerbate symptoms of the underlying conditions. In the porphyrias, CYP inducers are thought to increase the activity of enzymes proximal to the deficient enzyme, exacerbating or triggering attacks (Chap. 430). Deficiency of glucose-6-phosphate dehydrogenase (G6PD), most often in individuals of African, Mediterranean, or South Asian descent, increases the risk of hemolytic anemia in response to the antimalarial primaquine (Chap.
129) and the uric acid–lowering agent rasburicase, which do not cause hemolysis in patients with normal amounts of the enzyme. Patients with mutations in the ryanodine receptor, which controls intracellular calcium in skeletal muscle and other tissues, may be asymptomatic until exposed to certain general anesthetics, which trigger the rare syndrome of malignant hyperthermia. Certain antiarrhythmics and other drugs can produce marked QT prolongation and torsades de pointes (Chap. 276), and in some patients, this adverse effect represents unmasking of previously subclinical congenital long QT syndrome. Up to 50% of the variability in steady-state warfarin dose requirement is attributable to polymorphisms in the promoter of VKORC1, which encodes the warfarin target, and in the coding region of CYP2C9, which mediates its elimination.

Tumor and Infectious Agent Genomes
The actions of drugs used to treat infectious or neoplastic disease may be modulated by variants in these nonhuman germline genomes. Genotyping tumors is a rapidly evolving approach to target therapies to underlying mechanisms and to avoid potentially toxic therapy in patients who would derive no benefit (Chap. 101e). Trastuzumab, which potentiates anthracycline-related cardiotoxicity, is ineffective in breast cancers that do not express HER2, the herceptin receptor. Imatinib targets a specific tyrosine kinase, BCR-Abl1, that is generated by the translocation that creates the Philadelphia chromosome typical of chronic myelogenous leukemia (CML). BCR-Abl1 is not only active but may be central to the pathogenesis of CML; the use of imatinib in BCR-Abl1-positive tumors has resulted in remarkable antitumor efficacy. Similarly, the anti–epidermal growth factor receptor (EGFR) antibodies cetuximab and panitumumab appear especially effective in colon cancers in which K-ras, a G protein in the EGFR pathway, is not mutated. Vemurafenib does not inhibit wild-type BRAF but is active against the V600E mutant form of the kinase.
The description of genetic variants linked to variable drug responses naturally raises the question of if and how to use this information in practice. Indeed, the U.S. Food and Drug Administration (FDA) now incorporates pharmacogenetic data into information (“package inserts”) meant to guide prescribing. A decision to adopt pharmacogenetically guided dosing for a given drug depends on multiple factors. The most important are the magnitude and clinical importance of the genetic effect and the strength of evidence linking genetic variation to variable drug effects (e.g., anecdote versus post-hoc analysis of clinical trial data versus randomized prospective clinical trial). The evidence can be strengthened if statistical arguments from clinical trial data are complemented by an understanding of underlying physiologic mechanisms. Cost versus expected benefit may also be a factor. When the evidence is compelling, alternate therapies are not available, and there are clear recommendations for dosage adjustment in subjects with variants, there is a strong argument for deploying genetic testing as a guide to prescribing. The association between HLA-B*5701 and severe skin toxicity with abacavir is an example. In other situations, the arguments are less compelling: the magnitude of the genetic effect may be smaller, the consequences may be less serious, alternate therapies may be available, or the drug effect may be amenable to monitoring by other approaches. Ongoing clinical trials are addressing the utility of preprescription genotyping in large populations exposed to drugs with known pharmacogenetic variants (e.g., warfarin). Importantly, technological advances are now raising the possibility of inexpensive whole genome sequencing. 
Incorporating a patient’s whole genome sequence into their electronic medical record would allow the information to be accessed as needed for many genetic and pharmacogenetic applications, and the argument has been put forward that this approach would lower logistic barriers to use of pharmacogenomic variant data in prescribing. There are multiple issues (e.g., economic, technological, and ethical) that need to be addressed if such a paradigm is to be adopted (Chap. 82). While barriers to bringing genomic and pharmacogenomic information to the bedside seem daunting, the field is very young and evolving rapidly. Indeed, one major result of understanding the role of genetics in drug action has been improved screening of drugs during the development process to reduce the likelihood of highly variable metabolism or unanticipated toxicity. Drug interactions can complicate therapy by increasing or decreasing the action of a drug; interactions may be based on changes in drug disposition or in drug response in the absence of changes in drug levels. Interactions must be considered in the differential diagnosis of any unusual response occurring during drug therapy. Prescribers should recognize that patients often come to them with a legacy of drugs acquired during previous medical experiences, often with multiple physicians who may not be aware of all the patient’s medications. A meticulous drug history should include examination of the patient’s medications and, if necessary, calls to the pharmacist to identify prescriptions. It should also address the use of agents not often volunteered during questioning, such as OTC drugs, health food supplements, and topical agents such as eye drops. Lists of interactions are available from a number of electronic sources. While it is unrealistic to expect the practicing physician to memorize these, certain drugs consistently run the risk of generating interactions, often by inhibiting or inducing specific drug elimination pathways. 
Examples are presented below and in Table 5-3. Accordingly, when these drugs are started or stopped, prescribers must be especially alert to the possibility of interactions. Gastrointestinal absorption can be reduced if a drug interaction results in drug binding in the gut, as with aluminum-containing antacids, kaolin-pectin suspensions, or bile acid sequestrants. Drugs such as histamine H2-receptor antagonists or proton pump inhibitors that alter gastric pH may decrease the solubility and hence absorption of weak bases such as ketoconazole. Expression of some genes responsible for drug elimination, notably CYP3A and MDR1, can be markedly increased by inducing drugs, such as rifampin, carbamazepine, phenytoin, St. John’s wort, and glutethimide, and by smoking, exposure to chlorinated insecticides such as DDT (CYP1A2), and chronic alcohol ingestion. Administration of inducing agents lowers plasma levels, and thus effects, over 2–3 weeks as gene expression is increased. If a drug dose is stabilized in the presence of an inducer that is subsequently stopped, major toxicity can occur as clearance returns to preinduction levels and drug concentrations rise. Individuals vary in the extent to which drug metabolism can be induced, likely through genetic mechanisms. Interactions that inhibit the bioactivation of prodrugs will decrease drug effects (Table 5-1). Interactions that decrease drug delivery to intracellular sites of action can decrease drug effects: tricyclic antidepressants can blunt the antihypertensive effect of clonidine by decreasing its uptake into adrenergic neurons. Reduced CNS penetration of multiple HIV protease inhibitors (with the attendant risk of facilitating viral replication in a sanctuary site) appears attributable to P-glycoprotein-mediated exclusion of the drug from the CNS; indeed, inhibition of P-glycoprotein has been proposed as a therapeutic approach to enhance drug entry to the CNS (Fig. 5-5A). 
The most common mechanism here is inhibition of drug elimination. In contrast to induction, new protein synthesis is not involved, and the effect develops as drug and any inhibitor metabolites accumulate (a function of their elimination half-lives). Since shared substrates of a single enzyme can compete for access to the active site of the protein, many CYP substrates can also be considered inhibitors. However, some drugs are especially potent as inhibitors (and occasionally may not even be substrates) of specific drug elimination pathways, and so it is in the use of these agents that clinicians must be most alert to the potential for interactions (Table 5-3). Commonly implicated interacting drugs of this type include amiodarone, cimetidine, erythromycin and some other macrolide antibiotics (clarithromycin but not azithromycin), ketoconazole and other azole antifungals, the antiretroviral agent ritonavir, and high concentrations of grapefruit juice (Table 5-3). The consequences of such interactions will depend on the drug whose elimination is being inhibited (see "The Concept of High-Risk Pharmacokinetics," above).

TABLE 5-3
Interacting drug(s) | Mechanism
Antacids; bile acid sequestrants | Reduced absorption
Proton pump inhibitors; H2-receptor blockers | Altered gastric pH
Rifampin; carbamazepine; barbiturates; phenytoin; St. John's wort; glutethimide; nevirapine (CYP3A; CYP2B6) | Induction of CYPs and/or P-glycoprotein
Fluoxetine; quinidine | Inhibitors of CYP2D6 (substrates include tricyclic antidepressants)
Cimetidine | Inhibitor of multiple CYPs
Ketoconazole, itraconazole; erythromycin, clarithromycin; calcium channel blockers; ritonavir | Inhibitor of CYP3A
Amiodarone | Inhibitor of many CYPs and of P-glycoprotein
Gemfibrozil (and other fibrates) | CYP3A inhibition
Quinidine; amiodarone; verapamil; cyclosporine; itraconazole; erythromycin | P-glycoprotein inhibition
Phenylbutazone; probenecid; salicylates | Inhibition of renal tubular transport
Examples include CYP3A inhibitors increasing the risk of cyclosporine toxicity or of rhabdomyolysis with some HMG-CoA reductase inhibitors (lovastatin, simvastatin, atorvastatin, but not pravastatin), and P-glycoprotein inhibitors increasing the risk of toxicity with digoxin therapy or of bleeding with the thrombin inhibitor dabigatran. These interactions can occasionally be exploited to therapeutic benefit. The antiviral ritonavir is a very potent CYP3A4 inhibitor that is sometimes added to anti-HIV regimens, not because of its antiviral effects but because it decreases clearance, and hence increases efficacy, of other anti-HIV agents. Similarly, calcium channel blockers have been deliberately coadministered with cyclosporine to reduce its clearance and thus its maintenance dosage and cost. Phenytoin, an inducer of many systems, including CYP3A, inhibits CYP2C9. CYP2C9 metabolism of losartan to its active metabolite is inhibited by phenytoin, with potential loss of antihypertensive effect.

Effects listed in Table 5-3 include decreased concentration and effects of methadone and dabigatran; increased effect of many β blockers; decreased codeine effect and possible decreased tamoxifen effect; increased concentration and effects of phenytoin; increased concentration and toxicity of some HMG-CoA reductase inhibitors, cyclosporine, cisapride, and terfenadine (now withdrawn); increased concentration and effects of indinavir (with ritonavir); decreased clearance and dose requirement for cyclosporine (with calcium channel blockers); azathioprine and 6-mercaptopurine toxicity; decreased clearance (risk of toxicity) for quinidine; rhabdomyolysis when co-prescribed with some HMG-CoA reductase inhibitors; risk of toxicity with P-glycoprotein substrates (e.g., digoxin, dabigatran); and increased risk of methotrexate toxicity with salicylates. Grapefruit (but not orange) juice inhibits CYP3A, so patients receiving drugs for which even modest CYP3A inhibition may increase the risk of adverse effects (e.g., cyclosporine) should avoid grapefruit juice.
CYP2D6 is markedly inhibited by quinidine, a number of neuroleptic drugs (chlorpromazine and haloperidol), and the SSRIs fluoxetine and paroxetine. The clinical consequences of fluoxetine's interaction with CYP2D6 substrates may not be apparent for weeks after the drug is started, because of its very long half-life and slow generation of a CYP2D6-inhibiting metabolite. 6-Mercaptopurine is metabolized not only by TPMT but also by xanthine oxidase. When allopurinol, an inhibitor of xanthine oxidase, is administered with standard doses of azathioprine or 6-mercaptopurine, life-threatening toxicity (bone marrow suppression) can result. A number of drugs are secreted by the renal tubular transport systems for organic anions. Inhibition of these systems can cause excessive drug accumulation. Salicylate, for example, reduces the renal clearance of methotrexate, an interaction that may lead to methotrexate toxicity. Renal tubular secretion contributes substantially to the elimination of penicillin, which can be inhibited (to increase its therapeutic effect) by probenecid. Similarly, inhibition of the tubular cation transport system by cimetidine decreases the renal clearance of dofetilide. Drugs may act on separate components of a common process to generate effects greater than either has alone. Antithrombotic therapy with combinations of antiplatelet agents (glycoprotein IIb/IIIa inhibitors, aspirin, clopidogrel) and anticoagulants (warfarin, heparins) is often used in the treatment of vascular disease, although such combinations carry an increased risk of bleeding. Nonsteroidal anti-inflammatory drugs (NSAIDs) cause gastric ulcers, and in patients treated with warfarin, the risk of upper gastrointestinal bleeding is increased almost threefold by concomitant use of an NSAID. Indomethacin, piroxicam, and probably other NSAIDs antagonize the antihypertensive effects of β-adrenergic receptor blockers, diuretics, ACE inhibitors, and other drugs.
The resulting elevation in blood pressure ranges from trivial to severe. This effect is not seen with aspirin and sulindac but has been found with the cyclooxygenase 2 (COX-2) inhibitor celecoxib. Torsades de pointes ventricular tachycardia during administration of QT-prolonging antiarrhythmics (quinidine, sotalol, dofetilide) occurs much more frequently in patients receiving diuretics, probably reflecting hypokalemia. In vitro, hypokalemia not only prolongs the QT interval in the absence of drug but also potentiates drug block of ion channels that results in QT prolongation. Also, some diuretics have direct electrophysiologic actions that prolong QT. The administration of supplemental potassium leads to more frequent and more severe hyperkalemia when potassium elimination is reduced by concurrent treatment with ACE inhibitors, spironolactone, amiloride, or triamterene. The pharmacologic effects of sildenafil result from inhibition of the phosphodiesterase type 5 isoform that inactivates cyclic guanosine monophosphate (GMP) in the vasculature. Nitroglycerin and related nitrates used to treat angina produce vasodilation by elevating cyclic GMP. Thus, coadministration of these nitrates with sildenafil can cause profound hypotension, which can be catastrophic in patients with coronary disease. Sometimes, combining drugs can increase overall efficacy and/or reduce drug-specific toxicity. Such therapeutically useful interactions are described in chapters dealing with specific disease entities. The beneficial effects of drugs are coupled with the inescapable risk of untoward effects. The morbidity and mortality from these adverse effects often present diagnostic problems because they can involve every organ and system of the body and may be mistaken for signs of underlying disease.
As well, some surveys have suggested that drug therapy for a range of chronic conditions such as psychiatric disease or hypertension does not achieve its desired goal in up to half of treated patients; thus, the most common “adverse” drug effect may be failure of efficacy. Adverse reactions can be classified in two broad groups. One type results from exaggeration of an intended pharmacologic action of the drug, such as increased bleeding with anticoagulants or bone marrow suppression with antineoplastics. The second type of adverse reaction ensues from toxic effects unrelated to the intended pharmacologic actions. The latter effects are often unanticipated (especially with new drugs) and frequently severe and may result from recognized as well as previously undescribed mechanisms. Drugs may increase the frequency of an event that is common in a general population, and this may be especially difficult to recognize; an excellent example is the increase in myocardial infarctions with the COX-2 inhibitor rofecoxib. Drugs can also cause rare and serious adverse effects, such as hematologic abnormalities, arrhythmias, severe skin reactions, or hepatic or renal dysfunction. Prior to regulatory approval and marketing, new drugs are tested in relatively few patients who tend to be less sick and to have fewer concomitant diseases than those patients who subsequently receive the drug therapeutically. Because of the relatively small number of patients studied in clinical trials and the selected nature of these patients, rare adverse effects are generally not detected prior to a drug’s approval; indeed, if they are detected, the new drugs are generally not approved. Therefore, physicians need to be cautious in the prescription of new drugs and alert for the appearance of previously unrecognized adverse events. 
Elucidating mechanisms underlying adverse drug effects can assist development of safer compounds or allow a patient subset at especially high risk to be excluded from drug exposure. National adverse reaction reporting systems, such as those operated by the FDA (suspected adverse reactions can be reported online at http://www.fda.gov/safety/medwatch/default.htm) and the Committee on Safety of Medicines in Great Britain, can prove useful. The publication or reporting of a newly recognized adverse reaction can in a short time stimulate many similar reports of reactions that previously had gone unrecognized. Occasionally, “adverse” effects may be exploited to develop an entirely new indication for a drug. Unwanted hair growth during minoxidil treatment of severely hypertensive patients led to development of the drug for hair growth. Sildenafil was initially developed as an antianginal, but its effects to alleviate erectile dysfunction led not only to a new drug indication but also to increased understanding of the role of type 5 phosphodiesterase in erectile tissue. These examples further reinforce the concept that prescribers must remain vigilant to the possibility that unusual symptoms may reflect unappreciated drug effects.

Some 25–50% of patients make errors in self-administration of prescribed medicines, and these errors can be responsible for adverse drug effects. Similarly, patients commit errors in taking OTC drugs by not reading or following the directions on the containers. Health care providers must recognize that providing directions with prescriptions does not always guarantee compliance. In hospitals, drugs are administered in a controlled setting, and patient compliance is, in general, ensured. Errors may occur nevertheless—the wrong drug or dose may be given or the drug may be given to the wrong patient—and improved drug distribution and administration systems are addressing this problem. 
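The chapter's figures on polypharmacy (an adverse-reaction probability of roughly 5% when fewer than 6 drugs are given, but more than 40% with more than 15) rise faster than simple chance would predict. A minimal sketch, assuming purely for illustration that each drug carried the same independent per-drug risk, calibrated to the 5-drug figure:

```python
# Illustrative model (not from the text): each drug independently carries an
# adverse-reaction risk p, so the chance of at least one reaction among
# n drugs is 1 - (1 - p)**n.
def adr_probability(n_drugs: int, per_drug_risk: float) -> float:
    """Probability of >=1 adverse reaction under the independence assumption."""
    return 1 - (1 - per_drug_risk) ** n_drugs

# Calibrate the hypothetical per-drug risk so 5 drugs give the cited ~5% rate.
p = 1 - (1 - 0.05) ** (1 / 5)  # roughly 1% per drug

print(round(adr_probability(5, p), 3))   # 0.05 by construction
print(round(adr_probability(15, p), 3))  # ~0.14, far below the >40% observed
# The shortfall is consistent with the chapter's point: drug-drug interactions
# and sicker patients make risk climb faster than independent per-drug chances.
```

The gap between the model's ~14% and the observed >40% at 15 drugs is the quantitative face of the interaction mechanisms discussed in this section.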
Patients receive, on average, 10 different drugs during each hospitalization. The sicker the patient, the more drugs are given, and there is a corresponding increase in the likelihood of adverse drug reactions. When <6 different drugs are given to hospitalized patients, the probability of an adverse reaction is ∼5%, but if >15 drugs are given, the probability is >40%. Retrospective analyses of ambulatory patients have revealed adverse drug effects in 20% of patients. Serious adverse reactions are also well recognized with “herbal” remedies and OTC compounds; examples include kava-associated hepatotoxicity, L-tryptophan-associated eosinophilia-myalgia, and phenylpropanolamine-associated stroke, each of which has caused fatalities. A small group of widely used drugs accounts for a disproportionate number of reactions. Aspirin and other NSAIDs, analgesics, digoxin, anticoagulants, diuretics, antimicrobials, glucocorticoids, antineoplastics, and hypoglycemic agents account for 90% of reactions.

Drugs, or more commonly their reactive metabolites generated by CYPs, can covalently bind to tissue macromolecules (such as proteins or DNA) to cause tissue toxicity. Because of the reactive nature of these metabolites, covalent binding often occurs close to the site of production, typically the liver. The most common cause of drug-induced hepatotoxicity is acetaminophen overdosage (Chap. 361). Normally, reactive metabolites are detoxified by combining with hepatic glutathione. When glutathione becomes depleted, the metabolites bind instead to hepatic protein, with resultant hepatocyte damage. The hepatic necrosis produced by the ingestion of acetaminophen can be prevented or attenuated by the administration of substances such as N-acetylcysteine that reduce the binding of electrophilic metabolites to hepatic proteins. 
The risk of acetaminophen-related hepatic necrosis is increased in patients receiving drugs such as phenobarbital or phenytoin, which increase the rate of drug metabolism, or ethanol, which exhausts glutathione stores. Such toxicity has even occurred with therapeutic dosages, so patients at risk through these mechanisms should be warned.

Most pharmacologic agents are small molecules with low molecular weights (<2000) and thus are poor immunogens. Generation of an immune response to a drug therefore usually requires in vivo activation and covalent linkage to protein, carbohydrate, or nucleic acid. Drug stimulation of antibody production may mediate tissue injury by several mechanisms. The antibody may attack the drug when the drug is covalently attached to a cell and thereby destroy the cell. This occurs in penicillin-induced hemolytic anemia. Antibody-drug-antigen complexes may be passively adsorbed by a bystander cell, which is then destroyed by activation of complement; this occurs in quinine- and quinidine-induced thrombocytopenia. Heparin-induced thrombocytopenia arises when antibodies against complexes of platelet factor 4 peptide and heparin generate immune complexes that activate platelets; thus, the thrombocytopenia is accompanied by “paradoxical” thrombosis and is treated with thrombin inhibitors. Drugs or their reactive metabolites may alter a host tissue, rendering it antigenic and eliciting autoantibodies. For example, hydralazine and procainamide (or their reactive metabolites) can chemically alter nuclear material, stimulating the formation of antinuclear antibodies and occasionally causing lupus erythematosus. Drug-induced pure red cell aplasia (Chap. 130) is due to an immune-based drug reaction. Serum sickness (Chap. 376) results from the deposition of circulating drug-antibody complexes on endothelial surfaces. 
Complement activation occurs, chemotactic factors are generated locally, and an inflammatory response develops at the site of complex entrapment. Arthralgias, urticaria, lymphadenopathy, glomerulonephritis, or cerebritis may result. Foreign proteins (vaccines, streptokinase, therapeutic antibodies) and antibiotics are common causes. Many drugs, particularly antimicrobial agents, ACE inhibitors, and aspirin, can elicit anaphylaxis with production of IgE, which binds to mast cell membranes. Contact with a drug antigen initiates a series of biochemical events in the mast cell and results in the release of mediators that can produce the characteristic urticaria, wheezing, flushing, rhinorrhea, and (occasionally) hypotension. Drugs may also elicit cell-mediated immune responses. Topically administered substances may interact with sulfhydryl or amino groups in the skin and react with sensitized lymphocytes to produce the rash characteristic of contact dermatitis. Other types of rashes may also result from the interaction of serum factors, drugs, and sensitized lymphocytes. The manifestations of drug-induced diseases frequently resemble those of other diseases, and a given set of manifestations may be produced by different and dissimilar drugs. Recognition of the role of a drug or drugs in an illness depends on appreciation of the possible adverse reactions to drugs in any disease, on identification of the temporal relationship between drug administration and development of the illness, and on familiarity with the common manifestations of the drugs. A suspected adverse drug reaction developing after introduction of a new drug naturally implicates that drug; however, it is also important to remember that a drug interaction may be responsible. 
Thus, for example, a patient on a chronic stable warfarin dose may develop a bleeding complication after introduction of amiodarone; this does not reflect a direct reaction to amiodarone but rather its effect to inhibit warfarin metabolism. Many associations between particular drugs and specific reactions have been described, but there is always a “first time” for a novel association, and any drug should be suspected of causing an adverse effect if the clinical setting is appropriate. Illness related to a drug’s intended pharmacologic action is often more easily recognized than illness attributable to immune or other mechanisms. For example, side effects such as cardiac arrhythmias in patients receiving digitalis, hypoglycemia in patients given insulin, or bleeding in patients receiving anticoagulants are more readily related to a specific drug than are symptoms such as fever or rash, which may be caused by many drugs or by other factors. Electronic listings of adverse drug reactions can be useful. However, exhaustive compilations often provide little sense of perspective in terms of frequency and seriousness, which can vary considerably among patients. Eliciting a drug history from each patient is important for diagnosis. Attention must be directed to OTC drugs and herbal preparations as well as to prescription drugs. Each type can be responsible for adverse drug effects, and adverse interactions may occur between OTC drugs and prescribed drugs. Loss of efficacy of oral contraceptives or cyclosporine with concurrent use of St. John’s wort (a P-glycoprotein inducer) is an example. In addition, it is common for patients to be cared for by several physicians, and duplicative, additive, antagonistic, or synergistic drug combinations may therefore be administered if the physicians are not aware of the patients’ drug histories. 
Every physician should determine what drugs a patient has been taking, ideally for the previous month or two, before prescribing any medications. Medications stopped for inefficacy or adverse effects should be documented to avoid pointless and potentially dangerous reexposure. A frequently overlooked source of additional drug exposure is topical therapy; for example, a patient complaining of bronchospasm may not mention that an ophthalmic beta blocker is being used unless specifically asked. A history of previous adverse drug effects in patients is common. Since these patients have shown a predisposition to drug-induced illnesses, such a history should dictate added caution in prescribing new drugs. Laboratory studies may include demonstration of serum antibody in some persons with drug allergies involving cellular blood elements, as in agranulocytosis, hemolytic anemia, and thrombocytopenia. For example, both quinine and quinidine can produce platelet agglutination in vitro in the presence of complement and the serum from a patient who has developed thrombocytopenia following use of one of these drugs. Biochemical abnormalities such as G6PD deficiency, serum pseudocholinesterase level, or genotyping may also be useful in diagnosis, often after an adverse effect has occurred in the patient or a family member. Once an adverse reaction is suspected, discontinuation of the suspected drug followed by disappearance of the reaction is presumptive evidence of a drug-induced illness. Confirming evidence may be sought by cautiously reintroducing the drug and seeing if the reaction reappears. However, that should be done only if confirmation would be useful in the future management of the patient and if the attempt would not entail undue risk. With concentration-dependent adverse reactions, lowering the dosage may cause the reaction to disappear, and raising it may cause the reaction to reappear. 
When the reaction is thought to be allergic, however, readministration of the drug may be hazardous, since anaphylaxis may develop. If the patient is receiving many drugs when an adverse reaction is suspected, the drugs likeliest to be responsible can usually be identified; these should include both potential culprit agents and drugs that alter their elimination. All drugs may be discontinued at once or, if this is not practical, discontinued one at a time, starting with the ones most suspect, and the patient observed for signs of improvement. The time needed for a concentration-dependent adverse effect to disappear depends on the time required for the concentration to fall below the range associated with the adverse effect; that, in turn, depends on the initial blood level and on the rate of elimination or metabolism of the drug. Adverse effects of drugs with long half-lives or those not directly related to serum concentration may take a considerable time to disappear.

Modern clinical pharmacology aims to replace empiricism in the use of drugs with therapy based on in-depth understanding of factors that determine an individual’s response to drug treatment. Molecular pharmacology, pharmacokinetics, genetics, clinical trials, and the educated prescriber all contribute to this process. No drug response should ever be termed idiosyncratic; all responses have a mechanism whose understanding will help guide further therapy with that drug or successors. This rapidly expanding understanding of variability in drug actions makes the process of prescribing drugs increasingly daunting for the practitioner. However, fundamental principles should guide this process: The benefits of drug therapy, however defined, should always outweigh the risk. The smallest dosage necessary to produce the desired effect should be used. The number of medications and doses per day should be minimized. 
Although the literature is rapidly expanding, accessing it is becoming easier; electronic tools to search databases of literature and unbiased opinion will become increasingly commonplace. Genetics play a role in determining variability in drug response and may become a part of clinical practice. Electronic medical record and pharmacy systems will increasingly incorporate prescribing advice, such as indicated medications not used; unindicated medications being prescribed; and potential dosing errors, drug interactions, or genetically determined drug responses. Prescribers should be particularly wary when adding or stopping specific drugs that are especially liable to provoke interactions and adverse reactions. Prescribers should use only a limited number of drugs, with which they are thoroughly familiar.

The National Institutes of Health’s Office of Research on Women’s Health celebrated its twentieth anniversary in 2010 with a new strategic plan recognizing the study of the biologic basis of sex differences as a distinct scientific discipline. It has become clear that both sex chromosomes and sex hormones contribute to these differences. Indeed, it is recommended that the term sex difference be used for biologic processes that differ between males and females and the term gender difference be used for features related to social influences. The clinical discipline of women’s health emphasizes greater attention to patient education and involvement in disease prevention and medical decision-making and has become a model for patient-centered health care.

DISEASE RISK: REALITY AND PERCEPTION
The leading causes of death are the same in women and men: (1) heart disease, and (2) cancer (Table 6e-1; Fig. 6e-1). The leading cause of cancer death, lung cancer, is the same in both sexes. Breast cancer is the second leading cause of cancer death in women, but it causes about 60% fewer deaths than does lung cancer. Men are substantially more likely to die from suicide and accidents than are women. Women’s risk for many diseases increases at menopause, which occurs at a median age of 51.4 years. In the industrialized world, women spend one-third of their lives in the postmenopausal period. Estrogen levels fall abruptly at menopause, inducing a variety of physiologic and metabolic responses. Rates of cardiovascular disease (CVD) increase and bone density begins to decrease rapidly after menopause. In the United States, women live on average about 5 years longer than men, with a life expectancy at birth in 2011 of 81.1 years compared with 76.3 years in men. Elderly women outnumber elderly men, so that age-related conditions such as hypertension have a female preponderance. However, the difference in life expectancy between men and women has decreased an average of 0.1 year every year since its peak of 7.8 years in 1979. If this convergence in mortality figures continues, it is projected that mortality rates will be similar by 2054. Public awareness campaigns have resulted in a marked increase in the percentage of U.S. women knowing that CVD is the leading cause of death in women. In 1997, the majority of U.S. women surveyed thought that cancer (35%) rather than heart disease (30%) was the leading cause of death in women (Fig. 6e-2). In 2012, these perceptions were reversed, with 56% of U.S. women surveyed recognizing that heart disease rather than cancer (24%) was the leading cause of death in women (Fig. 6e-2). Although awareness of heart disease has improved substantially among black and Hispanic women over this time period, these groups were 66% less likely than white women to recognize that heart disease is the leading cause of death in women. Nevertheless, women younger than 65 years still consider breast cancer to be their leading health risk, despite the fact that death rates from breast cancer have been falling since the 1990s. In any specific decade of life, a woman’s risk for breast cancer never exceeds 1 in 34. 
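The per-decade cap of 1 in 34 and the lifetime figure of roughly 1 in 9 are consistent once risks are compounded across decades. A minimal arithmetic sketch (the uniform 1-in-34 rate applied over four high-risk decades is an illustrative assumption, not a figure from the text):

```python
# Cumulative risk over successive decades compounds as 1 - product(1 - r_i),
# treating each decade's risk as independent (an illustrative simplification).
def cumulative_risk(decade_risks):
    """Lifetime risk given a list of per-decade risks."""
    no_event = 1.0
    for r in decade_risks:
        no_event *= 1 - r
    return 1 - no_event

# Illustrative assumption: the maximum cited per-decade risk (1/34)
# applied over four later-life decades.
lifetime = cumulative_risk([1 / 34] * 4)
print(round(lifetime, 3))      # ~0.113, close to the cited "1 in 9"
print(round(1 / lifetime, 1))  # ~8.9
```

The point of the sketch is that no single decade's risk need be high for the lifetime figure to approach 1 in 9; compounding, not any one decade, produces it.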
Although a woman’s lifetime risk of developing breast cancer if she lives past 85 years is about 1 in 9, it is much more likely that she will die from CVD than from breast cancer. In other words, many elderly women have breast cancer but die from other causes. Similarly, a minority of women are aware that lung cancer is the leading cause of cancer death in women. Physicians are also less likely to recognize women’s risk for CVD. Even in 2012, only 21% of U.S. women surveyed reported that their physicians had counseled them about their risk for heart disease. These misconceptions are unfortunate as they perpetuate inadequate attention to modifiable risk factors such as dyslipidemia, hypertension, and cigarette smoking.

(See also Chap. 448) Alzheimer’s disease (AD) affects approximately twice as many women as men. Because the risk for AD increases with age, part of this sex difference is accounted for by the fact that women live longer than men. However, additional factors probably contribute to the increased risk for AD in women, including sex differences in brain size, structure, and functional organization. There is emerging evidence for sex-specific differences in gene expression, not only for genes on the X and Y chromosomes but also for some autosomal genes. Estrogens have pleiotropic genomic and nongenomic effects on the central nervous system, including neurotrophic actions in key areas involved in cognition and memory. Women with AD have lower endogenous estrogen levels than do women without AD. These observations have led to the hypothesis that estrogen is neuroprotective.

[Table 6e-1: Deaths and Percentage of Total Deaths for the Leading Causes of Death by Sex in the United States in 2010. Note: Category titles beginning with “other” or “all other” are not ranked when determining the leading causes of death. Source: Data from Centers for Disease Control and Prevention: National Vital Statistics Reports, Vol. 61, No. 4, May 8, 2013, Table 12, http://www.cdc.gov/nchs/data/nvsr/nvsr61/nvsr61_04.pdf.]

[Figure 6e-1: Death rates per 100,000 population for 2007 by 5-year age groups in U.S. women. Note that the scale of the y axis is increased in the graph on the right compared with that on the left. Accidents and HIV/AIDS are the leading causes of death in young women 20–34 years of age. Accidents, breast cancer, and ischemic heart disease (IHD) are the leading causes of death in women 35–49 years of age. IHD becomes the leading cause of death in women beginning at age 50 years. In older women, IHD remains the leading cause of death, cerebrovascular disease becomes the second leading cause of death, and lung cancer is the leading cause of cancer-related deaths. At age 85 years and beyond, Alzheimer’s disease (AD) becomes the third leading cause of death. Ca, cancer; CLRD, chronic lower respiratory disease; DM, diabetes mellitus. (Data adapted from Centers for Disease Control and Prevention, http://www.cdc.gov/nchs/data/dvs/MortFinal2007_WorkTable210R.pdf.)]

Some studies have suggested that estrogen administration improves cognitive function in nondemented postmenopausal women as well as in women with AD, and several observational studies have suggested that postmenopausal hormone therapy (HT) may decrease the risk of AD. However, placebo-controlled trials of HT have found no improvement in disease progression or cognitive function in women with AD. 
Further, the Women’s Health Initiative Memory Study (WHIMS), an ancillary study in the Women’s Health Initiative (WHI), found no benefit compared with placebo of estrogen alone [conjugated equine estrogen (CEE), 0.625 mg daily] or estrogen with progestin [CEE, 0.625 mg daily, and medroxyprogesterone acetate (MPA), 2.5 mg daily] on cognitive function or the development of dementia in women ≥65 years. Indeed, there was a significantly increased risk for both dementia and mild cognitive impairment in women receiving HT. However, preliminary findings from the Kronos Early Estrogen Prevention Study (KEEPS), a randomized clinical trial of early initiation of HT after menopause that compared CEE 0.45 mg daily, 50 μg of weekly transdermal estradiol (both estrogen arms included cyclic oral micronized progesterone 200 mg daily for 12 days each month), or placebo, found no adverse effects on cognitive function.

(See also Chap. 293) There are major sex differences in CVD, the leading cause of death in men and women in developed countries. A greater number of U.S. women than men die annually of CVD and stroke. Deaths from CVD have decreased markedly in men since 1980, whereas CVD deaths only began to decrease substantially in women beginning in 2000. However, in middle-aged women, the prevalence rates of both coronary heart disease (CHD) and stroke have increased in the 1999–2004 National Health and Nutrition Examination Survey (NHANES) compared to the 1988–1994 NHANES, whereas prevalence rates have decreased or remained unchanged, respectively, in men. These increases were paralleled by an increasing prevalence of abdominal obesity and other components of metabolic syndrome in women. Sex steroids have major effects on the cardiovascular system and lipid metabolism. Estrogen increases high-density lipoprotein (HDL) and lowers low-density lipoprotein (LDL), whereas androgens have the opposite effect. Estrogen has direct vasodilatory effects on the vascular endothelium, enhances insulin sensitivity, and has antioxidant and anti-inflammatory properties. There is a striking increase in CHD after both natural and surgical menopause, suggesting that endogenous estrogens are cardioprotective. Women also have longer QT intervals on electrocardiograms, and this increases their susceptibility to certain arrhythmias.

[Figure 6e-2: Changes in perceived leading causes of death among women surveyed in 1997 compared with those surveyed in 2012. In 1997, cancer was cited as the leading cause of death in women, not heart disease. In 2012, this trend had reversed. The rate of awareness that heart disease is the leading cause of death in women was significantly higher in 2012 (56% vs 30%, p <.001) than in 1997. (Data adapted from L Mosca et al: Circulation 127:1254, 2013.)]

CHD presents differently in women, who are usually 10–15 years older than their male counterparts and are more likely to have comorbidities such as hypertension, congestive heart failure, and diabetes mellitus (DM). In the Framingham study, angina was the most common initial symptom of CHD in women, whereas myocardial infarction (MI) was the most common initial presentation in men. Women more often have atypical symptoms such as nausea, vomiting, indigestion, and upper back pain. Although awareness that heart disease is the leading cause of death in women has nearly doubled over the last 15 years, women remain less aware that its symptoms are often atypical, and they are less likely to contact 9-1-1 when they experience such symptoms. Women with MI are more likely to present with cardiac arrest or cardiogenic shock, whereas men are more likely to present with ventricular tachycardia. Further, younger women with MI are more likely to die than are men of similar age. However, this mortality gap has decreased substantially in recent years because younger women have experienced greater improvements in survival after MI than men (Fig. 6e-3). 
The improvement in survival is due largely to a reduction in comorbidities, suggesting a greater attention to modifiable risk factors in women. Nevertheless, physicians are less likely to suspect heart disease in women with chest pain and less likely to perform diagnostic and therapeutic cardiac procedures in women. Women are less likely to receive therapies such as angioplasty, thrombolytic therapy, coronary artery bypass grafts (CABGs), beta blockers, and aspirin. There are also sex differences in outcomes when women with CHD do receive therapeutic interventions. Women undergoing CABG surgery have more advanced disease, a higher perioperative mortality rate, less relief of angina, and less graft patency; however, 5- and 10-year survival rates are similar. Women undergoing percutaneous transluminal coronary angioplasty have lower rates of initial angiographic and clinical success than men, but they also have a lower rate of restenosis and a better long-term outcome. Women may benefit less and have more frequent serious bleeding complications from thrombolytic therapy compared with men. Factors such as older age, more comorbid conditions, smaller body size, and more severe CHD in women at the time of events or procedures account in part for the observed sex differences.

[Figure 6e-3: Hospital mortality rates in men and women for acute myocardial infarction (MI) in 1994–1995 compared with 2004–2006. Women younger than age 65 years had substantially greater mortality than men of similar age in 1994–1995. Mortality rates declined markedly for both sexes across all age groups in 2004–2006 compared with 1994–1995. However, there was a more striking decrease in mortality in women younger than age 75 years compared with men of similar age. The mortality rate reduction was largest in women less than age 55 years (52.9%) and lowest in men of similar age (33.3%). (Data adapted from V Vaccarino et al: Arch Intern Med 169:1767, 2009.)]
Elevated cholesterol levels, hypertension, smoking, obesity, low HDL cholesterol levels, DM, and lack of physical activity are important risk factors for CHD in both men and women. Total triglyceride levels are an independent risk factor for CHD in women but not in men. Low HDL cholesterol and DM are more important risk factors for CHD in women than in men. Smoking is an important risk factor for CHD in women—it accelerates atherosclerosis, exerts direct negative effects on cardiac function, and is associated with an earlier age of menopause. Cholesterol-lowering drugs are equally effective in men and women for primary and secondary prevention of CHD. However, because of perceptions that women are at lower risk for CHD, they receive fewer interventions for modifiable risk factors than do men. In contrast to findings in men, randomized trials showed that aspirin was not effective for the primary prevention of CHD in women, although it did significantly reduce the risk of ischemic stroke. The sex differences in CHD prevalence, beneficial biologic effects of estrogen on the cardiovascular system, and reduced risk for CHD in observational studies led to the hypothesis that HT was cardioprotective. However, the WHI, which studied more than 16,000 women on CEE plus MPA or placebo and more than 10,000 women with hysterectomy on CEE alone or placebo, did not demonstrate a benefit of HT for the primary or secondary prevention of CHD. In addition, CEE plus MPA was associated with an increased risk for CHD, particularly in the first year of therapy, whereas CEE alone neither increased nor decreased CHD risk. Both CEE plus MPA and CEE alone were associated with an increased risk for ischemic stroke. In the WHI, there was a suggestion of a reduction in CHD risk in women who initiated HT closer to menopause. This finding suggests that the time at which HT is initiated is critical for cardioprotection. 
According to this “timing” hypothesis, HT has differential effects, depending on the stage of atherosclerosis; adverse effects are seen with advanced, unstable lesions. A recent study using data from the Danish Osteoporosis Prevention Study (DOPS), an open-label randomized trial of triphasic oral estradiol compared with no treatment in recently menopausal or perimenopausal women (a cyclic oral synthetic progestin, norethisterone acetate, was added in women who had a uterus), found significantly reduced mortality and CVD after 10 years of HT. However, DOPS was designed to investigate HT for the primary prevention of osteoporotic bone fractures, and CVD outcomes were not prespecified endpoints. Further, there were relatively few CVD events in the study groups. KEEPS was designed to directly test the “timing” hypothesis. Seven hundred twenty-seven recently menopausal women age 42–58 years (mean 52.7 years) were randomized to oral CEE (lower dose than WHI), transdermal estradiol, or placebo for 4 years; both estrogen arms included oral cyclical micronized progesterone (see above section on AD for dosing details). There were no significant beneficial or deleterious effects on the progression of atherosclerosis by computed tomography assessment of coronary artery calcification in either HT arm. Adverse events including stroke, MI, venous thromboembolism, and breast cancer were not increased in the HT arms compared with the placebo arm. There were improvements in hot flashes, night sweats, mood, sexual function, and bone density in the HT arms. This relatively small study does not suggest that early HT administration, transdermally or orally, reduces atherosclerosis. However, the study suggests that short-term HT may be safely administered for symptom relief in recently menopausal women. HT is discussed further in Chap. 413. (See also Chap. 417) Women are more sensitive to insulin than men are. Despite this, the prevalence of type 2 DM is similar in men and women. 
There is a sex difference in the relationship between endogenous androgen levels and DM risk. Higher bioavailable testosterone levels are associated with increased risk in women, whereas lower bioavailable testosterone levels are associated with increased risk in men. Polycystic ovary syndrome and gestational DM—common conditions in premenopausal women—are associated with a significantly increased risk for type 2 DM. Premenopausal women with DM lose the cardioprotective effect of female sex and have rates of CHD identical to those in males. These women have impaired endothelial function and reduced coronary vasodilatory responses, which may predispose to cardiovascular complications. Among individuals with DM, women have a greater risk for MI than do men. Women with DM are more likely to have left ventricular hypertrophy. Women with DM receive less aggressive treatment for modifiable CHD risk factors than men with DM. In the WHI, CEE plus MPA significantly reduced the incidence of DM, whereas with CEE alone, there was only a trend toward decreased DM incidence. (See also Chap. 298) After age 60, hypertension is more common in U.S. women than in men, largely because of the high prevalence of hypertension in older age groups and the longer survival of women. Isolated systolic hypertension is present in 30% of women >60 years old. Sex hormones affect blood pressure. Both normotensive and hypertensive women have higher blood pressure levels during the follicular phase than during the luteal phase. In the Nurses’ Health Study, the relative risk of hypertension was 1.8 in current users of oral contraceptives, but this risk is lower with the newer low-dose contraceptive preparations. HT is not associated with hypertension. Among secondary causes of hypertension, there is a female preponderance of renal artery fibromuscular dysplasia. The benefits of treatment for hypertension have been dramatic in both women and men. 
A meta-analysis of the effects of hypertension treatment, the Individual Data Analysis of Antihypertensive Intervention Trial, found a reduction of risk for stroke and for major cardiovascular events in women. The effectiveness of various antihypertensive drugs appears to be comparable in women and men; however, women may experience more side effects. For example, women are more likely to develop cough with angiotensin-converting enzyme inhibitors.

(See also Chap. 377e) Most autoimmune disorders occur more commonly in women than in men; they include autoimmune thyroid and liver diseases, lupus, rheumatoid arthritis (RA), scleroderma, multiple sclerosis (MS), and idiopathic thrombocytopenic purpura. However, there is no sex difference in the incidence of type 1 DM, and ankylosing spondylitis occurs more commonly in men. Women may be more resistant to bacterial infections than men. Sex differences in both immune responses and adverse reactions to vaccines have been reported. For example, there is a female preponderance of postvaccination arthritis. Adaptive immune responses are more robust in women than in men; this may be explained by the stimulatory actions of estrogens and the inhibitory actions of androgens on the cellular mediators of immunity. Consistent with an important role for sex hormones, there is variation in immune responses during the menstrual cycle, and the activity of certain autoimmune disorders is altered by castration or pregnancy (e.g., RA and MS may remit during pregnancy). Nevertheless, the majority of studies show that exogenous estrogens and progestins in the form of HT or oral contraceptives do not alter autoimmune disease incidence or activity. Exposure to fetal antigens, including circulating fetal cells that persist in certain tissues, has been speculated to increase the risk of autoimmune responses.
There is clearly an important genetic component to autoimmunity, as indicated by the familial clustering and HLA association of many such disorders. X chromosome genes also contribute to sex differences in immunity. Indeed, nonrandom X chromosome inactivation may be a risk factor for autoimmune diseases.

(See also Chap. 226) Women account for almost 50% of the 34 million persons infected with HIV-1 worldwide. AIDS is an important cause of death in younger women (Fig. 6e-1). Heterosexual contact with an at-risk partner is the fastest-growing transmission category, and women are more susceptible to HIV infection than are men. This increased susceptibility is accounted for in part by an increased prevalence of sexually transmitted diseases in women. Some studies have suggested that hormonal contraceptives may increase the risk of HIV transmission. Progesterone has been shown to increase susceptibility to infection in nonhuman primate models of HIV. Women are also more likely to be infected by multiple variants of the virus than are men. Women with HIV have more rapid decreases in their CD4 cell counts than do men. Compared with men, HIV-infected women more frequently develop candidiasis, but Kaposi’s sarcoma is less common than it is in men. Women have more adverse reactions, such as lipodystrophy, dyslipidemia, and rash, with antiretroviral therapy than do men. This observation is explained in part by sex differences in the pharmacokinetics of certain antiretroviral drugs, resulting in higher plasma concentrations in women.

(See also Chap. 416) The prevalence of both obesity (body mass index ≥30 kg/m2) and abdominal obesity (waist circumference ≥88 cm in women) is higher in U.S. women than in men. However, between 1999 and 2008, the prevalence of obesity increased significantly in men but not in women. The prevalence of abdominal obesity increased over this time period in both sexes. More than 80% of patients who undergo bariatric surgery are women.
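The obesity thresholds cited above (BMI ≥30 kg/m2; waist circumference ≥88 cm for abdominal obesity in women) can be sketched as a simple classification. This is an illustrative sketch, not a clinical tool; the ≥102-cm waist cutoff for men is the conventional companion threshold and is assumed here, as it is not stated in the text.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def classify(weight_kg: float, height_m: float, waist_cm: float,
             female: bool) -> dict:
    """Flag obesity (BMI >= 30 kg/m^2) and abdominal obesity.

    The 88-cm waist cutoff for women is from the text; the 102-cm
    cutoff for men is a conventional value assumed here.
    """
    b = bmi(weight_kg, height_m)
    waist_cutoff = 88.0 if female else 102.0
    return {
        "bmi": round(b, 1),
        "obese": b >= 30.0,
        "abdominal_obesity": waist_cm >= waist_cutoff,
    }

# Example: a 1.62-m woman weighing 82 kg with a 90-cm waist
# meets both the BMI and waist-circumference criteria.
print(classify(82, 1.62, 90, female=True))
```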
Pregnancy and menopause are risk factors for obesity. There are major sex differences in body fat distribution. Women characteristically have a gluteal and femoral, or gynoid, pattern of fat distribution, whereas men typically have a central or android pattern. Women have more subcutaneous fat than men. In women, endogenous androgen levels are positively associated with abdominal obesity, and androgen administration increases visceral fat. In contrast, there is an inverse relationship between endogenous androgen levels and abdominal obesity in men, and androgen administration decreases visceral fat in obese men. The reasons for these sex differences in the relationship between visceral fat and androgens are unknown. Studies in humans also suggest that sex steroids play a role in modulating food intake and energy expenditure. In men and women, abdominal obesity characterized by increased visceral fat is associated with an increased risk for CVD and DM. Obesity increases a woman’s risk for certain cancers, in particular postmenopausal breast and endometrial cancer, in part because adipose tissue provides an extragonadal source of estrogen through aromatization of circulating adrenal and ovarian androgens, especially the conversion of androstenedione to estrone. Obesity increases the risk of infertility, miscarriage, and complications of pregnancy.

(See also Chap. 425) Osteoporosis is about five times more common in postmenopausal women than in age-matched men, and osteoporotic hip fractures are a major cause of morbidity in elderly women. Men accumulate more bone mass and lose bone more slowly than do women. Sex differences in bone mass are found as early as infancy. Calcium intake, vitamin D, and estrogen all play important roles in bone formation and bone loss. Particularly during adolescence, calcium intake is an important determinant of peak bone mass.
Vitamin D deficiency is surprisingly common in elderly women, occurring in >40% of women living in northern latitudes. Receptors for estrogens and androgens have been identified in bone. Estrogen deficiency is associated with increased osteoclast activity and a decreased number of bone-forming units, leading to net bone loss. The aromatase enzyme, which converts androgens to estrogens, is also present in bone. Estrogen is an important determinant of bone mass in men (derived from the aromatization of androgens) as well as in women.

On average, women have lower body weights, smaller organs, a higher percentage of body fat, and lower total-body water than men. There are also important sex differences in drug action and metabolism that are not accounted for by these differences in body size and composition. Sex steroids alter the binding and metabolism of a number of drugs. Further, menstrual cycle phase and pregnancy can alter drug action. Two-thirds of cases of drug-induced torsades de pointes, a rare, life-threatening ventricular arrhythmia, occur in women because they have a longer, more vulnerable QT interval. The drugs implicated, which include certain antihistamines, antibiotics, antiarrhythmics, and antipsychotics, prolong cardiac repolarization by blocking cardiac voltage-gated potassium channels. Women require lower doses of neuroleptics to control schizophrenia. Women awaken from anesthesia faster than do men given the same doses of anesthetics. Women also take more medications than men, including over-the-counter formulations and supplements. The greater use of medications combined with these biologic differences may account for the reported higher frequency of adverse drug reactions in women than in men.

(See also Chap. 466) Depression, anxiety, and affective and eating disorders (bulimia and anorexia nervosa) are more common in women than in men.
Epidemiologic studies from both developed and developing nations consistently find major depression to be twice as common in women as in men, with the sex difference becoming evident in early adolescence. Depression occurs in 10% of women during pregnancy and in 10–15% of women during the postpartum period. There is a high likelihood of recurrence of postpartum depression with subsequent pregnancies. The incidence of major depression diminishes after age 45 years and does not increase with the onset of menopause. Depression in women appears to have a worse prognosis than does depression in men; episodes last longer, and there is a lower rate of spontaneous remission. Schizophrenia and bipolar disorders occur at equal rates in men and women, although there may be sex differences in symptoms. Both biologic and social factors account for the greater prevalence of depressive disorders in women. Men have higher levels of the neurotransmitter serotonin. Sex steroids also affect mood, and fluctuations during the menstrual cycle have been linked to symptoms of premenstrual syndrome. Sex hormones differentially affect the hypothalamic-pituitary-adrenal responses to stress. Testosterone appears to blunt cortisol responses to corticotropin-releasing hormone. Both low and high levels of estrogen can activate the hypothalamic-pituitary-adrenal axis.

(See also Chap. 38) There are striking sex differences in sleep and its disorders. During sleep, women have an increased amount of slow-wave activity, differences in timing of delta activity, and an increase in the number of sleep spindles. Testosterone modulates neural control of breathing and upper airway mechanics. Men have a higher prevalence of sleep apnea. Testosterone administration to hypogonadal men as well as to women increases apneic episodes during sleep.
Women with the hyperandrogenic disorder polycystic ovary syndrome have an increased prevalence of obstructive sleep apnea, and apneic episodes are positively correlated with their circulating testosterone levels. In contrast, progesterone accelerates breathing, and in the past, progestins were used for treatment of sleep apnea.

(See also Chaps. 467 and 470) Substance abuse is more common in men than in women. However, one-third of Americans who suffer from alcoholism are women. Women alcoholics are less likely to be diagnosed than men. A greater proportion of men than women seek help for alcohol and drug abuse. Men are more likely to go to an alcohol or drug treatment facility, whereas women tend to approach a primary care physician or mental health professional for help under the guise of a psychosocial problem. Late-life alcoholism is more common in women than in men. On average, alcoholic women drink less than alcoholic men but exhibit the same degree of impairment. Blood alcohol levels are higher in women than in men after drinking equivalent amounts of alcohol, adjusted for body weight. This greater bioavailability of alcohol in women is due to both a smaller volume of distribution and slower gastric metabolism of alcohol secondary to lower activity of gastric alcohol dehydrogenase than is the case in men. In addition, alcoholic women are more likely to abuse tranquilizers, sedatives, and amphetamines. Women alcoholics have a higher mortality rate than do nonalcoholic women and alcoholic men. Women also appear to develop alcoholic liver disease and other alcohol-related diseases with shorter drinking histories and lower levels of alcohol consumption. Alcohol abuse also poses special risks to a woman, adversely affecting fertility and the health of the baby (fetal alcohol syndrome). Even moderate alcohol use increases the risk of breast cancer, hypertension, and stroke in women.
More men than women smoke tobacco, but this sex difference continues to decrease. Women have a much larger burden of smoking-related disease. Smoking markedly increases the risk of CVD in premenopausal women and is also associated with a decrease in the age of menopause. Women who smoke are more likely to develop chronic obstructive pulmonary disease and lung cancer than men and at lower levels of tobacco exposure. Postmenopausal women who smoke have lower bone density than women who never smoked. Smoking during pregnancy increases the risk of preterm deliveries and low birth weight infants.

More than one in three women in the United States have experienced rape, physical violence, and/or stalking by an intimate partner. Adult women are much more likely to be raped by a spouse, ex-spouse, or acquaintance than by a stranger. Domestic or intimate partner violence is a leading cause of death among young women. Domestic violence may be an unrecognized feature of certain clinical presentations, such as chronic abdominal pain, headaches, and eating disorders, in addition to more obvious manifestations such as trauma. Intimate partner violence is an important risk factor for depression, substance abuse, and suicide in women. Screening instruments can accurately identify women experiencing intimate partner violence. Such screening by health care providers is acceptable to women in settings ensuring adequate privacy and safety.

Women’s health is now a mature discipline, and the importance of sex differences in biologic processes is well recognized. There has been a striking reduction in the excess mortality rate from MI in younger women. Nevertheless, ongoing misperceptions about disease risk, not only among women but also among their physicians, result in inadequate attention to modifiable risk factors. Research into the fundamental mechanisms of sex differences will provide important biologic insights.
Further, those insights will have an impact on both women’s and men’s health.

Chapter 7e Men’s Health
Shalender Bhasin, Shehzad Basaria

Although menopause in women has been the subject of intense investigation for more than five decades, the issues that are specific to men’s health are just beginning to gain the attention that they deserve because of their high prevalence and impact on overall health, well-being, and quality of life. The emergence of men’s health as a distinct discipline within internal medicine is founded on the evidence that men and women differ across their life span in their susceptibility to disease, in the clinical manifestations of disease, and in their response to treatment. Furthermore, men and women weigh the health consequences of illness differently and have different motivations for seeking care. Men and women experience different types of disparities in access to health care services and in the manner in which health care is delivered to them because of a complex array of socioeconomic and cultural factors. Attitudinal and institutional barriers to accessing care, fear and embarrassment due to the perception by some that it is not manly to seek medical help, and reticence on the part of patients and physicians to discuss issues related to sexuality, drug use, and aging have heightened the need for programs tailored to address the specific health needs of men. Sex differences in disease prevalence, susceptibility, and clinical manifestations of disease were discussed in Chap. 6e (“Women’s Health”). It is notable that the two leading causes of death in both men and women—heart disease and cancer—are the same. However, men have a higher prevalence of neurodevelopmental and degenerative disorders; substance abuse disorders, including the use of performance-enhancing drugs and alcohol dependence; diabetes; and cardiovascular disease, whereas women have a higher prevalence of autoimmune disorders, depression, rheumatologic disorders, and osteoporosis. Men are substantially more likely to die from accidents, suicides, and homicides than women. Among men 15–34 years of age, unintentional injuries, homicides, and suicides account for over three-fourths of all deaths. Among men 35–64 years of age, heart disease, cancer, and unintentional injuries are the leading causes of death. Among men 65 years of age or older, heart disease, cancer, lower respiratory tract infections, and stroke are the major causes of death.

The biologic bases of sex differences in disease susceptibility, progression, and manifestation remain incompletely understood and are likely multifactorial. Undoubtedly, sex-specific differences in the genetic architecture and circulating sex hormones influence disease phenotype; additionally, epigenetic effects of sex hormones during fetal life, early childhood, and pubertal development may imprint sexual and nonsexual behaviors, body composition, and disease susceptibility. Reproductive load and physiologic changes during pregnancy, including profound hormonal and metabolic shifts and microchimerism (transfer of cells from the mother to the fetus and from the fetus to the mother), may affect disease susceptibility and disease severity in women. Sociocultural norms of child-rearing practices, societal expectations of gender roles, and the long-term economic impact of these practices and gender roles also may affect disease risk and its clinical manifestation. The trajectories of age-related changes in sex hormones during the reproductive and postreproductive years vary substantially between men and women and may influence the sex differences in the temporal evolution of age-related conditions such as osteoporosis, breast cancer, and autoimmune disease.

In a reflection of the growing attention to issues related to men’s health, health clinics focused on the health problems of men are being established with increasing frequency. Although the major threats to men’s health have not changed—heart disease, cancer, and unintentional injury continue to dominate the list of major medical causes of morbidity and mortality in men—the men who attend men’s health clinics do so largely for sexual, reproductive, and urologic health concerns involving common conditions such as androgen deficiency syndromes, age-related decline in testosterone levels, sexual dysfunction, muscle dysmorphia and anabolic-androgenic steroid use, lower urinary tract symptoms, and medical complications of prostate cancer therapy, which are the focus of this chapter. Additionally, new categories of body image disorders that had not been recognized until the 1980s have emerged in men, such as body dysmorphia syndrome and the use of performance-enhancing drugs to increase muscularity and lean appearance.

(See Chap. 411) A number of studies have established that testosterone concentrations decrease with advancing age. This age-related decline starts in the third decade of life and progresses thereafter (Fig. 7e-1). Low total and bioavailable testosterone concentrations are associated with decreased skeletal muscle mass and strength, higher visceral fat mass, insulin resistance, and increased risk of coronary artery disease and mortality (Table 7e-1). Most studies suggest that these symptoms and signs develop with total testosterone levels below 320 ng/dL and free testosterone levels below 64 pg/mL in older men. Testing for low testosterone in older men should be limited to those with symptoms or signs attributable to androgen deficiency. Testosterone therapy of healthy older men with low testosterone increases lean body mass, grip strength, and self-reported physical function (Fig. 7e-2). Testosterone therapy also increases vertebral but not femoral bone mineral density. In men with sexual dysfunction and low testosterone levels, testosterone therapy improves libido, but effects on erectile function and response to selective phosphodiesterase inhibitors are variable (Chap. 67). As discussed in Chap. 411, there is concern that testosterone therapy may stimulate the growth of prostate cancers.

FIGURE 7e-1 Age-related decline in total testosterone levels. Total testosterone levels measured using liquid chromatography tandem mass spectrometry in men of the Framingham Heart Study (FHS), the European Male Aging Study (EMAS), and the Osteoporotic Fractures in Men Study (MrOS). (Reproduced with permission from S Bhasin et al: J Clin Endocrinol Metab 96:2430, 2011.)

TABLE 7e-1 Association of Testosterone Levels with Outcomes in Older Men

Sexual Dysfunction (See Chap. 67) Various forms of sexual dysfunction are a major motivating factor for men seeking care at men’s health clinics. The landmark descriptions of the human sexual response cycle by Masters and Johnson, demonstrating that men and women display predictable physiologic responses after sexual stimulation, provided the basis for a rational classification of human sexual disorders. Accordingly, sexual disorders have been classified into four categories depending on the phase of the sexual response cycle in which the abnormality exists: 1. disorders of desire; 2. disorders of arousal; 3. disorders of orgasm; 4. disorders of pain. Classification of the patient’s disorder into these categories is important because the etiologic factors, diagnostic tests, and therapeutic strategies vary for each class of sexual disorder.
Historically, the classification and nomenclature for sexual disorders used criteria identified in the Diagnostic and Statistical Manual of Mental Disorders (DSM), based on the erroneous belief that sexual disorders in men are largely psychogenic in origin. However, the recognition of erectile dysfunction as a manifestation of systemic disease and the availability of easy-to-use oral selective phosphodiesterase-5 inhibitors have placed sexual disorders in men within the purview of the primary care provider.

MUSCLE DYSMORPHIA SYNDROME IN MEN: A BODY IMAGE DISORDER
Muscle dysmorphia is a form of body image disorder characterized by a pathologic preoccupation with muscularity and leanness. Men with muscle dysmorphia express a strong desire to be more muscular and lean. These men describe shame and embarrassment about their body size and shape and often report adverse symptoms such as dissatisfaction with appearance, preoccupation with bodybuilding and muscularity, and functional impairment. Patients with muscle dysmorphia also report higher rates of mood and anxiety disorders, as well as obsessive and compulsive behaviors. These men often experience impairment of social and occupational functioning. Patients with muscle dysmorphia syndrome—nearly all men—are almost always engaged in weightlifting and bodybuilding and are more likely to use performance-enhancing drugs, especially anabolic-androgenic steroids. Muscle dysmorphia predisposes men to an increased risk of disease due to the combined interactive effects of the intensity of physical exercise, the use of performance-enhancing drugs, and other lifestyle factors associated with weightlifting and the use of performance-enhancing drugs. No randomized trials of any treatment modalities have been conducted; anecdotally, behavioral and cognitive therapies have been tried with varying degrees of success.
Anabolic-Androgenic Steroid Abuse by Athletes and Recreational Body-Builders The illicit use of anabolic-androgenic steroids (AAS) to enhance athletic performance first surfaced in the 1950s among powerlifters and spread rapidly to other sports and to professional as well as high school athletes and recreational bodybuilders. In the early 1980s, the use of AAS spread beyond the athletic community into the general population. As many as 3 million Americans, most of them men, have likely used these compounds. Most AAS users are not athletes, but rather recreational weightlifters who use these drugs to look lean and more muscular. The most commonly used AAS include testosterone esters, nandrolone, stanozolol, methandienone, and methenolone. AAS users generally use increasing doses of multiple steroids in a practice known as stacking. The adverse effects of long-term AAS abuse remain poorly understood.

FIGURE 7e-2 The effects of testosterone therapy on body composition, muscle strength, bone mineral density, and sexual function in intervention trials. The point estimates and the associated 95% confidence intervals are shown. A. The effects of testosterone therapy on lean body mass, grip strength, and fat mass in a meta-analysis of randomized trials. (Data derived from S Bhasin et al: Nat Clin Pract Endocrinol Metab 2:146, 2006.) B. The effects of testosterone therapy on bone mineral density in a meta-analysis of randomized trials. (Data derived from a meta-analysis by MJ Tracz et al: J Clin Endocrinol Metab 91:2011, 2006.) C. The effects of testosterone therapy on measures of sexual function in men with baseline testosterone less than 10 nmol/L (290 ng/dL). (Data derived from a meta-analysis by AM Isidori et al: Clin Endocrinol (Oxf) 63:381, 2005.) (Reproduced with permission from M Spitzer et al: Nat Rev Endocrinol 9:414, 2013.)
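The parenthetical conversion quoted above (10 nmol/L ≈ 290 ng/dL) follows from testosterone's molar mass of about 288.4 g/mol. A minimal sketch of the unit conversion, assuming that molar mass:

```python
T_MOLAR_MASS = 288.4  # g/mol, testosterone (C19H28O2)

def nmol_per_l_to_ng_per_dl(c_nmol_l: float) -> float:
    """Convert a testosterone concentration from nmol/L to ng/dL."""
    # nmol/L * (ng/nmol, numerically the molar mass) gives ng/L;
    # dividing by 10 converts ng/L to ng/dL (1 L = 10 dL).
    return c_nmol_l * T_MOLAR_MASS / 10.0

print(nmol_per_l_to_ng_per_dl(10.0))  # 288.4, i.e., roughly the 290 ng/dL quoted
```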
Most of the information about the adverse effects of AAS has emerged from case reports, uncontrolled studies, or clinical trials that used replacement doses of testosterone (Table 7e-2). Of note, AAS users may administer 10–100 times the replacement doses of testosterone over many years, making it unjustifiable to extrapolate from trials using replacement doses. A substantial fraction of AAS users also use other drugs that are perceived to be muscle-building or performance-enhancing, such as growth hormone; erythropoiesis-stimulating agents; insulin; stimulants such as amphetamine, clenbuterol, cocaine, ephedrine, and thyroxine; and drugs perceived to reduce adverse effects, such as human chorionic gonadotropin, aromatase inhibitors, or estrogen antagonists. The men who abuse AAS are more likely to engage in other high-risk behaviors than nonusers. The adverse events associated with AAS use may be due to the AAS themselves, concomitant use of other drugs, high-risk behaviors, and host characteristics that may render these individuals more susceptible to AAS use or to other high-risk behaviors.

(Table 7e-2. Abbreviation: HPT axis, hypothalamic-pituitary-testicular axis. Source: Modified with permission from HG Pope Jr et al: Adverse health consequences of performance-enhancing drugs: an Endocrine Society scientific statement. Endocr Rev 35:341, 2014.)

The high rates of mortality and morbidity observed in AAS users are alarming. The risk of death among elite powerlifters has been reported to be fivefold greater than in age-matched men from the general population. The causes of death among powerlifters included suicides, myocardial infarction, hepatic coma, and non-Hodgkin’s lymphoma. Numerous reports of cardiac death among young AAS users raise concerns about the adverse cardiovascular effects of AAS.
High doses of AAS may induce proatherogenic dyslipidemia, increase thrombosis risk via effects on clotting factors and platelets, induce vasospasm through their effects on vascular nitric oxide, and induce myocardial hypertrophy and fibrosis. Replacement doses of testosterone, when administered parenterally, are associated with only a small decrease in high-density lipoprotein (HDL) cholesterol and little or no effect on total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglyceride levels. In contrast, supraphysiologic doses of testosterone and orally administered, 17-α-alkylated, nonaromatizable AAS are associated with marked reductions in HDL cholesterol and increases in LDL cholesterol. Long-term AAS use may be associated with myocardial hypertrophy and fibrosis as well as shortening of QT intervals. AAS use suppresses LH and FSH secretion and inhibits endogenous testosterone production and spermatogenesis. Consequently, stopping AAS may be associated with sexual dysfunction, fatigue, infertility, and depressive symptoms. In some AAS users, hypothalamic-pituitary-testicular axis suppression may last more than a year, and in a few individuals, complete recovery may not occur. The symptoms of androgen deficiency during AAS withdrawal may cause some men to revert to using AAS, leading to continued use and AAS dependence. As many as 30% of AAS users develop a syndrome of AAS dependence, characterized by long-term AAS use despite adverse medical and psychiatric effects. Supraphysiologic doses of testosterone may also impair insulin sensitivity, predisposing to diabetes. Elevated liver enzymes, cholestatic jaundice, hepatic neoplasms, and peliosis hepatis have been reported with oral 17-α-alkylated AAS. AAS use may cause muscle hypertrophy without compensatory adaptations in tendons, ligaments, and joints, thus increasing the risk of tendon and joint injuries. AAS use is associated with acne, baldness, and increased body hair.
Unsafe injection practices, high-risk behaviors, and increased rates of incarceration put AAS users at increased risk of HIV and hepatitis B and C. In one survey, nearly 1 in 10 gay men had injected AAS or other substances, and AAS users were more likely to report high-risk unprotected anal sex than other men. Some AAS users develop hypomanic and manic symptoms during AAS exposure (irritability, aggressiveness, reckless behavior, and occasional psychotic symptoms, sometimes associated with violence) and major depression (sometimes associated with suicidality) during AAS withdrawal. Users may also develop other forms of illicit drug use, which may be potentiated or exacerbated by AAS.

APPROACH TO THE PATIENT:
AAS users generally mistrust physicians and seek medical help infrequently; when they do seek medical help, it is often for the treatment of AAS withdrawal syndrome, infertility, gynecomastia, or other medical or psychiatric complications of AAS use. The suspicion of AAS use should be raised by increased hemoglobin and hematocrit levels; suppressed luteinizing hormone (LH), follicle-stimulating hormone (FSH), and testosterone levels; low HDL cholesterol; and low testicular volume and sperm density in a person who looks highly muscular (Table 7e-3). A combination of these findings and a self-report of AAS use by the patient, which usually can be elicited by a tactful interview, is often sufficient to establish the diagnosis in clinical practice. Accredited laboratories use gas chromatography and mass spectrometry or liquid chromatography and mass spectrometry to detect AAS abuse. In recent years, the availability of high-resolution mass spectrometry and tandem mass spectrometry has further improved the sensitivity of detecting AAS abuse.
Illicit testosterone use is most often detected by the urinary testosterone-to-epitestosterone ratio and further confirmed by measurement of the 13C:12C ratio in testosterone using isotope ratio combustion mass spectrometry. Exogenous testosterone administration increases urinary testosterone glucuronide excretion and, consequently, the testosterone-to-epitestosterone ratio. Ratios above 4 suggest exogenous testosterone use but can also reflect genetic variation. Genetic variations in uridine diphosphoglucuronyl transferase 2B17 (UGT2B17), the major enzyme for testosterone glucuronidation, affect the testosterone-to-epitestosterone ratio. Synthetic testosterone has a lower 13C:12C ratio than endogenously produced testosterone, and these differences can be detected by isotope ratio combustion mass spectrometry.

TABLE 7e-3 Detection of the Use of Anabolic-Androgenic Steroids
Clinical indicators that should raise suspicion of anabolic-androgenic steroid use
Detection of anabolic-androgenic steroids: LC-MS/MS analysis of urine
Detection of exogenous testosterone use: isotope ratio mass spectrometry analysis to detect differences in the 13C:12C ratio in exogenous and endogenous testosterone
Abbreviations: FSH, follicle-stimulating hormone; LC-MS/MS, liquid chromatography and tandem mass spectrometry; LH, luteinizing hormone.

The nonathlete weightlifters who abuse AAS rarely seek medical treatment and do not typically view these drugs and the associated lifestyle as deleterious to their health. In turn, many internists erroneously view AAS abuse as largely a problem of cheating in competitive sports, whereas, in fact, most AAS users are not athletes. Also, physicians often have a poor understanding of the factors motivating the use of these performance-enhancing drugs, the long-term health effects of AAS, and the associated psychopathologies that may affect treatment choices.
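The screening logic described above can be sketched as a two-step check: a urinary testosterone-to-epitestosterone (T/E) ratio above 4 raises suspicion, but because genetic variation (e.g., in UGT2B17) can also elevate the ratio, a positive screen is referred for 13C:12C isotope ratio mass spectrometry (IRMS) confirmation. The function name and return strings below are illustrative, not part of any laboratory standard.

```python
def te_ratio_screen(testosterone_glucuronide: float,
                    epitestosterone: float,
                    threshold: float = 4.0) -> str:
    """Screen a urinary T/E ratio (concentrations in the same units).

    A ratio above the threshold suggests exogenous testosterone but is
    not conclusive on its own (genetic variation in UGT2B17 also shifts
    the ratio), so a positive screen is flagged for IRMS confirmation.
    """
    if epitestosterone <= 0:
        raise ValueError("epitestosterone concentration must be positive")
    ratio = testosterone_glucuronide / epitestosterone
    if ratio > threshold:
        return f"T/E = {ratio:.1f}: above {threshold:g}; confirm with IRMS"
    return f"T/E = {ratio:.1f}: within screening threshold"

print(te_ratio_screen(12.0, 2.0))  # ratio 6.0, flagged for IRMS confirmation
print(te_ratio_screen(4.0, 2.0))   # ratio 2.0, within threshold
```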
In addition to treating the underlying body dysmorphia disorder that motivates the use of these drugs, treatment should be directed at the symptoms or the condition for which the patient seeks therapy, such as infertility, sexual dysfunction, gynecomastia, or depressive symptoms. Accordingly, therapy may include some combination of cognitive and behavioral therapy for the muscle dysmorphia syndrome, antidepressant therapy for depression, selective phosphodiesterase-5 inhibitors for erectile dysfunction, selective estrogen receptor modulators or aromatase inhibitors to reactivate the hypothalamic-pituitary-testicular axis, or hCG to restore testosterone levels. Clomiphene citrate, a partial estrogen receptor agonist, administered in a dose of 25–50 mg on alternate days, can increase LH and FSH levels and restore testosterone levels in the vast majority of men with AAS withdrawal syndrome. However, the recovery of sexual function during clomiphene administration is variable despite improvements in testosterone levels. Anecdotally, other agents, such as the aromatase inhibitor anastrozole, have also been used. hCG, administered by intramuscular injections of 750–1500 IU three times each week, can raise testosterone levels into the normal range. Some patients may not respond to either clomiphene or hCG therapy, raising the possibility of irreversible long-term toxic effects of AAS on Leydig cell function.

Lower urinary tract symptoms (LUTS) in men include storage symptoms (urgency, daytime and nighttime frequency, and urgency incontinence), voiding disturbances (slow or intermittent stream, difficulty in initiating micturition, straining to void, pain or discomfort during the passage of urine, and terminal dribbling), and postmicturition symptoms (a sense of incomplete voiding after passing urine and postmicturition dribble).
The overactive bladder syndrome refers to urgency, with or without urgency incontinence, usually with urinary frequency and nocturia, and is often due to detrusor muscle overactivity. LUTS have historically been attributed to benign prostatic hyperplasia, although it has become apparent that the pathophysiologic mechanisms of LUTS are complex and multifactorial and may include structural or functional abnormalities of the bladder, bladder neck, prostate, distal sphincter mechanism, and urethra, as well as abnormalities in the neural control of the lower urinary tract. A presumptive diagnosis of benign prostatic hyperplasia should be made only in men with LUTS who have demonstrable evidence of prostate enlargement and obstruction based on the size of the prostate. Diuretics, antihistamines, antidepressants, and other medications that have anticholinergic properties can cause or exacerbate LUTS in older men. The intensity of LUTS tends to fluctuate over time. LUTS are highly prevalent in older men, affecting nearly 50% of men over the age of 65 and 70% of men over the age of 80. LUTS adversely affect quality of life because of their impact on sleep, ability to perform activities of daily living, and depressive symptoms. LUTS are often associated with erectile dysfunction.

APPROACH TO THE PATIENT:
Medical evaluation should include assessment of symptom severity using the International Prostate Symptom Score and, in some patients, a frequency-volume chart. The impact of LUTS on sleep, activities of daily living, and quality of life should be evaluated. Evaluation should also include review of medications that may contribute to LUTS; digital prostate examination; neurologic examination focused on the perineum and lower extremities; urinalysis; and measurement of fasting blood glucose, electrolytes, creatinine, and prostate-specific antigen (PSA). Urodynamic studies are not required in most patients but are recommended when invasive surgical therapies are being considered.
Men who have mild symptoms can be reassured and followed. Men with mild to moderate LUTS can be treated effectively using α-adrenergic antagonists, phosphodiesterase-5 (PDE5) inhibitors, steroid 5α-reductase inhibitors, or anticholinergic agents, alone or in combination. Selective α-adrenergic antagonists are typically the first line of therapy. In men with probable benign prostatic obstruction with gland enlargement and LUTS, therapy with a steroid 5α-reductase inhibitor, such as finasteride or dutasteride, for 1 or more years improves urinary symptoms and flow rate and reduces prostatic volume. Long-term treatment with 5α-reductase inhibitors can reduce progression to acute urinary retention and the need for prostate surgery. Combined administration of a steroid 5α-reductase inhibitor and an α1-adrenergic blocker can rapidly improve urinary symptoms and reduce the relative risk of acute urinary retention and surgery. PDE5 inhibitors, administered chronically alone or in combination with α-adrenergic blockers, are effective in improving LUTS and erectile dysfunction through their effects on nitric oxide–cyclic guanosine monophosphate (cGMP) signaling in the bladder, urethra, and prostate. PDE5 inhibitors do not improve urinary flow parameters. Anticholinergic drugs are used for the treatment of overactive bladder in men with prominent urgency symptoms and no evidence of elevated postvoid residual urine. Surgery is indicated when medical therapy fails or if symptoms progress despite medical therapy.

Prostate cancer is the most common malignancy in American men, accounting for 29% of all diagnosed cancers and approximately 13% of all cancer deaths; its incidence is on the rise, partly due to increased screening with PSA. In 2013, approximately 233,000 new cases of prostate cancer were diagnosed in the United States, and there were 29,480 deaths related to prostate cancer.
The majority of these men have low-grade, organ-confined prostate cancer and excellent prospects of long-term survival. Substantial improvement in survival among men with prostate cancer has focused attention on the high prevalence of sexual dysfunction, physical dysfunction, and low vitality, which are important contributors to poor quality of life among patients treated for prostate cancer. The pathophysiology of these symptoms after radical prostatectomy is multifactorial, but denervation and androgen deficiency are important contributors. Androgen deficiency is common in men with prostate cancer. Testosterone levels decline with age, and men with prostate cancer are at risk of having low testosterone levels simply by virtue of their age. However, total and free testosterone levels are even lower in men with prostate cancer who have undergone prostatectomy than in age-matched controls without cancer. Androgen deficiency in men with prostate cancer is associated with distressing symptoms such as fatigue, sexual dysfunction, hot flushes, mobility limitation, and decreased physical function. Even with a bilateral nerve-sparing procedure, more than 50% of men develop sexual dysfunction after surgery. Although there is some recovery of sexual function with the passage of time, 40–50% of men undergoing radical prostatectomy find their sexual performance to be problematic 18 months after surgery. Sexual performance problems are a source of psychosocial distress in men with localized prostate cancer. In addition to its causal contribution to distressing symptoms, androgen deficiency in men with prostate cancer increases the risk of bone fractures, diabetes, coronary heart disease, and frailty. Testosterone Therapy in Men with a History of Prostate Cancer A history of prostate cancer has historically been considered a contraindication to testosterone therapy.
This guidance is based on observations that testosterone promotes the growth of metastatic prostate cancer. Metastatic prostate cancer generally regresses after orchidectomy and androgen deprivation therapy. Androgen receptor signaling plays a central role in maintaining the growth of normal prostate and of prostate cancer. PSA levels are lower in hypogonadal men and increase after testosterone therapy. Prostate volume is lower in hypogonadal men and increases after testosterone therapy to levels seen in age-matched controls. However, the role of testosterone in prostate cancer is complex. Epidemiologic studies have not revealed a consistent relationship between serum testosterone and prostate cancer. In a landmark randomized trial, testosterone therapy of older men with low testosterone did not affect intraprostatic androgen levels or the expression of androgen-dependent prostatic genes. The suppression of circulating testosterone levels by a gonadotropin-releasing hormone (GnRH) antagonist also does not affect intraprostatic androgen concentrations. Open-label trials and retrospective analyses of testosterone therapy in men who have undergone radical prostatectomy and have undetectable postoperative PSA levels have found very low rates of PSA recurrence. Even in men with high-grade prostatic intraepithelial neoplasia (HGPIN)—a group at high risk of developing prostate cancer—testosterone therapy for 1 year did not increase PSA levels or rates of prostate cancer. After radical prostatectomy, in the absence of residual cancer, PSA becomes undetectable within a month. An undetectable PSA after radical prostatectomy is a good indicator of biochemical recurrence-free survival at 5 years.
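The low-risk profile discussed in this section (organ-confined pT2 disease, Gleason score ≤6, preoperative PSA <10 ng/mL, and a PSA kept below 0.1 ng/mL for more than 2 years after surgery) can be collected into a simple predicate. This is an illustrative sketch only (the function name and argument layout are invented), not a clinical decision tool:

```python
def low_recurrence_risk_profile(stage: str, gleason_score: int,
                                preop_psa_ng_ml: float,
                                current_psa_ng_ml: float,
                                years_psa_undetectable: float) -> bool:
    """True when all of the criteria from the text are met: organ-confined
    disease (pT2), Gleason score <= 6, preoperative PSA < 10 ng/mL, and an
    undetectable PSA (< 0.1 ng/mL) sustained for > 2 years after radical
    prostatectomy. Per the text, such men have a very low risk of disease
    recurrence and may be considered for testosterone therapy on an
    individualized basis, with PSA monitoring and urologic consultation."""
    return (stage == "pT2"
            and gleason_score <= 6
            and preop_psa_ng_ml < 10
            and current_psa_ng_ml < 0.1
            and years_psa_undetectable > 2)
```

Failing any single criterion removes a patient from the low-risk group; the conjunction makes that explicit.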
Therefore, men with organ-confined prostate cancer (pT2), Gleason score ≤6, and a preoperative PSA of <10 ng/mL, who have had undetectable PSA levels (<0.1 ng/mL) for >2 years after radical prostatectomy, have a very low risk of disease recurrence (<0.5% at 10 years) and may be considered for testosterone therapy on an individualized basis. If testosterone therapy is instituted, it should be accompanied by careful monitoring of PSA levels and done in consultation with a urologist. In patients with prostate cancer and distant metastases, androgen deprivation therapy (ADT) improves survival. In patients with locally advanced disease, ADT in combination with external-beam radiation or as an adjuvant therapy (after prostatectomy and pelvic lymphadenectomy) also has been shown to improve survival. However, ADT is increasingly being used as primary therapy in men with localized disease and in men with biochemical recurrence without clear evidence of a survival advantage. Because most men with prostate cancer die of conditions other than their primary malignancy, recognition and management of the adverse effects of ADT are paramount. Profound hypogonadism resulting from ADT is associated with sexual dysfunction, vasomotor symptoms, gynecomastia, decreased muscle mass and strength, frailty, increased fat mass, anemia, fatigue, bone loss, loss of body hair, depressive symptoms, and reduced quality of life. Diabetes and cardiovascular disease have recently been added to the list of these complications (Fig. 7e-3).

[FIGURE 7e-3 Adverse cardiometabolic and skeletal effects of androgen deprivation therapy (ADT) in men receiving ADT for prostate cancer. Administration of ADT has been associated with increased risk of thromboembolic events, fractures, and diabetes. Some, but not all, studies have reported increased risk of cardiovascular events in men receiving ADT. Relative risks shown: any fracture, 1.54; fracture requiring hospitalization, 1.66; diabetes, 1.44; myocardial infarction, 1.11; peripheral vascular disease, 1.16; coronary heart disease, 1.16; sudden death, 1.16. (Data on relative risk were derived from VB Shahinian et al: N Engl J Med 352:154, 2005; NL Keating et al: J Clin Oncol 24:4448, 2006; and JC Hu et al: Eur Urol 61:1119, 2012.)]

[TABLE 7e-4 (management of men receiving ADT):
1. Weigh the risks and benefits of ADT and whether intermittent ADT is a feasible and safe option.
2. Perform a baseline assessment including fasting glucose, plasma lipids, blood pressure, bone mineral density, and FRAX® score.
3. Optimize calcium and vitamin D intake, encourage structured physical activity and exercise, and consider pharmacologic therapy in men with a previous minimal trauma fracture and those with a 10-year risk of a major osteoporotic fracture >20%, unless contraindicated.
4. Monitor body weight, fasting glucose, plasma lipids, blood pressure, and bone mineral density, and encourage smoking cessation and physical activity.
5. In men who are receiving ADT and who experience bothersome hot flushes, as indicated by sleep disturbance or interference with work or activities of daily living, consider initial therapy with venlafaxine. If ineffective, add medroxyprogesterone acetate.
6. In men who experience painful breast enlargement, consider therapy with an estrogen receptor antagonist, such as tamoxifen.]

Treatment with GnRH agonists in men with prostate cancer is associated with rapid induction of insulin resistance, hyperinsulinemia, and a significant increase in the risk of incident diabetes. Metabolic syndrome is prevalent in over 50% of men undergoing long-term ADT. Some but not all studies have reported an increased risk of cardiovascular events, death due to cardiovascular events, and peripheral vascular disease in men undergoing ADT. Men receiving ADT are also at increased risk of thromboembolic events.
The rates of acute kidney injury are higher in men currently receiving ADT than in men not receiving ADT; the increased risk appears to be particularly associated with the use of combined regimens of a GnRH agonist plus an antiandrogen. ADT also is associated with substantially increased risk of osteoporosis and bone fractures.

APPROACH TO THE PATIENT:
The benefits of ADT in treating nonmetastatic prostate cancer should be carefully weighed against the risks of ADT-induced adverse events (Table 7e-4). If ADT is medically indicated, consider whether intermittent ADT is a feasible option. Men being considered for ADT should undergo assessment of cardiovascular, diabetes, and fracture risk; this assessment may include measurement of blood glucose, plasma lipids, and bone mineral density (BMD) by dual-energy x-ray absorptiometry. Institute measures to prevent bone loss, including physical activity, adequate calcium and vitamin D intake, and pharmacologic therapy in men with a previous minimal trauma fracture and those with a 10-year risk of a major osteoporotic fracture >20%, unless contraindicated. Men with prostate cancer who are receiving ADT should be monitored for weight gain and diabetes. Encourage lifestyle interventions, including physical activity and exercise, and attention to weight, blood pressure, lipid profile, blood glucose, and smoking cessation, to reduce the risk of cardiometabolic complications. In randomized trials, medroxyprogesterone, cyproterone acetate, and the serotonin-norepinephrine reuptake inhibitor venlafaxine have been shown to be more efficacious than placebo in alleviating hot flushes. The side effects of these medications, including increased appetite and weight gain with medroxyprogesterone, gynecomastia with estrogenic compounds, and dry mouth with venlafaxine, should be weighed against their relative efficacy.
Acupuncture, soy products, vitamin E, and herbal medicines have been used empirically for the treatment of vasomotor symptoms without clear evidence of efficacy. Gynecomastia can be prevented by local radiation therapy or the use of an antiestrogen or an aromatase inhibitor; these therapies are effective in alleviating pain and tenderness but are less effective in reducing established gynecomastia.

Chapter 8 Medical Disorders During Pregnancy
Robert L. Barbieri, John T. Repke

Each year, approximately 4 million births occur in the United States, and more than 130 million births occur worldwide. A significant proportion of births are complicated by medical disorders. In the past, many medical disorders were contraindications to pregnancy. Advances in obstetrics, neonatology, obstetric anesthesiology, and medicine have increased the expectation that pregnancy will result in a positive outcome for both mother and fetus despite most of these conditions. A successful pregnancy requires important physiologic adaptations, such as a marked increase in cardiac output. Medical problems that interfere with the physiologic adaptations of pregnancy increase the risk for poor pregnancy outcome; conversely, in some instances, pregnancy may adversely impact an underlying medical disorder.

(See also Chap. 298) In pregnancy, cardiac output increases by 40%, with most of the increase due to an increase in stroke volume. Heart rate increases by ~10 beats/min during the third trimester. In the second trimester, systemic vascular resistance decreases, and this decline is associated with a fall in blood pressure. During pregnancy, a blood pressure of 140/90 mmHg is considered to be abnormally elevated and is associated with an increase in perinatal morbidity and mortality.
In all pregnant women, blood pressure should be measured in the sitting position, because the lateral recumbent position may yield a blood pressure lower than that recorded in the sitting position. The diagnosis of hypertension requires the measurement of two elevated blood pressures at least 6 h apart. Hypertension during pregnancy is usually caused by preeclampsia, chronic hypertension, gestational hypertension, or renal disease. Approximately 5–7% of all pregnant women develop preeclampsia, the new onset of hypertension (blood pressure >140/90 mmHg) and proteinuria (either a 24-h urine protein >300 mg or a protein-to-creatinine ratio ≥0.3) after 20 weeks of gestation. Although the precise pathophysiology of preeclampsia remains unknown, recent studies show excessive placental production of antagonists to both vascular endothelial growth factor (VEGF) and transforming growth factor β (TGF-β). These antagonists to VEGF and TGF-β disrupt endothelial and renal glomerular function, resulting in edema, hypertension, and proteinuria. The renal histologic feature of preeclampsia is glomerular endotheliosis: glomerular endothelial cells are swollen and encroach on the vascular lumen. Preeclampsia is associated with abnormalities of cerebral circulatory autoregulation, which increase the risk of stroke even at mildly to moderately elevated blood pressures. Risk factors for the development of preeclampsia include nulliparity, diabetes mellitus, a history of renal disease or chronic hypertension, a prior history of preeclampsia, extremes of maternal age (>35 years or <15 years), obesity, antiphospholipid antibody syndrome, and multiple gestation. Low-dose aspirin (81 mg daily, initiated at the end of the first trimester) may reduce the risk of preeclampsia in pregnant women at high risk of developing the disease.
In December 2013, the American College of Obstetricians and Gynecologists issued a report summarizing the findings and recommendations of its Task Force on Hypertension in Pregnancy. With respect to preeclampsia, several pertinent revisions to the diagnostic criteria were made: proteinuria is no longer an absolute requirement for making the diagnosis; the terms mild and severe preeclampsia have been replaced, and the disease is now termed preeclampsia either with or without severe features; and fetal growth restriction has been removed as a defining criterion for severe preeclampsia. Preeclampsia with severe features is the presence of new-onset hypertension and proteinuria accompanied by end-organ damage. Features may include severe elevation of blood pressure (>160/110 mmHg), evidence of central nervous system (CNS) dysfunction (headaches, blurred vision, seizures, coma), renal dysfunction (oliguria or creatinine >1.5 mg/dL), pulmonary edema, hepatocellular injury (serum alanine aminotransferase level more than twofold the upper limit of normal), or hematologic dysfunction (platelet count <100,000/μL or disseminated intravascular coagulation [DIC]). The HELLP syndrome (hemolysis, elevated liver enzymes, low platelets) is a special subtype of preeclampsia with severe features and a major cause of morbidity and mortality in this disease. Platelet dysfunction and coagulation disorders further increase the risk of stroke. Preeclampsia resolves within a few weeks after delivery. For pregnant women with preeclampsia prior to 37 weeks of gestation, delivery reduces the mother's morbidity but exposes the fetus to the risk of premature birth. The management of preeclampsia is challenging because it requires the clinician to balance the health of the mother and fetus simultaneously.
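The blood-pressure and severity thresholds in this section can be summarized in a small classifier. This is a simplified, hypothetical sketch: actual diagnosis also requires onset after 20 weeks of gestation, two measurements at least 6 h apart, and clinical judgment, none of which is modeled here:

```python
def classify_preeclampsia(systolic: int, diastolic: int,
                          proteinuria: bool, severe_features: bool) -> str:
    """Apply the thresholds from the text: new-onset BP >140/90 mmHg with
    proteinuria defines preeclampsia (per the 2013 ACOG revision, other
    end-organ involvement can substitute for proteinuria); BP >160/110 mmHg
    or end-organ damage (CNS dysfunction, renal dysfunction, pulmonary
    edema, hepatocellular injury, hematologic dysfunction) marks
    'preeclampsia with severe features'."""
    hypertensive = systolic > 140 or diastolic > 90
    if not hypertensive:
        return "blood pressure below preeclampsia threshold"
    if not (proteinuria or severe_features):
        return "hypertension without preeclampsia-defining findings"
    if systolic > 160 or diastolic > 110 or severe_features:
        return "preeclampsia with severe features"
    return "preeclampsia without severe features"
```

For example, a blood pressure of 150/95 mmHg with proteinuria classifies as preeclampsia without severe features, while 170/100 mmHg with proteinuria adds severe features.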
In general, prior to term, women with preeclampsia without severe features may be managed conservatively with limited physical activity (although bed rest is not recommended), close monitoring of blood pressure and renal function, and careful fetal surveillance. For women with preeclampsia with severe features, delivery is recommended unless the patient is eligible for expectant management in a tertiary hospital setting. Expectant management of preeclampsia with severe features remote from term affords some benefits for the fetus but poses significant risks for the mother. The definitive treatment of preeclampsia is delivery of the fetus and placenta. For women with preeclampsia with severe features, aggressive management of blood pressures >160/110 mmHg reduces the risk of cerebrovascular accidents. IV labetalol or hydralazine is most commonly used to acutely manage severe hypertension in preeclampsia; labetalol is associated with fewer episodes of maternal hypotension. Oral nifedipine and labetalol are commonly used to manage hypertension in pregnancy. Elevated arterial pressure should be reduced slowly to avoid hypotension and a decrease in blood flow to the fetus. Angiotensin-converting enzyme (ACE) inhibitors and angiotensin-receptor blockers should be avoided in the second and third trimesters of pregnancy because of their adverse effects on fetal development. Magnesium sulfate is the preferred agent for the prevention and treatment of eclamptic seizures. Large, randomized clinical trials have demonstrated the superiority of magnesium sulfate over phenytoin and diazepam in reducing the risk of seizure and, possibly, the risk of maternal death. Magnesium may prevent seizures by interacting with N-methyl-D-aspartate (NMDA) receptors in the CNS.
Given the difficulty of predicting eclamptic seizures on the basis of disease severity, once the decision to proceed with delivery is made, most patients carrying a diagnosis of preeclampsia should be treated with magnesium sulfate. Women who have had preeclampsia appear to be at increased risk of cardiovascular and renal disease later in life. Pregnancy complicated by chronic essential hypertension is associated with intrauterine growth restriction and increased perinatal mortality. Pregnant women with chronic hypertension are at increased risk for superimposed preeclampsia and abruptio placentae. Women with chronic hypertension should have a thorough prepregnancy evaluation, both to identify remediable causes of hypertension and to ensure that the prescribed antihypertensive agents (e.g., ACE inhibitors, angiotensin-receptor blockers) are not associated with an adverse outcome of pregnancy. α-Methyldopa, labetalol, and nifedipine are the most commonly used medications for the treatment of chronic hypertension in pregnancy. The target blood pressure is in the range of 130–150 mmHg systolic and 80–100 mmHg diastolic. Should hypertension worsen during pregnancy, baseline evaluation of renal function (see below) is necessary to help differentiate the effects of chronic hypertension from those of superimposed preeclampsia. There are no convincing data that the treatment of mild chronic hypertension improves perinatal outcome. The development of elevated blood pressure during pregnancy or in the first 24 h post-partum in the absence of preexisting chronic hypertension or proteinuria is referred to as gestational hypertension. Mild gestational hypertension that does not progress to preeclampsia has not been associated with adverse pregnancy outcome or adverse long-term prognosis. (See also Chaps. 333 and 341) Normal pregnancy is characterized by an increase in glomerular filtration rate and creatinine clearance. 
This increase occurs secondary to a rise in renal plasma flow and increased glomerular filtration pressures. Patients with underlying renal disease and hypertension may expect a worsening of hypertension during pregnancy. If superimposed preeclampsia develops, the additional endothelial injury results in a capillary leak syndrome that may make management challenging. In general, patients with underlying renal disease and hypertension benefit from aggressive management of blood pressure. Preconception counseling is also essential for these patients so that accurate risk assessment and medication changes can occur prior to pregnancy. In general, a prepregnancy serum creatinine level <133 μmol/L (<1.5 mg/dL) is associated with a favorable prognosis. When renal disease worsens during pregnancy, close collaboration between the internist and the maternal-fetal medicine specialist is essential so that decisions regarding delivery can be weighed to balance the sequelae of prematurity for the neonate versus long-term sequelae for the mother with respect to future renal function. (See also Chaps. 283–286) Valvular heart disease is the most common cardiac problem complicating pregnancy. Mitral Stenosis This is the valvular disease most likely to cause death during pregnancy. The pregnancy-induced increase in blood volume, cardiac output, and tachycardia can increase the transmitral pressure gradient and cause pulmonary edema in women with mitral stenosis. Women with moderate to severe mitral stenosis who are planning pregnancy and have either symptomatic disease or pulmonary hypertension should undergo valvuloplasty prior to conception. Pregnancy associated with long-standing mitral stenosis may result in pulmonary hypertension. Sudden death has been reported when hypovolemia occurs. Careful control of heart rate, especially during labor and delivery, minimizes the impact of tachycardia and reduced ventricular filling times on cardiac function. 
Pregnant women with mitral stenosis are at increased risk for the development of atrial fibrillation and other tachyarrhythmias. Medical management of severe mitral stenosis and atrial fibrillation with digoxin and beta blockers is recommended. Balloon valvulotomy can be carried out during pregnancy. The immediate postpartum period is a time of particular concern secondary to rapid volume shifts. Cardiac and fluid status should be monitored carefully. Mitral Regurgitation and Aortic Regurgitation and Stenosis The pregnancy-induced decrease in systemic vascular resistance reduces the risk of cardiac failure with these conditions. As a rule, mitral valve prolapse does not present problems for the pregnant patient, and aortic stenosis, unless very severe, is well tolerated. In the most severe cases of aortic stenosis, limitation of activity or balloon valvuloplasty may be indicated. (See also Chap. 282) Reparative surgery has markedly increased the number of women with surgically repaired congenital heart disease who reach childbearing age. Maternal morbidity and mortality are greater among these women than among those without surgical repairs. When pregnant, these patients should be jointly managed by a cardiologist and an obstetrician familiar with these problems. The presence of a congenital cardiac lesion in the mother increases the risk of congenital cardiac disease in the newborn. Prenatal screening of the fetus for congenital cardiac disease with ultrasound is recommended. Atrial or ventricular septal defect is usually well tolerated during pregnancy in the absence of pulmonary hypertension, provided that the woman's prepregnancy cardiac status is favorable. Use of air filters on IV sets during labor and delivery in patients with intracardiac shunts is recommended. Supraventricular tachycardia (Chap. 276) is a common cardiac complication of pregnancy.
Treatment is the same as in the nonpregnant patient, and fetal tolerance of medications such as adenosine and calcium channel blockers is acceptable. When necessary, pharmacologic or electric cardioversion may be performed to improve cardiac performance and reduce symptoms. This intervention is generally well tolerated by mother and fetus. Peripartum cardiomyopathy (Chap. 287) is an uncommon disorder of pregnancy associated with myocarditis, and its etiology remains unknown. Treatment is directed toward symptomatic relief and improvement of cardiac function. Many patients recover completely; others are left with progressive dilated cardiomyopathy. Recurrence in a subsequent pregnancy has been reported, and women who do not have normal baseline left-ventricular function after an episode of peripartum cardiomyopathy should be counseled to avoid pregnancy.

SPECIFIC HIGH-RISK CARDIAC LESIONS
Marfan Syndrome (See also Chap. 427) This autosomal dominant disease is associated with a high risk of maternal morbidity. Approximately 15% of pregnant women with Marfan syndrome develop a major cardiovascular manifestation during pregnancy, with almost all women surviving. An aortic root diameter <40 mm is associated with a favorable outcome of pregnancy. Prophylactic therapy with beta blockers has been advocated, although large-scale clinical trials in pregnancy have not been performed. Ehlers-Danlos syndrome (EDS) may be associated with premature labor, and in type IV EDS there is increased risk of organ or vascular rupture that may cause death. Pulmonary Hypertension (See also Chap. 304) Maternal mortality in the setting of severe pulmonary hypertension is high, and primary pulmonary hypertension is a contraindication to pregnancy. Termination of pregnancy may be advisable in these circumstances to preserve the life of the mother. In the Eisenmenger syndrome, i.e., the combination of pulmonary hypertension with right-to-left shunting due to congenital abnormalities (Chap.
282), maternal and fetal deaths occur frequently. Systemic hypotension may occur after blood loss, prolonged Valsalva maneuver, or regional anesthesia; sudden death secondary to hypotension is a dreaded complication. Management of these patients is challenging, and invasive hemodynamic monitoring during labor and delivery is recommended in severe cases. In patients with pulmonary hypertension, vaginal delivery is less stressful hemodynamically than cesarean section, which should be reserved for accepted obstetric indications. (See also Chap. 300) A hypercoagulable state is characteristic of pregnancy, and deep venous thrombosis (DVT) occurs in about 1 in 500 pregnancies. In pregnant women, most unilateral DVTs occur in the left leg because the left iliac vein is compressed by the right iliac artery and the uterus compresses the inferior vena cava. Pregnancy is associated with an increase in procoagulants such as factors V and VII and a decrease in anticoagulant activity, including proteins C and S. Pulmonary embolism is one of the most common causes of maternal death in the United States. Activated protein C resistance caused by the factor V Leiden mutation increases the risk for DVT and pulmonary embolism during pregnancy. Approximately 25% of women with DVT during pregnancy carry the factor V Leiden allele. Additional genetic mutations associated with DVT during pregnancy include the prothrombin G20210A mutation (heterozygotes and homozygotes) and the methylenetetrahydrofolate reductase C677T mutation (homozygotes). Aggressive diagnosis and management of DVT and suspected pulmonary embolism optimize the outcome for mother and fetus. In general, all diagnostic and therapeutic modalities afforded the non-pregnant patient should be utilized in pregnancy except for D-dimer measurement, in which values are elevated in normal pregnancy. Anticoagulant therapy with low-molecular-weight heparin (LMWH) or unfractionated heparin is indicated in pregnant women with DVT. 
LMWH may be associated with an increased risk of epidural hematoma in women receiving an epidural anesthetic in labor. Four weeks prior to anticipated delivery, LMWH should be switched to unfractionated heparin. Warfarin therapy is contraindicated in the first trimester due to its association with fetal chondrodysplasia punctata. In the second and third trimesters, warfarin may cause fetal optic atrophy and mental retardation. When DVT occurs in the postpartum period, LMWH therapy for 7–10 days may be followed by warfarin therapy for 3–6 months. Warfarin is not contraindicated in breast-feeding women. For women at moderate or high risk of DVT who have a cesarean delivery, mechanical and/or pharmacologic prophylaxis is warranted. (See also Chaps. 417–419) In pregnancy, the fetoplacental unit induces major metabolic changes, the purpose of which is to shunt glucose and amino acids to the fetus while the mother uses ketones and triglycerides to fuel her metabolic needs. These metabolic changes are accompanied by maternal insulin resistance caused in part by placental production of steroids, a growth hormone variant, and placental lactogen. Although pregnancy has been referred to as a state of “accelerated starvation,” it is better characterized as “accelerated ketosis.” In pregnancy, after an overnight fast, plasma glucose is lower by 0.8–1.1 mmol/L (15–20 mg/dL) than in the nonpregnant state. This difference is due to the use of glucose by the fetus. In early pregnancy, fasting may result in circulating glucose concentrations in the range of 2.2 mmol/L (40 mg/dL) and may be associated with symptoms of hypoglycemia. In contrast to the decrease in maternal glucose concentration, plasma hydroxybutyrate and acetoacetate levels rise to two to four times normal after a fast. Pregnancy complicated by diabetes mellitus is associated with higher maternal and perinatal morbidity and mortality rates. 
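The paired mmol/L and mg/dL glucose values quoted throughout this section are related by a constant factor: glucose has a molar mass of roughly 180 g/mol, so mg/dL ≈ mmol/L × 18. A minimal conversion helper (illustrative only; function names are invented):

```python
GLUCOSE_MG_PER_MMOL = 18.0  # glucose molar mass ~180 g/mol; 1 mmol/L = 18 mg/dL

def mmol_to_mg_dl(mmol_per_l: float) -> float:
    """Convert a plasma glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * GLUCOSE_MG_PER_MMOL

def mg_dl_to_mmol(mg_per_dl: float) -> float:
    """Convert a plasma glucose concentration from mg/dL to mmol/L."""
    return mg_per_dl / GLUCOSE_MG_PER_MMOL

# The fasting decline of 15-20 mg/dL quoted above corresponds to roughly
# 0.8-1.1 mmol/L, matching the paired values given in the text:
print(mg_dl_to_mmol(15), mg_dl_to_mmol(20))
```

The same factor reproduces the other pairs in this section, e.g., 2.2 mmol/L ≈ 40 mg/dL for early-pregnancy fasting hypoglycemia.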
Preconception counseling and treatment are important for the diabetic patient contemplating pregnancy and can reduce the risk of congenital malformations and improve pregnancy outcome. Folate supplementation reduces the incidence of fetal neural tube defects, which occur with greater frequency in fetuses of diabetic mothers. In addition, optimizing glucose control during key periods of organogenesis reduces other congenital anomalies, including sacral agenesis, caudal dysplasia, renal agenesis, and ventricular septal defect. Once pregnancy is established, glucose control should be managed more aggressively than in the nonpregnant state. In addition to dietary changes, this enhanced management requires more frequent blood glucose monitoring and often involves additional injections of insulin or conversion to an insulin pump. Fasting blood glucose levels should be maintained at <5.8 mmol/L (<105 mg/dL), with avoidance of values >7.8 mmol/L (140 mg/dL). Commencing in the third trimester, regular surveillance of maternal glucose control as well as assessment of fetal growth (obstetric sonography) and fetoplacental oxygenation (fetal heart rate monitoring or biophysical profile) optimize pregnancy outcome. Pregnant diabetic patients without vascular disease are at greater risk for delivering a macrosomic fetus, and attention to fetal growth via clinical and ultrasound examination is important. Fetal macrosomia is associated with an increased risk of maternal and fetal birth trauma, including permanent newborn Erb’s palsy. Pregnant women with diabetes have an increased risk of developing preeclampsia, and those with vascular disease are at greater risk for developing intrauterine growth restriction, which is associated with an increased risk of fetal and neonatal death. Excellent pregnancy outcomes in patients with diabetic nephropathy and proliferative retinopathy have been reported with aggressive glucose control and intensive maternal and fetal surveillance. 
As pregnancy progresses, glycemic control may become more difficult to achieve due to an increase in insulin resistance. Because of delayed pulmonary maturation of the fetuses of diabetic mothers, early delivery should be avoided unless there is biochemical evidence of fetal lung maturity. In general, efforts to control glucose and avoid preterm delivery result in the best overall outcome for both mother and newborn. Preterm delivery is generally performed only for the usual obstetric indications (e.g., preeclampsia, fetal growth restriction, non-reassuring fetal testing) or for worsening maternal renal or active proliferative retinopathy. Gestational diabetes occurs in approximately 4% of pregnancies. All pregnant women should be screened for gestational diabetes unless they are in a low-risk group. Women at low risk for gestational diabetes are those <25 years of age; those with a body mass index <25 kg/m2, no maternal history of macrosomia or gestational diabetes, and no diabetes in a first-degree relative; and those who are not members of a high-risk ethnic group (African American, Hispanic, Native American). A typical two-step strategy for establishing the diagnosis of gestational diabetes involves administration of a 50-g oral glucose challenge with a single serum glucose measurement at 60 min. If the plasma glucose is <7.8 mmol/L (<130 mg/dL), the test is considered normal. Plasma glucose >7.8 mmol/L (>130 mg/dL) warrants administration of a 100-g oral glucose challenge with plasma glucose measurements obtained in the fasting state and at 1, 2, and 3 h. Normal plasma glucose concentrations at these time points are <5.8 mmol/L (<105 mg/dL), 10.5 mmol/L (190 mg/dL), 9.1 mmol/L (165 mg/dL), and 8.0 mmol/L (145 mg/dL), respectively. Some centers have adopted more sensitive criteria, using values of <5.3 mmol/L (<95 mg/dL), <10 mmol/L (<180 mg/dL), <8.6 mmol/L (<155 mg/dL), and <7.8 mmol/L (<140 mg/dL) as the upper norms for a 3-h glucose tolerance test. 
Two elevated glucose values indicate a positive test. Adverse pregnancy outcomes for mother and fetus appear to increase with glucose as a continuous variable; thus it is challenging to define the optimal threshold for establishing the diagnosis of gestational diabetes. Pregnant women with gestational diabetes are at increased risk of stillbirth, preeclampsia, and delivery of infants who are large for their gestational age, with resulting birth lacerations, shoulder dystocia, and birth trauma including brachial plexus injury. These fetuses are at risk of hypoglycemia, hyperbilirubinemia, and polycythemia. Tight control of blood sugar during pregnancy and labor can reduce these risks. Treatment of gestational diabetes with a two-step strategy—dietary intervention followed by insulin injections if diet alone does not adequately control blood sugar [fasting glucose <5.6 mmol/L (<100 mg/dL) and 2-h postprandial glucose <7.0 mmol/L (<126 mg/dL)]— is associated with a decreased risk of birth trauma for the fetus. Oral hypoglycemic agents such as glyburide and metformin have become more commonly utilized for managing gestational diabetes refractory to nutritional management, but many experts favor insulin therapy. For women with gestational diabetes, there is a 40% risk of being diagnosed with diabetes within the 10 years after the index pregnancy. In women with a history of gestational diabetes, exercise, weight loss, and treatment with metformin reduce the risk of developing diabetes. All women with a history of gestational diabetes should be counseled about prevention strategies and evaluated regularly for diabetes. (See also Chap. 416) Pregnant women who are obese have an increased risk of stillbirth, congenital fetal malformations, gestational diabetes, preeclampsia, urinary tract infections, post-date delivery, and cesarean delivery. Women contemplating pregnancy should attempt to attain a healthy weight prior to conception. 
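The two-step screening strategy above reduces to simple threshold logic. The sketch below (Python; function names are hypothetical, not from any clinical library) encodes the 50-g challenge cutoff and the four upper limits quoted in the text for the 100-g tolerance test, with two or more elevated values counting as a positive test:

```python
# Illustrative sketch of the two-step gestational diabetes screen described
# above (thresholds in mg/dL, as quoted in the text). Not a clinical tool;
# function names are hypothetical.

SCREEN_THRESHOLD = 130  # 50-g challenge: >130 mg/dL at 60 min warrants OGTT

# 100-g oral glucose tolerance test: upper limits of normal at
# fasting, 1 h, 2 h, and 3 h; two or more elevated values = positive.
OGTT_LIMITS = [105, 190, 165, 145]

def screen_positive(glucose_60min):
    """True if the 50-g challenge result warrants the full 100-g OGTT."""
    return glucose_60min > SCREEN_THRESHOLD

def ogtt_positive(values):
    """values = [fasting, 1-h, 2-h, 3-h] plasma glucose in mg/dL."""
    elevated = sum(v > limit for v, limit in zip(values, OGTT_LIMITS))
    return elevated >= 2

print(screen_positive(142))                 # True: proceed to 100-g OGTT
print(ogtt_positive([100, 195, 170, 140]))  # True: 1-h and 2-h values elevated
```

Note that some centers use the lower ("more sensitive") cutoffs mentioned above; only the limit list would change.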
For morbidly obese women who have not been able to lose weight with lifestyle changes, bariatric surgery may result in weight loss and improve pregnancy outcomes. Following bariatric surgery, women should delay conception for 1 year to avoid pregnancy during an interval of rapid metabolic changes. (See also Chap. 405) In pregnancy, the estrogen-induced increase in thyroxine-binding globulin increases circulating levels of total T3 and total T4. The normal ranges of circulating levels of free T4, free T3, and thyroid-stimulating hormone (TSH) remain unaltered by pregnancy. The thyroid gland normally enlarges during pregnancy. Many physiologic adaptations to pregnancy may mimic subtle signs of hyperthyroidism. Maternal hyperthyroidism occurs at a rate of ~2 per 1000 pregnancies and is generally well tolerated by pregnant women. Clinical signs and symptoms should alert the physician to the occurrence of this condition. Hyperthyroidism in pregnancy is most commonly caused by Graves’ disease, but autonomously functioning nodules and gestational trophoblastic disease should also be considered. Although pregnant women are able to tolerate mild hyperthyroidism without adverse sequelae, more severe hyperthyroidism can cause spontaneous abortion or premature labor, and thyroid storm is associated with a significant risk of maternal death. Testing for hypothyroidism using TSH measurements before or early in pregnancy may be warranted in symptomatic women and in women with a personal or family history of thyroid disease. With use of this case-finding approach, about 30% of pregnant women with mild hypothyroidism remain undiagnosed, leading some to recommend universal screening. Children born to women with an elevated serum TSH (and a normal total thyroxine) during pregnancy may have impaired performance on neuropsychologic tests. Methimazole crosses the placenta to a greater degree than propylthiouracil and has been associated with fetal aplasia cutis. 
However, propylthiouracil can be associated with liver failure. Some experts recommend propylthiouracil in the first trimester and methimazole thereafter. Radioiodine should not be used during pregnancy, either for scanning or for treatment, because of effects on the fetal thyroid. In emergent circumstances, additional treatment with beta blockers may be necessary. Hyperthyroidism is most difficult to control in the first trimester of pregnancy and easiest to control in the third trimester. The goal of therapy for hypothyroidism is to maintain the serum TSH in the normal range, and thyroxine is the drug of choice. During pregnancy, the dose of thyroxine required to keep the TSH in the normal range rises. In one study, the mean replacement dose of thyroxine required to maintain the TSH in the normal range was 0.1 mg daily before pregnancy and increased to 0.15 mg daily during pregnancy. Since the increased thyroxine requirement occurs as early as the fifth week of pregnancy, one approach is to increase the thyroxine dose by 30% (two additional pills weekly) as soon as pregnancy is diagnosed and then adjust the dose by serial measurements of TSH. Pregnancy has been described as a state of physiologic anemia. Part of the reduction in hemoglobin concentration is dilutional, but iron and folate deficiencies are major causes of correctable anemia during pregnancy. In populations at high risk for hemoglobinopathies (Chap. 127), hemoglobin electrophoresis should be performed as part of the prenatal screen. Hemoglobinopathies can be associated with increased maternal and fetal morbidity and mortality. Management is tailored to the specific hemoglobinopathy and is generally the same for both pregnant and nonpregnant women. Prenatal diagnosis of hemoglobinopathies in the fetus is readily available and should be discussed with prospective parents either prior to or early in pregnancy. Thrombocytopenia occurs commonly during pregnancy. 
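The "increase by 30% (two additional pills weekly)" rule is simple arithmetic: two extra pills on top of seven per week raise the weekly dose by 2/7, roughly 29%. A minimal illustration, using the 0.1-mg prepregnancy dose from the study cited above:

```python
# Arithmetic behind the "increase by 30% (two additional pills weekly)"
# rule for thyroxine in early pregnancy. Values are illustrative, taken
# from the example dose in the text.

daily_dose_mg = 0.1   # prepregnancy replacement dose
pills_per_week = 7
extra_pills = 2

weekly_dose_mg = daily_dose_mg * pills_per_week                    # 0.7 mg/week
adjusted_weekly_mg = daily_dose_mg * (pills_per_week + extra_pills)  # 0.9 mg/week
percent_increase = extra_pills / pills_per_week * 100              # ~28.6%

print(round(percent_increase, 1))  # close to the quoted 30%
```

The dose is then fine-tuned by serial TSH measurements, as the text describes.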
The majority of cases are benign gestational thrombocytopenias, but the differential diagnosis should include immune thrombocytopenia (Chap. 140), thrombotic thrombocytopenic purpura, and preeclampsia. Maternal thrombocytopenia may also be caused by DIC, which is a consumptive coagulopathy characterized by thrombocytopenia, prolonged prothrombin time (PT) and activated partial thromboplastin time (aPTT), elevated fibrin degradation products, and a low fibrinogen concentration. Several catastrophic obstetric events are associated with the development of DIC, including retention of a dead fetus, sepsis, abruptio placentae, and amniotic fluid embolism. Headache appearing during pregnancy is usually due to migraine (Chap. 21), a condition that may worsen, improve, or be unaffected by pregnancy. A new or worsening headache, particularly if associated with visual blurring, may signal eclampsia (above) or pseudotumor cerebri (benign intracranial hypertension); diplopia due to a sixth-nerve palsy suggests pseudotumor cerebri (Chap. 39). The risk of seizures in patients with epilepsy increases in the postpartum period but not consistently during pregnancy; management is discussed in Chap. 445. The risk of stroke is generally thought to increase during pregnancy because of a hypercoagulable state; however, studies suggest that the period of risk occurs primarily in the postpartum period and that both ischemic and hemorrhagic strokes may occur at this time. Guidelines for use of heparin therapy are summarized above (see “Deep Venous Thrombosis and Pulmonary Embolism”); warfarin is teratogenic and should be avoided. The onset of a new movement disorder during pregnancy suggests chorea gravidarum, a variant of Sydenham’s chorea associated with rheumatic fever and streptococcal infection (Chap. 381); the chorea may recur with subsequent pregnancies. Patients with preexisting multiple sclerosis (Chap. 
458) experience a gradual decrease in the risk of relapses as pregnancy progresses and, conversely, an increase in attack risk during the postpartum period. Disease-modifying agents, including interferon β, should not be administered to pregnant multiple sclerosis patients, but moderate or severe relapses can be safely treated with pulse glucocorticoid therapy. Finally, certain tumors, particularly pituitary adenoma and meningioma (Chap. 403), may manifest during pregnancy because of accelerated growth, possibly driven by hormonal factors. Peripheral nerve disorders associated with pregnancy include Bell’s palsy (idiopathic facial paralysis) (Chap. 459), which is approximately threefold more likely to occur during the third trimester and immediate postpartum period than in the general population. Therapy with glucocorticoids should follow the guidelines established for non-pregnant patients. Entrapment neuropathies are common in the later stages of pregnancy, presumably as a result of fluid retention. Carpal tunnel syndrome (median nerve) presents first as pain and paresthesia in the hand (often worse at night) and later with weakness in the thenar muscles. Treatment is generally conservative; wrist splints may be helpful, and glucocorticoid injections or surgical section of the carpal tunnel can usually be postponed. Meralgia paresthetica (lateral femoral cutaneous nerve entrapment) consists of pain and numbness in the lateral aspect of the thigh without weakness. Patients are usually reassured to learn that these symptoms are benign and can be expected to remit spontaneously after the pregnancy has been completed. Restless leg syndrome is the most common peripheral nerve and movement disorder in pregnancy. Disordered iron metabolism is the suspected etiology. Management is expectant in most cases. Up to 90% of pregnant women experience nausea and vomiting during the first trimester of pregnancy. 
Hyperemesis gravidarum is a severe form that prevents adequate fluid and nutritional intake and may require hospitalization to prevent dehydration and malnutrition. Crohn’s disease may be associated with exacerbations in the second and third trimesters. Ulcerative colitis is associated with disease exacerbations in the first trimester and during the early postpartum period. Medical management of these diseases during pregnancy is similar to management in the nonpregnant state (Chap. 351). Exacerbation of gallbladder disease is common during pregnancy. In part, this aggravation may be due to pregnancy-induced alteration in the metabolism of bile and fatty acids. Intrahepatic cholestasis of pregnancy is generally a third-trimester event. Profound pruritus may accompany this condition, and it may be associated with increased fetal mortality. Placental bile salt deposition may contribute to progressive uteroplacental insufficiency. Therefore, regular fetal surveillance should be undertaken once the diagnosis of intrahepatic cholestasis is made, and delivery should be planned once the fetus reaches about 37 weeks of gestation. Favorable results with ursodiol have been reported. Acute fatty liver is a rare complication of pregnancy. Frequently confused with the HELLP syndrome (see “Preeclampsia” above) and severe preeclampsia, the diagnosis of acute fatty liver of pregnancy may be facilitated by imaging studies and laboratory evaluation. Acute fatty liver of pregnancy is generally characterized by markedly increased serum levels of bilirubin and ammonia and by hypoglycemia. Management of acute fatty liver of pregnancy is supportive; recurrence in subsequent pregnancies has been reported. All pregnant women should be screened for hepatitis B. This information is important for pediatricians after delivery of the infant. All infants receive hepatitis B vaccine. 
Infants born to mothers who are carriers of hepatitis B surface antigen should also receive hepatitis B immune globulin as soon after birth as possible and preferably within the first 72 h. Screening for hepatitis C is recommended for individuals at high risk for exposure. Other than bacterial vaginosis, the most common bacterial infections during pregnancy involve the urinary tract (Chap. 162). Many pregnant women have asymptomatic bacteriuria, most likely due to stasis caused by progestational effects on ureteral and bladder smooth muscle and later in pregnancy due to compression effects of the enlarging uterus. In itself, this condition is not associated with an adverse outcome of pregnancy. However, if asymptomatic bacteriuria is left untreated, symptomatic pyelonephritis may occur. Indeed, ~75% of pregnancy-associated pyelonephritis cases are the result of untreated asymptomatic bacteriuria. All pregnant women should be screened with a urine culture for asymptomatic bacteriuria at the first prenatal visit. Subsequent screening with nitrite/leukocyte esterase strips is indicated for high-risk women, such as those with sickle cell trait or a history of urinary tract infections. All women with positive screens should be treated. Pregnant women who develop pyelonephritis need careful monitoring, including inpatient IV antibiotic administration due to the elevated risk of urosepsis and acute respiratory distress syndrome in pregnancy. Abdominal pain and fever during pregnancy create a clinical dilemma. The diagnosis of greatest concern is intrauterine amniotic infection. While amniotic infection most commonly follows rupture of the membranes, this is not always the case. In general, antibiotic therapy is not recommended as a temporizing measure in these circumstances. If intrauterine infection is suspected, induced delivery with concomitant antibiotic therapy is generally indicated. 
Intrauterine amniotic infection is most often caused by pathogens such as Escherichia coli and group B Streptococcus (GBS). In high-risk patients at term or in preterm patients, routine intrapartum prophylaxis of GBS disease is recommended. Penicillin G and ampicillin are the drugs of choice. In penicillin-allergic patients with a low risk of anaphylaxis, cefazolin is recommended. If the patient is at high risk of anaphylaxis, vancomycin is recommended. If the organism is known to be sensitive to clindamycin, this antibiotic may be used. For the reduction of neonatal morbidity due to GBS, universal screening of pregnant women for GBS between 35 and 37 weeks of gestation, with intrapartum antibiotic treatment of infected women, is recommended. Postpartum infection is a significant cause of maternal morbidity and mortality. Postpartum endomyometritis is more common after cesarean delivery than vaginal delivery and develops in 2% of women after elective repeat cesarean section and in up to 10% after emergency cesarean section following prolonged labor. To reduce the risk of endomyometritis, prophylactic antibiotics should be given to all patients undergoing cesarean section, and administration 30–60 min prior to skin incision is preferable to administration at the time of umbilical cord clamping. As most cases of postpartum endomyometritis are polymicrobial, broad-spectrum antibiotic coverage with a penicillin, an aminoglycoside, and metronidazole is recommended (Chap. 201). Most cases resolve within 72 h. Women who do not respond to antibiotic treatment for postpartum endomyometritis should be evaluated for septic pelvic thrombophlebitis. Imaging studies may be helpful in establishing the diagnosis, which is primarily a clinical diagnosis of exclusion. Patients with septic pelvic thrombophlebitis generally have tachycardia out of proportion to their fever and respond rapidly to IV administration of heparin. 
All pregnant patients are screened prenatally for gonorrhea and chlamydial infections, and the detection of either should result in prompt treatment. Ceftriaxone and azithromycin are the agents of choice (Chaps. 181 and 213). 

VIRAL INFECTIONS 

Influenza (See also Chap. 224) Pregnant women with influenza are at increased risk of serious complications and death. All women who are pregnant or plan to become pregnant in the near future should receive inactivated influenza vaccine. The prompt initiation of antiviral treatment is recommended for pregnant women in whom influenza is suspected. Treatment can be reconsidered once the results of high-sensitivity tests are available. Prompt initiation of treatment lowers the risk of admission to an intensive care unit and death. 

Cytomegalovirus Infection The most common cause of congenital viral infection in the United States is cytomegalovirus (CMV) (Chap. 219). As many as 50–90% of women of childbearing age have antibodies to CMV, but only rarely does CMV reactivation result in neonatal infection. More commonly, primary CMV infection during pregnancy creates a risk of congenital CMV. No currently accepted treatment of CMV infection during pregnancy has been demonstrated to protect the fetus effectively. Moreover, it is difficult to predict which fetus will sustain a life-threatening CMV infection. Severe CMV disease in the newborn is characterized most often by petechiae, hepatosplenomegaly, and jaundice. Chorioretinitis, microcephaly, intracranial calcifications, hepatitis, hemolytic anemia, and purpura may also develop. CNS involvement, resulting in the development of psychomotor, ocular, auditory, and dental abnormalities over time, has been described. 

Rubella (See also Chap. 230e) Rubella virus is a known teratogen; first-trimester rubella carries a high risk of fetal anomalies, though the risk significantly decreases later in pregnancy. 
Congenital rubella may be diagnosed by percutaneous umbilical-blood sampling with the detection of IgM antibodies in fetal blood. All pregnant women and all women of childbearing age should be tested for their immune status to rubella. All nonpregnant women who are not immune to rubella should be vaccinated. The incidence of congenital rubella in the United States is extremely low. Herpesvirus Infection (See also Chap. 216) The acquisition of genital herpes during pregnancy is associated with spontaneous abortion, prematurity, and congenital and neonatal herpes. A cohort study of pregnant women without evidence of previous herpesvirus infection demonstrated that ~2% acquired a new herpesvirus infection during the pregnancy. Approximately 60% of the newly infected women had no clinical symptoms. Infection occurred with equal frequency in all three trimesters. If herpesvirus seroconversion occurred early in pregnancy, the risk of transmission to the newborn was very low. In women who acquired genital herpes shortly before delivery, the risk of transmission was high. The risk of active genital herpes lesions at term can be reduced by prescribing acyclovir for the last 4 weeks of pregnancy to women who have had their first episode of genital herpes during the pregnancy. Herpesvirus infection in the newborn can be devastating. Disseminated neonatal herpes carries with it high mortality and morbidity rates from CNS involvement. It is recommended that pregnant women with active genital herpes lesions at the time of presentation in labor be delivered by cesarean section. Parvovirus Infection (See also Chap. 221) Parvovirus infection (caused by human parvovirus B19) may occur during pregnancy. It rarely causes sequelae, but susceptible women infected during pregnancy may be at risk for fetal hydrops secondary to erythroid aplasia and profound anemia. HIV Infection (See also Chap. 
226) The predominant cause of HIV infection in children is transmission of the virus from mother to newborn during the perinatal period. All pregnant women should be screened for HIV infection. Factors that increase the risk of mother-to-newborn transmission include high maternal viral load, low maternal CD4+ T cell count, prolonged labor, prolonged duration of membrane rupture, and the presence of other genital tract infections, such as syphilis or herpes. Prior to the widespread use of antiretroviral treatment, the perinatal transmission rate was in the range of 20%. In women with a good response to antiretroviral treatment, the transmission rate is about 1%. Measurement of maternal plasma HIV RNA copy number guides the decision for vaginal versus cesarean delivery. For women with <1000 copies of plasma HIV RNA/mL who are receiving combination antiretroviral therapy, the risk of transmission to the newborn is approximately 1% regardless of mode of delivery or duration of membrane rupture. These women may elect to attempt a vaginal birth following the spontaneous onset of labor. For women with a viral load of ≥1000 copies/mL prior to 38 weeks of gestation, a scheduled prelabor cesarean at 38 weeks is recommended to reduce the risk of HIV transmission to the newborn. To reduce the risk of mother-to-newborn transmission, women with >400 copies of HIV RNA/mL should be treated during the intrapartum interval with zidovudine. All newborns of HIV-infected mothers should be treated with zidovudine for 6 weeks after birth. Women who are HIV-positive may transmit the virus through their breast milk. In developed countries, HIV-infected mothers are advised not to breast-feed. (See also Chap. 148) For rubella-nonimmune individuals contemplating pregnancy, measles-mumps-rubella vaccine should be administered, ideally at least 3 months prior to conception but otherwise in the immediate postpartum period. 
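The viral-load thresholds above amount to a small decision table. The following sketch restates them in code; it is illustrative only, not clinical decision software, and the function name and dictionary keys are assumptions:

```python
# Restating the maternal plasma HIV RNA thresholds described above as a
# small decision table. Illustrative only; names are assumptions.

def delivery_plan(hiv_rna_copies_per_ml):
    return {
        # <1000 copies/mL on combination therapy: ~1% transmission risk
        # regardless of mode of delivery; vaginal birth may be attempted.
        "vaginal_delivery_reasonable": hiv_rna_copies_per_ml < 1000,
        # >=1000 copies/mL before 38 weeks: scheduled prelabor cesarean
        # at 38 weeks is recommended.
        "scheduled_cesarean_38wk": hiv_rna_copies_per_ml >= 1000,
        # >400 copies/mL: intrapartum zidovudine for the mother.
        "intrapartum_zidovudine": hiv_rna_copies_per_ml > 400,
    }

print(delivery_plan(500))  # vaginal delivery reasonable, zidovudine indicated
```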
In addition, pregnancy is not a contraindication for vaccination against influenza, tetanus, diphtheria, and pertussis (Tdap), and these vaccines are recommended for appropriate individuals. Maternal death is defined as death occurring during pregnancy or within 42 days of completion of pregnancy from a cause related to or aggravated by pregnancy, but not due to accident or incidental causes. From 1935 to 2007, the U.S. maternal death rate decreased from nearly 600/100,000 births to 12.7/100,000 births. There are significant health disparities in the maternal mortality rate, with the highest rates among non-Hispanic black women. In 2007, maternal mortality rates (per 100,000) by race were 10.5 among non-Hispanic white women, 8.9 among Hispanic women, and 28.4 among non-Hispanic black women. The most common causes of maternal death in the United States today are pulmonary embolism, obstetric hemorrhage, hypertension, sepsis, cardiovascular conditions (including peripartum cardiomyopathy), and ectopic pregnancy. As stated above, the maternal mortality rate in the United States is about 12.7/100,000 births. In some countries in sub-Saharan Africa and southern Asia, the maternal mortality rate is about 500/100,000 live births. The most common cause of maternal death in these countries is maternal hemorrhage. The high maternal death rates are due in part to inadequate contraceptive and family-planning services, an insufficient number of skilled birth attendants, and difficulty in accessing birthing centers and emergency obstetrical care units. Maternal death is a global public-health tragedy that could be mitigated with the application of modest resources. With improved diagnostic and therapeutic modalities as well as advances in the treatment of infertility, more patients with medical complications will be seeking and will require complex obstetric care. 
Improved outcomes of pregnancy in these women will be best attained by a team of internists, maternal-fetal medicine (high-risk obstetrics) specialists, and anesthesiologists assembled to counsel these patients about the risks of pregnancy and to plan their treatment prior to conception. The importance of preconception counseling cannot be overstated. It is the responsibility of all physicians caring for women in the reproductive age group to assess their patients’ reproductive plans as part of their overall health evaluation. 

Chapter 9 Medical Evaluation of the Surgical Patient 
Wei C. Lau, Kim A. Eagle 

Cardiovascular and pulmonary complications continue to account for major morbidity and mortality in patients undergoing noncardiac surgery. Emerging evidence-based practices dictate that the internist should perform an individualized evaluation of the surgical patient to provide an accurate preoperative risk assessment and stratification that will guide optimal perioperative risk-reduction strategies. This chapter reviews cardiovascular and pulmonary preoperative risk assessment, targeting intermediate- and high-risk patients with the goal of improving outcome. It also reviews perioperative management and prophylaxis of diabetes mellitus, endocarditis, and venous thromboembolism. 

Simple, standardized preoperative screening questionnaires, such as the one shown in Table 9-1, have been developed for the purpose of identifying patients at intermediate or high risk who may benefit from a more detailed clinical evaluation. 

TABLE 9-1 Standardized Preoperative Screening Questionnaire(a)
1. Age, weight, height
2. Are you: female and 55 years of age or older, or male and 45 years of age or older? If yes, are you 70 years of age or older?
3. Do you take anticoagulant medications (“blood thinners”)?
4. Do you have or have you had any of the following heart-related conditions? Heart disease; heart attack within the last 6 months; angina (chest pain)
5. Do you have or have you ever had any of the following? Rheumatoid arthritis; kidney disease; liver disease; diabetes
6. Do you get short of breath when you lie flat?
7. Are you currently on oxygen treatment?
8. Do you have a chronic cough that produces any discharge or fluid?
9. Do you have lung problems or diseases?
10. Have you or any blood relative ever had a problem, other than nausea, with any anesthesia? If yes, describe:
11. If female, is it possible that you are pregnant? Pregnancy test: Please list date of last menstrual period:
(a) University of Michigan Health System patient information report. Patients who answer yes to any of questions 2–9 should receive a more detailed clinical evaluation.
Source: Adapted from KK Tremper, P Benedict: Anesthesiology 92:1212, 2000; with permission.

Evaluation of such patients for surgery should always begin with a thorough history and physical examination and with a 12-lead resting electrocardiogram (ECG), in accordance with the American College of Cardiology/American Heart Association (ACC/AHA) guidelines. The history should focus on symptoms of occult cardiac or pulmonary disease. The urgency of the surgery should be determined, as true emergency procedures are associated with unavoidably higher morbidity and mortality risk. Preoperative laboratory testing should be carried out only for specific clinical conditions, as noted during clinical examination. Thus, healthy patients of any age who are undergoing elective surgical procedures without coexisting medical conditions should not require any testing unless the degree of surgical stress may result in unusual changes from the baseline state. A stepwise approach to cardiac risk assessment and stratification in patients undergoing noncardiac surgery is illustrated in Fig. 9-1. 
Assessment of exercise tolerance in the prediction of in-hospital perioperative risk is most helpful in patients who self-report worsening exercise-induced cardiopulmonary symptoms; those who may benefit from noninvasive or invasive cardiac testing regardless of a scheduled surgical procedure; and those with known coronary artery disease (CAD) or with multiple risk factors who are able to exercise. For predicting perioperative events, poor exercise tolerance has been defined as the inability to walk four blocks or climb two flights of stairs at a normal pace or to meet a metabolic equivalent (MET) level of 4 (e.g., carrying objects of 15–20 lb or playing golf or doubles tennis) because of the development of dyspnea, angina, or excessive fatigue (Table 9-2).
TABLE 9-2 Assessment of Cardiac Risk by Functional Status
Higher risk:
• Has difficulty with adult activities of daily living
• Cannot walk four blocks or up two flights of stairs or does not meet a MET level of 4
Lower risk:
• Active: easily does vigorous tasks
• Performs regular vigorous exercises
Source: From LA Fleisher et al: Circulation 116:1971, 2007.
FIGURE 9-1 Composite algorithm for cardiac risk assessment and stratification in patients undergoing noncardiac surgery. Stepwise clinical evaluation: [1] emergency surgery; [2] prior coronary revascularization; [3] prior coronary evaluation; [4] clinical assessment; [5] RCRI; [6] risk modification strategies. Preventive medical therapy = beta blocker and statin therapy. RCRI, revised cardiac risk index. (Adapted from LA Fleisher et al: Circulation 116:1971, 2007, with permission.) [Flowchart summary: patients with coronary revascularization within 5 years and no recurrent symptoms, or a recent reassuring coronary evaluation, may proceed to surgery; the remainder undergo clinical assessment (age >70, <4 METs, signs of CHF or AS, ischemic or infarct ECG changes), noninvasive cardiac testing if functional capacity is poor or angina is present, coronary revascularization per ACC/AHA guidelines for a positive stress test, and initiation and/or continuation of optimal preventive medical therapy.]
Previous studies have compared several cardiac risk indices. The American College of Surgeons' National Surgical Quality Improvement Program prospective database has identified five predictors of perioperative myocardial infarction (MI) and cardiac arrest: increasing age, American Society of Anesthesiologists class, type of surgery, dependent functional status, and abnormal serum creatinine level. However, given its accuracy and simplicity, the revised cardiac risk index (RCRI) (Table 9-3) is favored. The RCRI relies on the presence or absence of six identifiable predictive factors: high-risk surgery, ischemic heart disease, congestive heart failure, cerebrovascular disease, diabetes mellitus, and renal dysfunction. Each of these predictors is assigned one point. The risk of major cardiac events—defined as myocardial infarction, pulmonary edema, ventricular fibrillation or primary cardiac arrest, and complete heart block—can then be predicted. Based on the presence of none, one, two, or three or more of these clinical predictors, the rate of development of one of these four major cardiac events is estimated to be 0.4, 0.9, 7, and 11%, respectively (Fig. 9-2). An RCRI score of 0 signifies a 0.4–0.5% risk of cardiac events; RCRI 1, 0.9–1.3%; RCRI 2, 4–7%; and RCRI ≥3, 9–11%.
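The RCRI scoring just described (one point per predictive factor present, mapped to the quoted event-rate bands) can be sketched as follows; the function and constant names are illustrative, not from any clinical library, and this is not a clinical calculator.

```python
# Illustrative sketch of RCRI scoring as described in the text: one
# point per predictive factor present, mapped to the quoted ranges of
# major cardiac event risk. Names here are made up for illustration.

RCRI_FACTORS = {
    "high-risk surgery",
    "ischemic heart disease",
    "congestive heart failure",
    "cerebrovascular disease",
    "diabetes mellitus",
    "renal dysfunction",
}

def rcri_score(findings):
    """Count one point for each recognized predictive factor present."""
    return len(RCRI_FACTORS & set(findings))

def major_cardiac_event_risk(score):
    """Risk ranges quoted in the text for RCRI scores 0, 1, 2, and >=3."""
    bands = {0: "0.4-0.5%", 1: "0.9-1.3%", 2: "4-7%"}
    return bands.get(score, "9-11%")
```

A patient with ischemic heart disease and insulin-treated diabetes, for instance, would score 2 points, corresponding to the 4–7% band quoted above.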
The clinical utility of the RCRI is to identify patients with three or more predictors who are at very high risk (≥11%) for cardiac complications and who may benefit from further risk stratification with noninvasive cardiac testing or initiation of preoperative preventive medical management.
TABLE 9-3 Clinical Predictive Factors of the RCRI
Ischemic heart disease:
• History of myocardial infarction
• Current angina considered to be ischemic
• Requirement for sublingual nitroglycerin
• Positive exercise test
• Pathological Q waves on ECG
• History of PCI and/or CABG with current angina considered to be ischemic
Congestive heart failure:
• Left ventricular failure by physical examination
• History of paroxysmal nocturnal dyspnea
• History of pulmonary edema
• S3 gallop on cardiac auscultation
• Bilateral rales on pulmonary auscultation
• Pulmonary edema on chest x-ray
Cerebrovascular disease:
• History of transient ischemic attack
• History of cerebrovascular accident
Diabetes mellitus:
• Treatment with insulin
Abbreviations: CABG, coronary artery bypass grafting; ECG, electrocardiogram; PCI, percutaneous coronary intervention.
Source: Adapted from TH Lee et al: Circulation 100:1043, 1999.
There is little evidence to support widespread application of preoperative noninvasive cardiac testing for all patients undergoing major surgery. Rather, a discriminative approach based on clinical risk categorization appears to be both clinically useful and cost-effective. There is potential benefit in identifying asymptomatic but high-risk patients, such as those with left main or left main–equivalent CAD or those with three-vessel CAD and poor left ventricular function, who may benefit from coronary revascularization (Chap. 293). However, evidence does not support aggressive attempts to identify intermediate-risk patients with asymptomatic but advanced coronary artery disease, in whom coronary revascularization appears to offer little advantage over medical therapy.
FIGURE 9-2 (data) Rate of major cardiac events by RCRI score: RCRI 0, 0.50%; RCRI 1, 1.30%; RCRI 2, 6.00%; RCRI ≥3, 11%.
An RCRI score ≥3 in patients with severe myocardial ischemia on stress testing should lead to consideration of coronary revascularization prior to noncardiac surgery. Noninvasive cardiac testing is most appropriate if it is anticipated that, in the event of a strongly positive test, a patient will meet guidelines for coronary angiography and coronary revascularization. Pharmacologic stress tests are more useful than exercise testing in patients with functional limitations. Dobutamine echocardiography and dipyridamole (Persantine), adenosine, or dobutamine nuclear perfusion testing (Chap. 270e) have excellent negative predictive values (near 100%) but poor positive predictive values (<20%) in the identification of patients at risk for perioperative MI or death. Thus, a negative study is reassuring, but a positive study is a relatively weak predictor of a "hard" perioperative cardiac event.
RISK MODIFICATION: PREVENTIVE STRATEGIES TO REDUCE CARDIAC RISK
Perioperative Coronary Revascularization Currently, potential options for reducing perioperative cardiovascular risk include coronary artery revascularization and/or perioperative preventive medical therapies (Chap. 293). Prophylactic coronary revascularization with either coronary artery bypass grafting (CABG) or percutaneous coronary intervention (PCI) provides no short- or midterm survival benefit for patients without left main CAD or three-vessel CAD in the presence of poor left ventricular systolic function and is not recommended for patients with stable CAD before noncardiac surgery.
Although PCI is associated with lower procedural risk than is CABG in the perioperative setting, the placement of a coronary artery stent soon before noncardiac surgery may increase the risk of bleeding during surgery if dual antiplatelet therapy (aspirin and a thienopyridine) is administered; moreover, stent placement shortly before noncardiac surgery increases the perioperative risk of MI and cardiac death due to stent thrombosis if such therapy is withdrawn prematurely (Chap. 296e). It is recommended that, if possible, noncardiac surgery be delayed 30–45 days after placement of a bare metal coronary stent and for 365 days after a drug-eluting stent. For patients who must undergo noncardiac surgery early (>14 days) after PCI, balloon angioplasty without stent placement appears to be a reasonable alternative because dual antiplatelet therapy is not necessary in such patients. One recent clinical trial further suggests that after 6 months, bare metal and drug-eluting stents may not pose a threat.
Perioperative Preventive Medical Therapies The goal of perioperative preventive medical therapy with β-adrenergic antagonists, HMG-CoA reductase inhibitors (statins), antiplatelet agents, and α2 agonists is to reduce the perioperative adrenergic stimulation, ischemia, and inflammation that are triggered during the perioperative period.
β-Adrenergic Antagonists The use of perioperative beta blockade should be based on a thorough assessment of a patient's perioperative clinical and surgery-specific cardiac risk (RCRI ≥2). The POISE trial highlights the importance of a clear risk-and-benefit assessment, with careful initiation and titration to therapeutic efficacy of preoperative beta blockers in patients undergoing noncardiac surgery.
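The PCI-to-surgery intervals quoted earlier in this section (30–45 days after a bare metal stent, 365 days after a drug-eluting stent, and more than 14 days after balloon angioplasty without stenting) can be summarized in a small sketch; the names and the choice of the conservative 45-day bound are illustrative assumptions, not clinical guidance.

```python
# Sketch of the minimum PCI-to-elective-surgery intervals quoted in the
# text. Dictionary and function names are hypothetical; the 45-day value
# is the conservative end of the quoted 30-45 day window.

MIN_DELAY_DAYS = {
    "balloon angioplasty": 14,
    "bare metal stent": 45,
    "drug-eluting stent": 365,
}

def delay_satisfied(procedure, days_since_pci):
    """True if the elapsed time exceeds the quoted minimum interval."""
    return days_since_pci > MIN_DELAY_DAYS[procedure]
```

Under this sketch, surgery 3 weeks after balloon angioplasty clears the quoted threshold, whereas surgery 6 months after a drug-eluting stent does not.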
A recent meta-analysis that included the POISE study further supports the finding that excessive beta blocker dosing is harmful. The ACC/AHA guidelines recommend the following: (1) Beta blockers should be continued in patients with active cardiac conditions who are undergoing surgery and are already receiving beta blockers. (2) Beta blockers titrated to heart rate and blood pressure are probably recommended for patients undergoing vascular surgery who are at high cardiac risk as defined by CAD or cardiac ischemia on preoperative testing. (3) Beta blockers are reasonable for high-risk patients (RCRI ≥2) who undergo vascular surgery. (4) Beta blockers are reasonable for patients with known CAD or high risk (RCRI ≥2) who undergo intermediate-risk surgery. (5) Nondiscriminant administration of high-dose beta blockers without dose titration to effectiveness is contraindicated for patients who have never been treated with a beta blocker.
HMG-CoA Reductase Inhibitors (Statins) A number of prospective and retrospective studies support the perioperative prophylactic use of statins for reduction of cardiac complications in patients with established atherosclerosis. The ACC/AHA guidelines support the protective efficacy of perioperative statins against cardiac complications in intermediate-risk patients undergoing major noncardiac surgery. For patients undergoing noncardiac surgery who are currently taking statins, statin therapy should be continued to reduce perioperative cardiac risk. Statins are reasonable for patients undergoing vascular surgery with or without clinical risk factors (RCRI ≥1).
Angiotensin-Converting Enzyme (ACE) Inhibitors Evidence supports the discontinuation of ACE inhibitors and angiotensin receptor blockers for 24 h prior to noncardiac surgery because of adverse circulatory effects after induction of anesthesia.
Oral Antiplatelet Agents Evidence-based recommendations regarding perioperative use of aspirin and/or thienopyridines to reduce cardiac risk currently lack clarity.
A substantial increase in perioperative bleeding and in the need for transfusion has been observed in patients receiving dual antiplatelet therapy. The discontinuation of thienopyridine and aspirin for 5–7 days prior to major surgery to minimize the risk of perioperative bleeding and transfusion must be balanced against the potential increased risk of an acute coronary syndrome and of subacute stent thrombosis in patients with recent coronary stent implantation. If clinicians elect to withhold antiplatelet agents prior to surgery, these agents should be restarted as soon as possible postoperatively.
α2 Agonists Several prospective and retrospective meta-analyses of perioperative α2 agonists (clonidine and mivazerol) demonstrated a reduction of cardiac death rates among patients with known coronary artery disease who underwent noncardiac surgery. α2 agonists thus may be considered for perioperative control of hypertension in patients with known coronary artery disease or an RCRI score ≥2.
Calcium Channel Blockers Evidence is lacking to support the use of calcium channel blockers as a prophylactic strategy to decrease perioperative risk in major noncardiac surgery.
Anesthetics Mortality risk is low with safe delivery of modern anesthesia, especially among low-risk patients undergoing low-risk surgery (Table 9-4). Inhaled anesthetics have predictable circulatory and respiratory effects: all decrease arterial pressure in a dose-dependent manner by reducing sympathetic tone and causing systemic vasodilation, myocardial depression, and decreased cardiac output. Inhaled anesthetics also cause respiratory depression, with diminished responses to both hypercapnia and hypoxemia, in a dose-dependent manner; in addition, these agents have a variable effect on heart rate.
Prolonged residual neuromuscular blockade also increases the risk of postoperative pulmonary complications due to reduction in functional residual lung capacity, loss of diaphragmatic and intercostal muscle function, atelectasis, and arterial hypoxemia from ventilation-perfusion mismatch.
TABLE 9-4 Surgery-Specific Risk (partial)
Higher risk:
• Emergent major operations, especially in the elderly
• Prolonged surgery associated with large fluid shift and/or blood loss
• Prostate surgery
Lower risk:
• Eye, skin, and superficial surgery
Source: From LA Fleisher et al: Circulation 116:1971, 2007, with permission.
Several meta-analyses have shown that rates of pneumonia and respiratory failure are lower among patients receiving neuraxial anesthesia (epidural or spinal) than among those receiving general (inhaled) anesthesia. However, there were no significant differences in cardiac events between the two approaches. Evidence from a meta-analysis of randomized controlled trials supports postoperative epidural analgesia for >24 h for the purpose of pain relief. However, the risk of epidural hematoma posed by postoperative epidural catheterization in the setting of systemic anticoagulation for venous thromboembolism prophylaxis (see below) must be considered.
Perioperative pulmonary complications occur frequently and lead to significant morbidity and mortality. The guidelines from the American College of Physicians recommend the following:
1. All patients undergoing noncardiac surgery should be assessed for risk of pulmonary complications (Table 9-5).
2. Patients undergoing emergency or prolonged (3- to 4-h) surgery; aortic aneurysm repair; vascular surgery; major abdominal, thoracic, neurologic, head, or neck surgery; or general anesthesia should be considered to be at elevated risk for postoperative pulmonary complications.
3. Patients at higher risk of pulmonary complications should undergo incentive spirometry, deep-breathing exercises, cough encouragement, postural drainage, percussion and vibration, suctioning and ambulation, intermittent positive-pressure breathing, continuous positive airway pressure, and selective use of a nasogastric tube for postoperative nausea, vomiting, or symptomatic abdominal distention to reduce postoperative risk (Table 9-6).
4. Routine preoperative spirometry and chest radiography should not be used for predicting risk of postoperative pulmonary complications but may be appropriate for patients with chronic obstructive pulmonary disease or asthma.
TABLE 9-5 Risk Factors for Postoperative Pulmonary Complications (partial)
1. Upper respiratory tract infection: cough, dyspnea
5. American Society of Anesthesiologists class ≥2
8. Serum albumin <3.5 g/dL
10. Impaired sensorium (confusion, delirium, or mental status changes)
14. Spirometry thresholds before lung resection:
a. FEV1 <2 L
b. MVV <50% of predicted
c. PEF <100 L/min or 50% of predicted value
d. PCO2 ≥45 mmHg
e. PO2 ≤50 mmHg
Abbreviations: FEV1, forced expiratory volume in 1 s; MVV, maximal voluntary ventilation; PEF, peak expiratory flow rate; PCO2, partial pressure of carbon dioxide; PO2, partial pressure of oxygen.
Source: A Qaseem et al: Ann Intern Med 144:575, 2006. Modified from GW Smetana et al: Ann Intern Med 144:581, 2006, and from DN Mohr et al: Postgrad Med 100:247, 1996.
TABLE 9-6 Strategies to Reduce Risk of Postoperative Pulmonary Complications (partial)
• Cessation of smoking for at least 8 weeks before and until at least 10 days after surgery
• Bronchodilator and/or steroid therapy, when indicated
• Treatment of infection and secretions, when indicated
• Weight reduction, when appropriate
• Minimization of the duration of anesthesia
• Avoidance of long-acting neuromuscular blocking drugs, when indicated
• Prevention of aspiration and maintenance of optimal bronchodilation
• Optimization of inspiratory capacity maneuvers, with attention to: mobilization of secretions, encouragement of coughing, selective use of a nasogastric tube, and adequate pain control without excessive narcotics
Source: From VA Lawrence et al: Ann Intern Med 144:596, 2006, and WF Dunn, PD Scanlon: Mayo Clin Proc 68:371, 1993.
5.
Spirometry is of value before lung resection in determining candidacy for coronary artery bypass; however, it does not provide a spirometric threshold for extrathoracic surgery below which the risks of surgery are unacceptable. 6. Pulmonary artery catheterization, administration of total parenteral nutrition (as opposed to no supplementation), or total enteral nutrition has no benefit in reducing postoperative pulmonary complications.
(See also Chaps. 417–419) Many patients with diabetes mellitus have significant symptomatic or asymptomatic CAD and may have silent myocardial ischemia due to autonomic dysfunction. Evidence supports intensive perioperative glycemic control to achieve near-normal glucose levels (90–110 mg/dL), rather than moderate glycemic control (120–200 mg/dL), using insulin infusion. This practice must be balanced against the risk of hypoglycemic complications. Oral hypoglycemic agents should not be given on the morning of surgery. Perioperative hyperglycemia should be treated with IV infusion of short-acting insulin or SC sliding-scale insulin. Patients whose diabetes is diet controlled may proceed to surgery with close postoperative monitoring.
(See also Chap. 155) Perioperative prophylactic antibiotics should be administered to patients with congenital or valvular heart disease, prosthetic valves, mitral valve prolapse, or other cardiac abnormalities, in accordance with ACC/AHA practice guidelines.
(See also Chap. 300) Perioperative prophylaxis of venous thromboembolism should follow established guidelines of the American College of Chest Physicians. Aspirin is not supported as a single agent for thromboprophylaxis. Low-dose unfractionated heparin (≤5000 units SC bid), low-molecular-weight heparin (e.g., enoxaparin, 30 mg bid or 40 mg qd), or a pentasaccharide (fondaparinux, 2.5 mg qd) is appropriate for patients at moderate risk; unfractionated heparin (5000 units SC tid) is appropriate for patients at high risk.
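For reference, the thromboprophylaxis options just listed can be laid out as a simple mapping from risk level to regimen; the drug names and doses are quoted from the text, while the dictionary name and layout are illustrative only and not a prescribing reference.

```python
# Sketch mapping the thromboprophylaxis regimens quoted in the text to
# risk level. Doses are quoted from the text; the structure is
# illustrative, not prescribing advice.

VTE_PROPHYLAXIS_OPTIONS = {
    "moderate risk": [
        "low-dose unfractionated heparin, <=5000 units SC bid",
        "enoxaparin, 30 mg bid or 40 mg qd",
        "fondaparinux, 2.5 mg qd",
    ],
    "high risk": [
        "unfractionated heparin, 5000 units SC tid",
    ],
}
```

The mapping makes the dosing distinction explicit: the same unfractionated heparin dose moves from twice daily (bid) to three times daily (tid) as risk rises.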
Graduated compression stockings and pneumatic compression devices are useful supplements to anticoagulant therapy.
In 2010, according to the Centers for Disease Control and Prevention, 2,468,435 individuals died in the United States (Table 10-1). Approximately 73% of all deaths occur in those >65 years of age. The epidemiology of mortality is similar in most developed countries; cardiovascular diseases and cancer are the predominant causes of death, a marked change since 1900, when heart disease caused ~8% of all deaths and cancer accounted for <4% of all deaths. In 2010, the year with the most recent available data, AIDS did not rank among the top 15 causes of death, causing just 8369 deaths. Even among people age 35–44, heart disease, cancer, chronic liver disease, and accidents all cause more deaths than AIDS. It is estimated that in developed countries ~70% of all deaths are preceded by a disease or condition, making it reasonable to plan for dying in the foreseeable future. Cancer has served as the paradigm for terminal care, but it is not the only type of illness with a recognizable and predictable terminal phase. Because heart failure, chronic obstructive pulmonary disease (COPD), chronic liver failure, dementia, and many other conditions have recognizable terminal phases, a systematic approach to end-of-life care should be part of all medical specialties. Many patients with illness-related suffering also can benefit from palliative care regardless of prognosis. Ideally, palliative care should be considered part of comprehensive care for all patients. Palliative care can be improved by coordination between caregivers, doctors, and patients for advance care planning, as well as dedicated teams of physicians, nurses, and other providers. The rapid increases in life expectancy in developed countries over the last century have been accompanied by new difficulties facing individuals, families, and society as a whole in addressing the needs of an aging population.
These challenges include both more complicated conditions and technologies to address them at the end of life. The development of technologies that can prolong life without restoring full health has led many Americans to seek out alternative end-of-life care settings and approaches that relieve suffering for those with terminal diseases. Over the last few decades in the United States, a significant change in the site of death has occurred that coincides with patient and family preferences. Nearly 60% of Americans died as inpatients in hospitals in 1980. By 2000, the trend was reversing, with ~31% of Americans dying as hospital inpatients (Fig. 10-1). This shift has been most dramatic for those dying from cancer and COPD and for younger and very old individuals. In the last decade, it has been associated with the increased use of hospice care; in 2008, approximately 39% of all decedents in the United States received such care. Cancer patients currently constitute ~36.9% of hospice users. About 79% of patients receiving hospice care die out of the hospital, and around 42% of those receiving hospice care die in a private residence. In addition, in 2008, for the first time, the American Board of Medical Specialties (ABMS) offered certification in hospice and palliative medicine. 
With shortening of hospital stays, many serious conditions are being treated at home or on an outpatient basis. Consequently, providing optimal palliative and end-of-life care requires ensuring that appropriate services are available in a variety of settings, including noninstitutional settings.
TABLE 10-1 Leading Causes of Death in the United States (2010) and in England and Wales (2012)
Cause of Death | U.S. Deaths | % of Total | U.S. Deaths ≥65 Years | England & Wales Deaths | % of Total
All deaths | 2,468,435 | 100 | 1,798,276 | 499,331 | 100
Heart disease | 597,689 | 24.2 | 477,338 | 141,362 | 28.3
Malignant neoplasms | 574,743 | 23.3 | 396,670 | 142,107 | 28.5
Chronic lower respiratory diseases | 138,080 | 5.6 | 118,031 | 27,132 | 5.4
Cerebrovascular diseases | 129,476 | 5.2 | 109,990 | 35,846 | 7.2
Accidents | 120,859 | 4.9 | 41,300 | 11,256 | 2.3
Alzheimer's disease | 83,494 | 3.4 | 82,616 | 8,859 | 1.8
Diabetes mellitus | 69,071 | 2.8 | 49,191 | 4,931 | 1.0
Nephritis, nephrotic syndrome, and nephrosis | 50,476 | 2.0 | 41,994 | 4,102 | 0.8
Influenza and pneumonia | 50,097 | 2.0 | 42,846 | 26,138 | 5.2
Suicide | 38,364 | 1.6 | 6,008 | 3,671 | 0.7
Source: National Center for Health Statistics (data for all age groups from 2010), http://www.cdc.gov/nchs; National Statistics (England and Wales, 2012), http://www.statistics.gov.uk.
Central to this type of care is an interdisciplinary team approach that typically encompasses pain and symptom management, spiritual and psychological care for the patient, and support for family caregivers during the patient's illness and the bereavement period. Terminally ill patients have a wide variety of advanced diseases, often with multiple symptoms that demand relief, and require noninvasive therapeutic regimens to be delivered in flexible care settings. Fundamental to ensuring quality palliative and end-of-life care is a focus on four broad domains: (1) physical symptoms; (2) psychological symptoms; (3) social needs, including interpersonal relationships, caregiving, and economic concerns; and (4) existential or spiritual needs. A comprehensive assessment screens for and evaluates needs in each of these four domains.
Goals for care are established in discussions with the patient and/or family, based on the assessment in each of the domains. Interventions then are aimed at improving or managing symptoms and needs. Although physicians are responsible for certain interventions, especially technical ones, and for coordinating the interventions, they cannot be responsible for providing all of them. Because failing to address any one of the domains is likely to preclude a good death, a well-coordinated, effectively communicating interdisciplinary team takes on special importance in end-of-life care. Depending on the setting, critical members of the interdisciplinary team will include physicians, nurses, social workers, chaplains, nurse's aides, physical therapists, bereavement counselors, and volunteers.
FIGURE 10-1 Trends in the site of death over the last two decades: percentage of hospital inpatient deaths and percentage of decedents enrolled in a hospice.
ASSESSMENT AND CARE PLANNING
Comprehensive Assessment Standardized methods for conducting a comprehensive assessment focus on evaluating the patient's condition in all four domains affected by illness: physical, psychological, social, and spiritual. The assessment of physical and mental symptoms should follow a modified version of the traditional medical history and physical examination that emphasizes symptoms. Questions should aim at elucidating symptoms, discerning sources of suffering, and gauging how much those symptoms interfere with the patient's quality of life. Standardized assessment is critical. Currently, there are 21 symptom assessment instruments for cancer alone. Further research on and validation of these assessment tools, especially taking into account patient perspectives, could improve their effectiveness.
Instruments with good psychometric properties that assess a wide range of symptoms include the Memorial Symptom Assessment Scale (MSAS), the Rotterdam Symptom Checklist, the Worthing Chemotherapy Questionnaire, and the Computerized Symptom Assessment Instrument. These instruments are long and may be most useful for initial clinical or research assessments. Shorter instruments are useful for patients whose performance status does not permit comprehensive assessments. Suitable shorter instruments include the Condensed Memorial Symptom Assessment Scale, the Edmonton Symptom Assessment System, the M.D. Anderson Symptom Assessment Inventory, and the Symptom Distress Scale. Using such instruments ensures that the assessment is comprehensive and does not focus only on pain and a few other physical symptoms. Invasive tests are best avoided in end-of-life care, and even minimally invasive tests should be evaluated carefully for their benefit-to-burden ratio for the patient. Aspects of the physical examination that are uncomfortable and unlikely to yield useful information can be omitted.
Regarding social needs, health care providers should assess the status of important relationships, financial burdens, caregiving needs, and access to medical care. Relevant questions will include the following: How often is there someone to feel close to? How has this illness been for your family? How has it affected your relationships? How much help do you need with things like getting meals and getting around? How much trouble do you have getting the medical care you need? In the area of existential needs, providers should assess distress and the patient's sense of being emotionally and existentially settled and of finding purpose or meaning. Helpful assessment questions can include the following: How much are you able to find meaning since your illness began? What things are most important to you at this stage?
In addition, it can be helpful to ask how the patient perceives his or her care: How much do you feel your doctors and nurses respect you? How clear is the information from us about what to expect regarding your illness? How much do you feel that the medical care you are getting fits with your goals? If concern is detected in any of these areas, deeper evaluative questions are warranted. Communication Especially when an illness is life-threatening, there are many emotionally charged and potentially conflict-creating moments, collectively called “bad news” situations, in which empathic and effective communication skills are essential. Those moments include communicating with the patient and/or family about a terminal diagnosis, the patient’s prognosis, any treatment failures, deemphasizing efforts to cure and prolong life while focusing more on symptom management and palliation, advance care planning, and the patient’s death. Although these conversations can be difficult and lead to tension, research indicates that end-of-life discussions can lead to earlier hospice referrals rather than overly aggressive treatment, benefiting quality of life for patients and improving the bereavement process for families. Just as surgeons plan and prepare for major operations and investigators rehearse a presentation of research results, physicians and health care providers caring for patients with significant or advanced illness can develop a practiced approach to sharing important information and planning interventions. In addition, families identify as important both how well the physician was prepared to deliver bad news and the setting in which it was delivered. For instance, 27% of families making critical decisions for patients in an intensive care unit (ICU) desired better and more private physical space to communicate with physicians, and 48% found having clergy present reassuring. 
An organized and effective seven-step procedure for communicating bad news goes by the acronym P-SPIKES: (1) prepare for the discussion, (2) set up a suitable environment, (3) begin the discussion by finding out what the patient and/or family understand, (4) determine how they will comprehend new information best and how much they want to know, (5) provide needed new knowledge accordingly, (6) allow for emotional responses, and (7) share plans for the next steps in care. Table 10-2 provides a summary of these steps along with suggested phrases and underlying rationales for each one.
TABLE 10-2 Elements of Communicating Bad News: The P-SPIKES Approach
P (Preparation)
Aim: Mentally prepare for the interaction with the patient and/or family.
Preparation: Review what information needs to be communicated. Plan how you will provide emotional support. Rehearse key steps and phrases in the interaction.
S (Setting of the interaction)
Aim: Ensure the appropriate setting for a serious and potentially emotionally charged discussion.
Preparation: Ensure that patient, family, and appropriate social supports are present. Devote sufficient time. Ensure privacy and prevent interruptions by people or beeper. Bring a box of tissues.
P (Perception of the patient and family)
Aim: Begin the discussion by establishing the baseline and whether the patient and family can grasp the information. Ease tension by having the patient and family contribute.
Preparation: Start with open-ended questions to encourage participation. Possible phrases to use: What do you understand about your illness? When you first had symptom X, what did you think it might be? What did Dr. X tell you when he or she sent you here? What do you think is going to happen?
I (Invitation to provide information)
Aim: Discover what information needs the patient and/or family have and what limits they want regarding the bad information.
Preparation: Possible phrases to use: If this condition turns out to be something serious, do you want to know? Would you like me to tell you all the details of your condition? If not, who would you like me to talk to?
K (Knowledge of the condition)
Aim: Provide the bad news or other information to the patient and/or family sensitively.
Preparation: Do not just dump the information on the patient and family. Check for patient and family understanding. Possible phrases to use: I feel badly to have to tell you this, but . . . Unfortunately, the tests showed . . . I'm afraid the news is not good . . .
E (Empathy and exploration)
Aim: Identify the cause of the emotions (e.g., poor prognosis). Empathize with the patient and/or family's feelings. Explore by asking open-ended questions.
Preparation: Strong feelings in reaction to bad news are normal. Acknowledge what the patient and family are feeling. Remind them such feelings are normal, even if frightening. Give them time to respond. Remind patient and family you won't abandon them. Possible phrases to use: I imagine this is very hard for you to hear. You look very upset. Tell me how you are feeling. I wish the news were different. We'll do whatever we can to help you.
S (Summary and planning)
Aim: Delineate for the patient and the family the next steps, including additional tests or interventions.
Preparation: It is the unknown and uncertain that can increase anxiety. Recommend a schedule with goals and landmarks. Provide your rationale for the patient and/or family to accept (or reject). If the patient and/or family are not ready to discuss the next steps, schedule a follow-up visit.
Source: Adapted from R Buckman: How to Break Bad News: A Guide for Health Care Professionals. Baltimore, Johns Hopkins University Press, 1992.
Additional research that further considers the response of patients to systematic methods of delivering bad news could build the evidence base for even more effective communication procedures.
Continuous Goal Assessment Major barriers to ensuring quality palliative and end-of-life care include difficulty providing an accurate prognosis and emotional resistance of patients and their families to accepting the implications of a poor prognosis. There are two practical solutions to these barriers.
One is to integrate palliative care with curative care regardless of prognosis. With this approach, palliative care no longer conveys the message of failure, having no more treatments, or "giving up hope." Fundamental to integrating palliative care with curative therapy is to include continuous goal assessment as part of the routine patient reassessment that occurs at most patient-physician encounters. Alternatively, some practices may find it useful to implement a standard point in the clinical course to address goals of care and advance care planning. For example, some oncology practices ask all patients whose Eastern Cooperative Oncology Group (ECOG) performance status is 3 or worse—meaning they spend 50% or more of the day in bed—or those who develop metastatic disease about their goals of care and advance care preferences. Goals for care are numerous, ranging from cure of a specific disease, to prolonging life, to relief of a symptom, to delaying the course of an incurable disease, to adapting to progressive disability without disrupting the family, to finding peace of mind or personal meaning, to dying in a manner that leaves loved ones with positive memories. Discernment of goals for care can be approached through a seven-step protocol: (1) ensure that medical and other information is as complete as reasonably possible and is understood by all relevant parties (see above); (2) explore what the patient and/or family are hoping for while identifying relevant and realistic goals; (3) share all the options with the patient and family; (4) respond with empathy as they adjust to changing expectations; (5) make a plan, emphasizing what can be done toward achieving the realistic goals; (6) follow through with the plan; and (7) review and revise the plan periodically, considering at every encounter whether the goals of care should be reviewed with the patient and/or family.
Each of these steps need not be followed in rote order, but together they provide a helpful framework for interactions with patients and their families about goals for care. It can be especially challenging when a patient or family member has difficulty letting go of an unrealistic goal. One strategy is to help them refocus on more realistic goals while suggesting that, although hoping for the best, it is still prudent to plan for other outcomes as well.

Advance Care Planning • Practices Advance care planning is the process of planning for future medical care in case the patient becomes incapable of making medical decisions. A 2010 study of adults 60 or older who died between 2000 and 2006 found that 42% required decision making about treatment in the final days of life but that 70% lacked decision-making capacity. Among those lacking decision-making capacity, around one-third did not have advance planning directives. Ideally, such planning would occur before a health care crisis or the terminal phase of an illness, but diverse barriers prevent this. Polls suggest that 80% of Americans endorse advance care planning and completing living wills; however, data suggest that only between 33 and 42% have actually completed one. Other countries have even lower completion rates. Most patients expect physicians to initiate advance care planning and will wait for physicians to broach the subject. Patients also wish to discuss advance care planning with their families. Yet patients with unrealistic expectations are significantly more likely to prefer aggressive treatments. Fewer than one-third of health care providers have completed advance care planning for themselves. Hence, a good first step is for health care providers to complete their own advance care planning. Doing so makes providers aware of the critical choices in the process and of the issues that are especially charged, and it allows them to tell their patients truthfully that they personally have done advance planning.
Lessons from behavioral economics suggest that this kind of social norming helps people view completing an advance directive as acceptable and even expected. Steps in effective advance care planning center on (1) introducing the topic, (2) structuring a discussion, (3) reviewing plans that have been discussed by the patient and family, (4) documenting the plans, (5) updating them periodically, and (6) implementing the advance care directives (Table 10-3). Two of the main barriers to advance care planning are problems in raising the topic and difficulty in structuring a succinct discussion. Raising the topic can be done efficiently as a routine matter, noting that it is recommended for all patients, analogous to purchasing insurance or estate planning. Many of the most difficult cases have involved unexpected, acute episodes of brain damage in young individuals. Structuring a focused discussion is a central communication skill. Identify the health care proxy and recommend his or her involvement in the process of advance care planning. Select a worksheet, preferably one that has been evaluated and demonstrated to produce reliable and valid expressions of patient preferences, and orient the patient and proxy to it. Such worksheets exist for both general and disease-specific situations. Discuss with the patient and proxy one scenario as an example to demonstrate how to think about the issues. It is often helpful to begin with a scenario in which the patient is likely to have settled preferences for care, such as being in a persistent vegetative state. Once the patient's preferences for interventions in this scenario are determined, suggest that the patient and proxy discuss and complete the worksheet for the other scenarios. If appropriate, suggest that they involve other family members in the discussion. On a return visit, go over the patient's preferences, checking and resolving any inconsistencies.
After having the patient and proxy sign the document, place it in the medical chart and be sure that copies are provided to relevant family members and care sites. Because patients' preferences can change, these documents must be reviewed periodically.

Types of Documents Advance care planning documents are of three broad types. The first includes living wills or instructional directives; these are advisory documents that describe the types of decisions that should direct care. Some are more specific, delineating different scenarios and interventions for the patient to choose from. Among these, some are for general use and others are designed for use by patients with a specific type of disease, such as cancer or HIV. A second type is a less specific directive that provides general statements of not wanting life-sustaining interventions or forms that describe the values that should guide specific discussions about terminal care. These can be problematic because, when critical decisions about specific treatments are needed, they require assessments by people other than the patient of whether a treatment fulfills a particular wish. The third type of advance directive allows the designation of a health care proxy (sometimes also referred to as a durable power of attorney for health care), an individual selected by the patient to make decisions. The choice is not either/or; a combined directive that includes a living will and designates a proxy is often used, and the directive should indicate clearly whether the specified patient preferences or the proxy's choice takes precedence if they conflict. The Five Wishes and the Medical Directive are such combined forms. Some states have begun to put into practice a "Physician Orders for Life-Sustaining Treatment (POLST)" paradigm, which builds on communication between providers and patients to include guidance for end-of-life care in a color-coordinated form that follows the patient across treatment settings.
Table 10-3 Steps in Advance Care Planning

Introducing advance care planning
Goals and measures: Ask the patient what he or she knows about advance care planning and whether he or she has already completed an advance care directive. Indicate that you as a physician have completed advance care planning. Indicate that you try to perform advance care planning with all patients regardless of prognosis. Explain the goals of the process as empowering the patient and ensuring that you and the proxy understand the patient's preferences. Provide the patient relevant literature, including the advance care directive that you prefer to use. Recommend that the patient identify a proxy decision-maker who should attend the next meeting.
Useful phrases or points: "I'd like to talk with you about something I try to discuss with all my patients. It's called advance care planning. In fact, I feel that this is such an important topic that I have done this myself. Are you familiar with advance care planning or living wills?" "Have you thought about the type of care you would want if you ever became too sick to speak for yourself? That is the purpose of advance care planning." "There is no change in health that we have not discussed. I am bringing this up now because it is sensible for everyone, no matter how well or ill, old or young." Have many copies of advance care directives available, including in the waiting room, for patients and families. Know resources for state-specific forms (available at www.nhpco.org).

Structured discussion of scenarios and patient preferences
Goals and measures: Affirm that the goal of the process is to follow the patient's wishes if the patient loses decision-making capacity. Elicit the patient's overall goals related to health care. Elicit the patient's preferences for specific interventions in a few salient and common scenarios. Help the patient define the threshold for withdrawing and withholding interventions. Define the patient's preference for the role of the proxy.
Useful phrases or points: Use a structured worksheet with typical scenarios. Begin the discussion with persistent vegetative state and consider other scenarios, such as recovery from an acute event with serious disability, asking the patient about his or her preferences regarding specific interventions, such as ventilators, artificial nutrition, and CPR, and then proceeding to less invasive interventions, such as blood transfusions and antibiotics.

Review the patient's preferences
After the patient has made choices of interventions, review them to ensure they are consistent and the proxy is aware of them.

Document the patient's preferences
Formally complete the advance care directive and have a witness sign it. Provide a copy for the patient and the proxy. Insert a copy into the patient's medical record and summarize it in a progress note.

Update the directive
Periodically, and with major changes in health status, review the directive with the patient and make any modifications.

Apply the directive
The directive goes into effect only when the patient becomes unable to make medical decisions for himself or herself. Reread the directive to be sure about its content. Discuss your proposed actions based on the directive with the proxy.

Abbreviation: CPR, cardiopulmonary resuscitation.

The procedures for completing advance care planning documents vary according to state law. A potentially misleading distinction relates to statutory as opposed to advisory documents. Statutory documents are drafted to fulfill relevant state laws; advisory documents are drafted to reflect the patient's wishes. Both are legal, the first under state law and the latter under common or constitutional law.

Legal Aspects The U.S. Supreme Court has ruled that patients have a constitutional right to decide about refusing and terminating medical interventions, including life-sustaining interventions, and that mentally incompetent patients can exercise this right by providing "clear and convincing evidence" of their preferences. Because advance care directives permit patients to provide such evidence, commentators agree that they are constitutionally protected. Most commentators believe that a state is required to honor any clear advance care directive, whether or not it is written on an "official" form. Many states have enacted laws explicitly to honor out-of-state directives. If a patient is not using a statutory form, it may be advisable to attach a statutory form to the advance care directive being used. State-specific forms are readily available free of charge for health care providers, patients, and families through the National Hospice and Palliative Care Organization's "Caring Connections" website (http://www.caringinfo.org).

In January 2014, Texas judge R. H. Wallace ruled that a brain-dead woman who was 23 weeks pregnant should be removed from life support. The ruling came after several months of disagreement between the woman's family and the hospital providing care. The hospital cited Texas law stating that life-sustaining treatment must be administered to a pregnant woman, but the judge sided with the woman's family, saying that the law did not apply because the patient was legally dead.

As of 2013, advance directives are legal in all states and the District of Columbia, whether through state-specific legislation, state judicial rulings, or United States Supreme Court rulings. Many states have their own statutory forms. Massachusetts and Michigan do not have living will laws, although both have health care proxy laws. In 27 states, the laws state that the living will is not valid if a woman is pregnant. However, like all other states except Alaska, these states have enacted durable power of attorney for health care laws that permit patients to designate a proxy decision-maker with authority to terminate life-sustaining treatments. Only in Alaska does the law prohibit proxies from terminating life-sustaining treatments. The health reform legislation, the Affordable Care Act of 2010, raised substantial controversy when early versions of the law included Medicare reimbursement for advance care planning consultations. These provisions were withdrawn over accusations that they would lead to the rationing of care for the elderly.

PHYSICAL SYMPTOMS AND THEIR MANAGEMENT
Great emphasis has been placed on addressing dying patients' pain. Some institutions have made pain assessment a fifth vital sign to emphasize its importance. This approach also has been advocated by large health care systems such as the Veterans' Administration and by accrediting bodies such as the Joint Commission. Although this embrace of pain as the fifth vital sign has been symbolically important, no data document that it has improved pain management practices. Although good palliative care requires good pain management, it also requires more. The frequency of symptoms varies by disease and other factors. The most common physical and psychological symptoms among all terminally ill patients include pain, fatigue, insomnia, anorexia, dyspnea, depression, anxiety, and nausea and vomiting. In the last days of life, terminal delirium is also common. Assessments of patients with advanced cancer have shown that patients experienced an average of 11.5 different physical and psychological symptoms (Table 10-4).

Evaluations to determine the etiology of these symptoms usually can be limited to the history and physical examination. In some cases, radiologic or other diagnostic examinations will provide sufficient benefit in directing optimal palliative care to warrant the risks, potential discomfort, and inconvenience they pose, especially for a seriously ill patient.
Only a few of the common symptoms that present difficult management issues will be addressed in this chapter. Additional information on the management of other symptoms, such as nausea and vomiting, insomnia, and diarrhea, can be found in Chaps. 54 and 99, Chap. 38, and Chap. 55, respectively.

Pain • Frequency The frequency of pain among terminally ill patients varies widely. Substantial pain occurs in 36–90% of patients with advanced cancer. In the SUPPORT study of hospitalized patients with diverse conditions and an estimated survival of ≤6 months, 22% reported moderate to severe pain, and caregivers of those patients noted that 50% had similar levels of pain during the last few days of life. A meta-analysis found a pain prevalence of 58–69% in studies that included patients characterized as having advanced, metastatic, or terminal cancer; 44–73% in studies that included patients characterized as undergoing cancer treatment; and 21–46% in studies that included posttreatment individuals.

Etiology Nociceptive pain is the result of direct mechanical or chemical stimulation of nociceptors and normal neural signaling to the brain. It tends to be localized, aching, throbbing, and cramping. The classic example is bone metastases. Visceral pain is caused by nociceptors in gastrointestinal, respiratory, and other organ systems. It is a deep or colicky type of pain classically associated with pancreatitis, myocardial infarction, or tumor invasion of viscera. Neuropathic pain arises from disordered nerve signals. It is described by patients as burning, electrical, or shocklike pain. Classic examples are poststroke pain, tumor invasion of the brachial plexus, and herpetic neuralgia.

Assessment Pain is a subjective experience. Depending on the patient's circumstances, perspective, and physiologic condition, the same physical lesion or disease state can produce different levels of reported pain and need for pain relief.
Systematic assessment includes eliciting the following: (1) type: throbbing, cramping, burning, etc.; (2) periodicity: continuous, with or without exacerbations, or incident; (3) location; (4) intensity; (5) modifying factors; (6) effects of treatments; (7) functional impact; and (8) impact on the patient. Several validated pain assessment measures may be used, such as the Visual Analogue Scale, the Brief Pain Inventory, and the pain component of one of the more comprehensive symptom assessment instruments. Frequent reassessments are essential to gauge the effects of interventions.

Interventions Interventions for pain must be tailored to each individual, with the goal of preempting chronic pain and relieving breakthrough pain. At the end of life, there is rarely reason to doubt a patient's report of pain. Pain medications are the cornerstone of management. If they are failing and nonpharmacologic interventions—including radiotherapy and anesthetic or neurosurgical procedures such as peripheral nerve blocks or epidural medications—are required, a pain consultation is appropriate. Pharmacologic interventions follow the World Health Organization three-step approach involving nonopioid analgesics, mild opioids, and strong opioids, with or without adjuvants (Chap. 18). Nonopioid analgesics, especially nonsteroidal anti-inflammatory drugs (NSAIDs), are the initial treatments for mild pain. They work primarily by inhibiting peripheral prostaglandins and reducing inflammation but also may have central nervous system (CNS) effects. They have a ceiling effect. Ibuprofen, up to a total dose of 1600 mg/d given in four doses of 400 mg each, has a minimal risk of causing bleeding and renal impairment and is a good initial choice. In patients with a history of severe gastrointestinal (GI) or other bleeding, it should be avoided.
In patients with a history of mild gastritis or gastroesophageal reflux disease (GERD), acid-lowering therapy such as a proton pump inhibitor should be used. Acetaminophen is an alternative in patients with a history of GI bleeding and can be used safely at up to 4 g/d given in four doses of 1 g each. In patients with liver dysfunction due to metastases or other causes and in patients with heavy alcohol use, doses should be reduced. If nonopioid analgesics are insufficient, opioids should be introduced. They work by interacting with µ opioid receptors in the CNS to activate pain-inhibitory neurons; most are receptor agonists. The mixed agonist/antagonist opioids useful for postacute pain should not be used for the chronic pain of end-of-life care. Weak opioids such as codeine can be used initially. However, if they are escalated and fail to relieve pain, strong opioids such as morphine, 5–10 mg every 4 h, should be used. Nonopioid analgesics should be combined with opioids because they potentiate the effect of opioids. For continuous pain, opioids should be administered on a regular, around-the-clock basis consistent with their duration of analgesia. They should not be provided only when the patient experiences pain; the goal is to prevent patients from experiencing pain. Patients also should be provided rescue medication, such as liquid morphine, for breakthrough pain, generally at 20% of the baseline dose. Patients should be informed that using the rescue medication does not obviate the need to take the next standard dose of pain medication. If after 24 h the patient's pain remains uncontrolled and recurs before the next dose, requiring use of the rescue medication, the standing daily opioid dose can be increased by the total dose of rescue medication used, or by 50% of the standing daily dose for moderate pain and 100% for severe pain. It is inappropriate to start with extended-release preparations.
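The rescue-dose and dose-escalation arithmetic just described (a rescue dose of roughly 20% of the baseline dose; escalation of the standing daily dose by the total rescue medication consumed, or by 50% for moderate and 100% for severe pain) can be sketched as a simple calculation. This is an illustration of the arithmetic only, not clinical guidance; the function names and the choice to take the larger of the two increments are assumptions made for the example.

```python
def rescue_dose(baseline_daily_mg: float) -> float:
    """Breakthrough (rescue) dose: ~20% of the baseline opioid dose,
    per the rule of thumb in the text."""
    return 0.20 * baseline_daily_mg

def escalated_daily_dose(baseline_daily_mg: float,
                         rescue_mg_used_24h: float,
                         severity: str) -> float:
    """Next standing daily dose if pain remains uncontrolled after 24 h.

    The text allows increasing the daily dose either by the total rescue
    medication used or by 50% (moderate pain) / 100% (severe pain) of the
    standing dose. Which rule to apply is a clinical judgment; taking the
    larger of the two here is an assumption made for illustration.
    """
    pct = {"moderate": 0.50, "severe": 1.00}[severity]
    by_rescue = baseline_daily_mg + rescue_mg_used_24h
    by_percent = baseline_daily_mg * (1 + pct)
    return max(by_rescue, by_percent)

# Example: 60 mg/d standing morphine, 30 mg of rescue used, severe pain.
print(rescue_dose(60))                         # -> 12.0
print(escalated_daily_dose(60, 30, "severe"))  # -> 120.0
```

For moderate pain with 40 mg of rescue used against the same 60 mg/d baseline, the rescue-based rule would dominate and return 100 mg/d.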
Instead, an initial focus on short-acting preparations during the first 24–48 h will allow clinicians to determine the opioid dose needed. Once pain relief is obtained with short-acting preparations, one should switch to extended-release preparations. Even with a stable extended-release regimen, the patient may have incident pain, such as during movement or dressing changes. Short-acting preparations should be taken before such predictable episodes. Although less common, patients may have "end-of-dose failure" with long-acting opioids, meaning that they develop pain after 8 h in the case of an every-12-h medication. In these cases, a trial of giving an every-12-h medication every 8 h is appropriate. Because of differences in opioid receptors, cross-tolerance among opioids is incomplete, and patients may experience different side effects with different opioids. Therefore, if a patient is not experiencing pain relief or is experiencing too many side effects, a change to another opioid preparation is appropriate. When switching, one should begin with 50–75% of the published equianalgesic dose of the new opioid. Unlike NSAIDs, opioids have no ceiling effect; therefore, there is no maximum dose no matter how many milligrams the patient is receiving. The appropriate dose is the dose needed to achieve pain relief. This is an important point for clinicians to explain to patients and families. Addiction or excessive respiratory depression is extremely unlikely in the terminally ill; fear of these side effects should neither prevent escalating opioid medications when the patient is experiencing insufficient pain relief nor justify using opioid antagonists. Opioid side effects should be anticipated and treated preemptively. Nearly all patients experience constipation, which can be debilitating (see below). Failure to prevent constipation often results in noncompliance with opioid therapy.
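The rotation rule above (begin the new opioid at 50–75% of its published equianalgesic dose, because cross-tolerance is incomplete) is likewise simple arithmetic. In this sketch the equianalgesic dose itself must come from a published reference table; the function name and the 25–50% `reduction` parameterization are assumptions for illustration, not a clinical tool.

```python
def rotation_starting_dose(equianalgesic_daily_mg: float,
                           reduction: float = 0.50) -> float:
    """Starting daily dose of a new opioid after rotation.

    `equianalgesic_daily_mg` is the new opioid's published equianalgesic
    equivalent of the current regimen (looked up in a reference table,
    not computed here). Per the text, start at 50-75% of that value,
    i.e., reduce it by 25-50%; the conservative 50% reduction is the
    default chosen for this sketch.
    """
    if not 0.25 <= reduction <= 0.50:
        raise ValueError("reduction outside the 25-50% range implied by the text")
    return equianalgesic_daily_mg * (1 - reduction)

# Example: a published table gives an equianalgesic dose of 40 mg/d.
print(rotation_starting_dose(40))                  # -> 20.0 (the 50% floor)
print(rotation_starting_dose(40, reduction=0.25))  # -> 30.0 (the 75% ceiling)
```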
Methylnaltrexone is a drug that targets opioid-induced constipation by blocking peripheral opioid receptors but not the central receptors responsible for analgesia. In placebo-controlled trials, it has been shown to cause laxation within 24 h of administration. About a third of patients using opioids experience nausea and vomiting but, unlike with constipation, tolerance develops, usually within a week. Therefore, when one is beginning opioids, an antiemetic such as metoclopramide or a serotonin antagonist often is prescribed prophylactically and stopped after 1 week. Olanzapine also has antinausea properties and can be effective in countering delirium or anxiety, with the advantage of some weight gain. Drowsiness, a common side effect of opioids, also usually abates within a week. During this period, drowsiness can be treated with psychostimulants such as dextroamphetamine, methylphenidate, and modafinil. Modafinil has the advantage of once-daily dosing. Pilot reports suggest that donepezil may also be helpful for opioid-induced drowsiness as well as for relieving fatigue and anxiety. Metabolites of morphine and most opioids are cleared renally; doses may have to be adjusted for patients with renal failure. Seriously ill patients who require chronic pain relief rarely if ever become addicted. Suspicion of addiction should not be a reason to withhold pain medications from terminally ill patients. Patients and families may withhold prescribed opioids for fear of addiction or dependence. Physicians and health care providers should reassure patients and families that the patient will not become addicted to opioids if they are used as prescribed for pain relief; this fear should not prevent the patient from taking the medications around the clock. However, diversion of drugs for use by other family members or illicit sale may occur. It may be necessary to advise the patient and caregiver about secure storage of opioids.
Contract writing with the patient and family can help. If that fails, transfer to a safe facility may be necessary. Tolerance is the need to increase medication dosage to achieve the same pain relief without a change in disease. In patients with advanced disease, the need for increasing opioid dosage for pain relief usually is caused by disease progression rather than tolerance. Physical dependence is indicated by symptoms from the abrupt withdrawal of opioids and should not be confused with addiction. Adjuvant analgesic medications are nonopioids that potentiate the analgesic effects of opioids. They are especially important in the management of neuropathic pain. Gabapentin and pregabalin, calcium channel alpha-2-delta ligands, are now the first-line treatments for neuropathic pain from a variety of causes. Gabapentin is begun at 100–300 mg bid or tid, with 50–100% dose increments every 3 days. Usually 900–3600 mg/d in two or three doses is effective. The combination of gabapentin and nortriptyline may be more effective than gabapentin alone. Potential side effects of gabapentin to be aware of are confusion and drowsiness, especially in the elderly. Pregabalin has the same mechanism of action as gabapentin but is absorbed more efficiently from the GI tract. It is started at 75 mg bid and increased to 150 mg bid; the maximum dose is 225 mg bid. Carbamazepine, a first-generation agent, has been proved effective in randomized trials for neuropathic pain. Other potentially effective anticonvulsant adjuvants include topiramate (25–50 mg qd or bid, rising to 100–300 mg/d) and oxcarbazepine (75–300 mg bid, rising to 1200 mg bid). Glucocorticoids, preferably dexamethasone given once a day, can be useful in reducing inflammation that causes pain while elevating mood, energy, and appetite. The main side effects include confusion, sleep difficulties, and fluid retention.
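The gabapentin titration above (start at 100–300 mg bid or tid, increase by 50–100% every 3 days, with a typical effective range of 900–3600 mg/d) implies a geometric escalation of the total daily dose. The following sketch illustrates that arithmetic only; the 50% default increment, the 3600 mg/d cap, and the 15-day horizon are assumptions chosen for the example, not a dosing recommendation.

```python
def titration_schedule(start_daily_mg: float,
                       increment: float = 0.50,
                       cap_daily_mg: float = 3600,
                       step_days: int = 3,
                       total_days: int = 15) -> list:
    """Return (day, total daily dose in mg) pairs for a geometric titration.

    The daily dose rises by `increment` (50-100% per the text) every
    `step_days` days and is clipped at `cap_daily_mg`.
    """
    schedule, dose = [], float(start_daily_mg)
    for day in range(0, total_days + 1, step_days):
        schedule.append((day, dose))
        dose = min(dose * (1 + increment), cap_daily_mg)
    return schedule

# Example: 300 mg bid = 600 mg/d to start, +50% every 3 days.
for day, dose in titration_schedule(600):
    print(f"day {day:2d}: {dose:6.1f} mg/d")
```

With a 100% increment the cap is reached sooner; both choices stay within the 50–100% range the text gives.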
Glucocorticoids are especially effective for bone pain and for abdominal pain from distention of the GI tract or liver. Other drugs, including clonidine and baclofen, can be effective in pain relief. These drugs are adjuvants and generally should be used in conjunction with—not instead of—opioids. Methadone, carefully dosed because of its unpredictable half-life in many patients, has activity at the N-methyl-d-aspartate (NMDA) receptor and is useful for complex pain syndromes and neuropathic pain. It generally is reserved for cases in which first-line opioids (morphine, oxycodone, hydromorphone) are either ineffective or unavailable. Radiation therapy can treat bone pain from single metastatic lesions. Bone pain from multiple metastases can be amenable to radiopharmaceuticals such as strontium-89 and samarium-153. Bisphosphonates (such as pamidronate [90 mg every 4 weeks]) and calcitonin (200 IU intranasally once or twice a day) also provide relief from bone pain but have an onset of action of days.

Constipation • Frequency Constipation is reported in up to 87% of patients requiring palliative care.

Etiology Although hypercalcemia and other factors can cause constipation, it is most frequently a predictable consequence of opioids used for the relief of pain and dyspnea, of tricyclic antidepressants through their anticholinergic effects, and of the inactivity and poor diet that are common among seriously ill patients. If untreated, constipation can cause substantial pain and vomiting and also is associated with confusion and delirium. Whenever opioids and other medications known to cause constipation are used, preemptive treatment for constipation should be instituted.

Assessment The physician should establish the patient's previous bowel habits, including the frequency, consistency, and volume of bowel movements. Abdominal and rectal examinations should be performed to exclude impaction or an acute abdomen.
A number of constipation assessment scales are available, although guidelines issued in the Journal of Palliative Medicine did not recommend them for routine practice. Radiographic assessments beyond a simple flat plate of the abdomen in cases in which obstruction is suspected are rarely necessary.

Intervention Reestablishing comfortable bowel habits and relieving pain and discomfort should be the goals of any measures to address constipation during end-of-life care. Although physical activity, adequate hydration, and dietary treatments with fiber can be helpful, each is limited in its effectiveness for most seriously ill patients, and fiber may exacerbate problems in the setting of dehydration and if impaired motility is the etiology. Fiber is contraindicated in the presence of opioid use. Stimulant and osmotic laxatives, stool softeners, fluids, and enemas are the mainstays of therapy (Table 10-5). In preventing constipation from opioids and other medications, a combination of a laxative and a stool softener (such as senna and docusate) should be used. If a bowel movement has not occurred after several days of treatment, a rectal examination to remove impacted stool and place a suppository is necessary. For patients with impending bowel obstruction or gastric stasis, octreotide to reduce secretions can be helpful. For patients in whom the suspected mechanism is dysmotility, metoclopramide can be helpful.

Nausea • Frequency Up to 70% of patients with advanced cancer have nausea, defined as the subjective sensation of wanting to vomit.

Etiology Nausea and vomiting are both caused by stimulation at one of four sites: the GI tract, the vestibular system, the chemoreceptor trigger zone (CTZ), and the cerebral cortex.
Medical treatments for nausea are aimed at receptors at each of these sites: the GI tract contains mechanoreceptors, chemoreceptors, and 5-hydroxytryptamine type 3 (5-HT3) receptors; the vestibular system probably contains histamine and acetylcholine receptors; and the CTZ contains chemoreceptors, dopamine type 2 receptors, and 5-HT3 receptors. An example of nausea that most likely is mediated by the cortex is anticipatory nausea before a dose of chemotherapy or other noxious stimuli. Specific causes of nausea include metabolic changes (liver failure, uremia from renal failure, hypercalcemia), bowel obstruction, constipation, infection, GERD, vestibular disease, brain metastases, medications (including antibiotics, NSAIDs, proton pump inhibitors, opioids, and chemotherapy), and radiation therapy. Anxiety can also contribute to nausea.

Intervention Medical treatment of nausea is directed at the anatomic and receptor-mediated cause revealed by a careful history and physical examination. When a single specific cause is not found, many advocate beginning treatment with a dopamine antagonist such as haloperidol or prochlorperazine. Prochlorperazine is usually more sedating than haloperidol. When decreased motility is suspected, metoclopramide can be an effective treatment. When inflammation of the GI tract is suspected, glucocorticoids such as dexamethasone are an appropriate treatment. For nausea that follows chemotherapy and radiation therapy, one of the 5-HT3 receptor antagonists (ondansetron, granisetron, dolasetron, palonosetron) is recommended. Studies suggest that palonosetron has higher receptor-binding affinity than, and clinical superiority to, the other 5-HT3 receptor antagonists. Clinicians should attempt to prevent postchemotherapy nausea rather than provide treatment after the fact. Current clinical guidelines recommend tailoring the strength of treatments to the specific emetic risk posed by a specific chemotherapy drug.
When a vestibular cause (such as "motion sickness" or labyrinthitis) is suspected, antihistamines such as meclizine (whose primary side effect is drowsiness) or anticholinergics such as scopolamine can be effective. In anticipatory nausea, a benzodiazepine such as lorazepam is indicated. As with antihistamines, drowsiness and confusion are the main side effects.

Dyspnea • Frequency Dyspnea is a subjective experience of being short of breath. Frequencies vary among causes of death, but dyspnea can affect 80–90% of dying patients with lung cancer, COPD, and heart disease. Dyspnea is among the most distressing physical symptoms and can be even more distressing than pain.

Assessment As with pain, dyspnea is a subjective experience that may not correlate with objective measures of PO2, PCO2, or respiratory rate. Consequently, measurements of oxygen saturation through pulse oximetry or blood gases are rarely helpful in guiding therapy. Despite the limitations of existing assessment methods, physicians should regularly assess and document patients' experience of dyspnea and its intensity. Guidelines recommend visual or analogue dyspnea scales to assess the severity of symptoms and the effects of treatment. Potentially reversible or treatable causes of dyspnea include infection, pleural effusions, pulmonary emboli, pulmonary edema, asthma, and tumor encroachment on the airway. However, the risk-versus-benefit ratio of diagnostic and therapeutic interventions for patients with little time left to live must be considered carefully before diagnostic steps are undertaken. Frequently, a specific etiology cannot be identified, and dyspnea is the consequence of progression of an underlying disease that cannot be treated. The anxiety caused by dyspnea and the choking sensation can significantly exacerbate the underlying dyspnea in a negatively reinforcing cycle.
Interventions When reversible or treatable etiologies are diagnosed, they should be treated as long as the side effects of treatment, such as repeated drainage of effusions or anticoagulation, are less burdensome than the dyspnea itself. More aggressive treatments, such as stenting a bronchial lesion, may be warranted if it is clear that the dyspnea is due to tumor invasion at that site and if the patient and family understand the risks of such a procedure. Usually, treatment will be symptomatic (Table 10-6). A dyspnea scale and careful monitoring should guide dose adjustment. Low-dose opioids reduce the sensitivity of the central respiratory center and the sensation of dyspnea. If patients are not receiving opioids, weak opioids can be initiated; if patients are already receiving opioids, morphine or other strong opioids should be used. Controlled trials do not support the use of nebulized opioids for dyspnea at the end of life. Phenothiazines, such as chlorpromazine, may be helpful when combined with opioids. Benzodiazepines can be helpful if anxiety is present but should be neither used as first-line therapy nor used alone in the treatment of dyspnea. If the patient has a history of COPD or asthma, inhaled bronchodilators and glucocorticoids may be helpful. If the patient has pulmonary edema due to heart failure, diuresis with a medication such as furosemide is indicated. Excess secretions can be dried with scopolamine, administered transdermally or intravenously. Use of oxygen is controversial: the data on its effectiveness for patients with proven hypoxemia are conflicting, and there is no clear benefit of oxygen over room air for nonhypoxemic patients. Noninvasive positive-pressure ventilation using a face mask or nasal plugs may provide symptom relief for some patients. For some families and patients, oxygen is distressing; for others, it is reassuring.
More general interventions that the medical staff can undertake include sitting the patient upright, removing smoke or other irritants such as perfume, ensuring a supply of fresh air with sufficient humidity, and minimizing other factors that can increase anxiety.

Fatigue • Frequency More than 90% of terminally ill patients experience fatigue and/or weakness. Fatigue is one of the most commonly reported symptoms of cancer treatment and is also prominent in the palliative care of multiple sclerosis, COPD, heart failure, and HIV. Fatigue frequently is cited as among the most distressing symptoms.

Etiology The multiple causes of fatigue in the terminally ill can be categorized as resulting from the underlying disease; from disease-induced factors such as tumor necrosis factor and other cytokines; and from secondary factors such as dehydration, anemia, infection, hypothyroidism, and drug side effects. Apart from low caloric intake, loss of muscle mass and changes in muscle enzymes may play an important role in the fatigue of terminal illness. The importance of changes in the CNS, especially the reticular activating system, has been hypothesized on the basis of reports of fatigue in patients receiving cranial radiation, experiencing depression, or having chronic pain in the absence of cachexia or other physiologic changes. Finally, depression and other causes of psychological distress can contribute to fatigue.

Assessment Like pain and dyspnea, fatigue is subjective. Objective changes, even in body mass, may be absent. Consequently, assessment must rely on patient self-reporting. Scales used to measure fatigue, such as the Edmonton Functional Assessment Tool, the Fatigue Self-Report Scales, and the Rhoten Fatigue Scale, are usually appropriate for research rather than clinical purposes. In clinical practice, a simple performance assessment such as the Karnofsky Performance Status or the ECOG performance status question "How much of the day does the patient spend in bed?" may be the best measure.
In this 0–4 performance status assessment, 0 = normal activity; 1 = symptomatic without being bedridden; 2 = requiring some, but <50%, bed time; 3 = bedbound more than half the day; and 4 = bedbound all the time. Such a scale allows for assessment over time and correlates with overall disease severity and prognosis. A 2008 review by the European Association for Palliative Care also described several longer assessment tools with 9–20 items, including the Piper Fatigue Inventory, the Multidimensional Fatigue Inventory, and the Brief Fatigue Inventory (BFI).

Interventions For some patients, there are reversible causes of fatigue, such as anemia, but for most patients at the end of life, fatigue will not be "cured." The goal is to ameliorate it and to help patients and families adjust expectations. Behavioral interventions should be used to avoid blaming the patient for inactivity and to educate both the family and the patient that the underlying disease causes physiologic changes that produce low energy levels. Understanding that the problem is physiologic and not psychological can help alter expectations regarding the patient's level of physical activity. Practically, this may mean reducing routine activities, such as housework and cooking or social events outside the house, and making it acceptable to receive guests while lying on a couch. At the same time, institution of exercise regimens and physical therapy can raise endorphin levels, reduce muscle wasting, and reduce the risk of depression. In addition, ensuring good hydration without worsening edema may help reduce fatigue. Discontinuing medications that worsen fatigue may help; these include cardiac medications, benzodiazepines, certain antidepressants, and opioids if pain is well controlled. As end-of-life care proceeds into its final stages, fatigue may protect patients from further suffering, and continued treatment could be detrimental. There are woefully few pharmacologic interventions that target fatigue and weakness.
Glucocorticoids can increase energy and enhance mood. Dexamethasone is preferred for its once-a-day dosing and minimal mineralocorticoid activity. Benefit, if any, usually is seen within the first month. Psychostimulants such as dextroamphetamine (5–10 mg PO) and methylphenidate (2.5–5 mg PO) may also enhance energy levels, although a randomized trial did not show methylphenidate to be beneficial compared with placebo for cancer fatigue. Doses should be given in the morning and at noon to minimize the risk of counterproductive insomnia. Modafinil, developed for narcolepsy, has shown some promise in the treatment of severe fatigue and has the advantage of once-daily dosing. Its precise role in fatigue at the end of life has not been determined. Anecdotal evidence suggests that L-carnitine may improve fatigue, depression, and sleep disruption. Similarly, some studies suggest that ginseng can reduce fatigue.

PSYCHOLOGICAL SYMPTOMS AND THEIR MANAGEMENT

Depression • Frequency Depression at the end of life presents an apparently paradoxical situation. Many people believe that depression is normal among seriously ill patients because they are dying. People frequently say, "Wouldn't you be depressed?" However, depression is not a necessary part of terminal illness and can contribute to needless suffering. Although sadness, anxiety, anger, and irritability are normal responses to a serious condition, they are typically of modest intensity and transient. Persistent sadness and anxiety and the physically disabling symptoms to which they can lead are abnormal and suggestive of major depression. Although as many as 75% of terminally ill patients experience emotional distress and depressive symptoms, <30% of terminally ill patients have major depression. Depression is not limited to cancer patients; it is also found in patients with end-stage renal disease, Parkinson's disease, multiple sclerosis, and other terminal conditions.
Etiology A previous history of depression, a family history of depression or bipolar disorder, and prior suicide attempts are associated with increased risk for depression among terminally ill patients. Other symptoms, such as pain and fatigue, are associated with higher rates of depression; uncontrolled pain can exacerbate depression, and depression can cause patients to be more distressed by pain. Many medications used in the terminal stages, including glucocorticoids, and some anticancer agents, such as tamoxifen, interleukin 2, interferon α, and vincristine, also are associated with depression. Some terminal conditions, such as pancreatic cancer, certain strokes, and heart failure, have been reported to be associated with higher rates of depression, although this association is controversial. Finally, depression may be attributable to grief over the loss of a role or function, social isolation, or loneliness.

Assessment Diagnosing depression among seriously ill patients is complicated because many of the vegetative symptoms in the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders) criteria for clinical depression (insomnia, anorexia and weight loss, fatigue, decreased libido, and difficulty concentrating) are associated with the dying process itself. The assessment of depression in seriously ill patients therefore should focus on dysphoric mood, helplessness, hopelessness, and lack of interest in, enjoyment of, and concentration on normal activities. The single questions "How often do you feel downhearted and blue?" (with "more than a good bit of the time" or similar responses raising concern) and "Do you feel depressed most of the time?" are appropriate for screening. Visual analogue scales can also be useful in screening.

Interventions Physicians must treat any physical symptom, such as pain, that may be causing or exacerbating depression. Fostering adaptation to the many losses that the patient is experiencing can also be helpful.
Nonpharmacologic interventions, including group or individual psychological counseling and behavioral therapies such as relaxation and imagery, can be helpful, especially in combination with drug therapy. Pharmacologic interventions remain the core of therapy. The same medications are used to treat depression in terminally ill patients as in non–terminally ill patients. Psychostimulants may be preferred for patients with a poor prognosis or for those with fatigue or opioid-induced somnolence. Psychostimulants are comparatively fast acting, working within a few days instead of the weeks required for selective serotonin reuptake inhibitors (SSRIs). Dextroamphetamine or methylphenidate should be started at 2.5–5.0 mg in the morning and at noon, the same starting doses used for treating fatigue. The dose can be escalated up to 15 mg bid. Pemoline is a nonamphetamine psychostimulant with minimal abuse potential. It is also effective as an antidepressant, beginning at 18.75 mg in the morning and at noon. Because it can be absorbed through the buccal mucosa, it is preferred for patients with intestinal obstruction or dysphagia. If it is used for prolonged periods, liver function must be monitored. The psychostimulants can also be combined with more traditional antidepressants while waiting for the antidepressants to become effective and then tapered after a few weeks if necessary. Psychostimulants have side effects, particularly initial anxiety, insomnia, and rarely paranoia, which may necessitate lowering the dose or discontinuing treatment. Mirtazapine, an antagonist at the postsynaptic serotonin receptors, is a promising alternative. It should be started at 7.5 mg before bed. It has sedating, antiemetic, and anxiolytic properties with few drug interactions.
Its side effect of weight gain may be beneficial for seriously ill patients, and it is available in orally disintegrating tablets. For patients with a prognosis of several months or longer, SSRIs (including fluoxetine, sertraline, paroxetine, citalopram, escitalopram, and fluvoxamine) and serotonin-noradrenaline reuptake inhibitors such as venlafaxine are the preferred treatment because of their efficacy and comparatively few side effects. Because low doses of these medications may be effective for seriously ill patients, one should use half the usual starting dose for healthy adults. The starting dose for fluoxetine, for instance, is 10 mg once a day. In most cases, once-a-day dosing is possible. The choice of which SSRI to use should be driven by (1) the patient's past success or failure with the specific medication, (2) the most favorable side-effect profile for that specific agent, and (3) the time it takes to reach steady-state drug levels. For instance, for a patient in whom fatigue is a major symptom, a more activating SSRI (fluoxetine) would be appropriate. For a patient in whom anxiety and sleeplessness are major symptoms, a more sedating SSRI (paroxetine) would be appropriate. Atypical antidepressants are recommended only in selected circumstances, usually with the assistance of a specialty consultation. Trazodone can be an effective antidepressant but is sedating and can cause orthostatic hypotension and, rarely, priapism. Therefore, it should be used only when a sedating effect is desired; it is often used for patients with insomnia, at a starting dose of 25 mg. In addition to its antidepressant effects, bupropion is energizing, making it useful for depressed patients who experience fatigue. However, it can cause seizures, precluding its use for patients with a risk of CNS neoplasms or terminal delirium. Finally, alprazolam, a benzodiazepine, starting at 0.25–1.0 mg tid, can be effective in seriously ill patients who have a combination of anxiety and depression.
Although it is potent and works quickly, alprazolam has many drug interactions and may cause delirium, especially among very ill patients, because of its strong binding to the benzodiazepine–γ-aminobutyric acid (GABA) receptor complex. Unless used as adjuvants for the treatment of pain, tricyclic antidepressants are not recommended. Similarly, monoamine oxidase (MAO) inhibitors are not recommended because of their side effects and dangerous drug interactions.

Delirium (See Chap. 34) • Frequency In the weeks or months before death, delirium is uncommon, although it may be significantly underdiagnosed. However, delirium becomes relatively common in the hours and days immediately before death. Up to 85% of patients dying of cancer may experience terminal delirium.

Etiology Delirium is a global cerebral dysfunction characterized by alterations in cognition and consciousness. It frequently is preceded by anxiety, changes in sleep patterns (especially reversal of day and night), and decreased attention. In contrast to dementia, delirium has an acute onset, is characterized by fluctuating consciousness and inattention, and is reversible, although reversibility may be more theoretical than real for patients near death. Delirium may occur in a patient with dementia; indeed, patients with dementia are more vulnerable to delirium. Causes of delirium include metabolic encephalopathy arising from liver or renal failure, hypoxemia, or infection; electrolyte imbalances such as hypercalcemia; paraneoplastic syndromes; dehydration; and primary brain tumors, brain metastases, or leptomeningeal spread of tumor. Commonly, among dying patients, delirium can be caused by side effects of treatments, including radiation for brain metastases, and by medications, including opioids, glucocorticoids, anticholinergic drugs, antihistamines, antiemetics, benzodiazepines, and chemotherapeutic agents. The etiology may be multifactorial; e.g., dehydration may exacerbate opioid-induced delirium.
Assessment Delirium should be recognized in any terminally ill patient with new-onset disorientation, impaired cognition, somnolence, fluctuating levels of consciousness, or delusions with or without agitation. Delirium must be distinguished from acute anxiety and depression as well as from dementia. The central distinguishing feature is altered consciousness, which usually is not noted in anxiety, depression, and dementia. Although "hyperactive" delirium characterized by overt confusion and agitation is probably more common, patients also should be assessed for "hypoactive" delirium characterized by sleep-wake reversal and decreased alertness. In some cases, use of formal assessment tools such as the Mini-Mental State Examination (which does not distinguish delirium from dementia) and the Delirium Rating Scale (which does distinguish delirium from dementia) may be helpful in distinguishing delirium from other processes. The patient's list of medications must be evaluated carefully. Nonetheless, a reversible etiologic factor for delirium is found in fewer than half of terminally ill patients. Because most terminally ill patients experiencing delirium will be very close to death and may be at home, extensive diagnostic evaluations such as lumbar punctures and neuroradiologic examinations are usually inappropriate.

Interventions One of the most important objectives of terminal care is to provide terminally ill patients the lucidity to say goodbye to the people they love. Delirium, especially with agitation during the final days, is distressing to family and caregivers, and witnessing a difficult death is a strong determinant of bereavement difficulties. Thus, terminal delirium should be treated aggressively. At the first sign of delirium, such as day-night reversal with slight changes in mentation, the physician should let the family members know that it is time to be sure that everything they want to say has been said.
The family should be informed that delirium is common just before death. If medications are suspected of causing the delirium, unnecessary agents should be discontinued. Other potentially reversible causes, such as constipation, urinary retention, and metabolic abnormalities, should be treated. Supportive measures aimed at providing a familiar environment should be instituted, including restricting visits to individuals with whom the patient is familiar and eliminating new experiences; orienting the patient, if possible, by providing a clock and calendar; and gently correcting the patient's hallucinations or cognitive mistakes. Pharmacologic management focuses on the use of neuroleptics and, in the extreme, anesthetics (Table 10-7). Haloperidol remains first-line therapy. Usually, patients can be controlled with a low dose (1–3 mg/d), usually given every 6 h, although some may require as much as 20 mg/d. It can be administered PO, SC, or IV. IM injections should not be used except when this is the only way to get a patient under control. Newer atypical neuroleptics, such as olanzapine, risperidone, and quetiapine, have shown significant effectiveness in completely resolving delirium in cancer patients. These drugs have fewer side effects than haloperidol and offer other benefits for terminally ill patients, including antinausea and antianxiety effects and weight gain. They are useful for patients with a longer anticipated life expectancy because they are less likely to cause dysphoria and carry a lower risk of dystonic reactions. Also, because they are metabolized through multiple pathways, they can be used in patients with hepatic and renal dysfunction. Olanzapine has the disadvantages that it is available only orally and that it takes a week to reach steady state; the usual dose is 2.5–5 mg PO bid. Chlorpromazine (10–25 mg every 4–6 h) can be useful if sedation is desired and can be administered IV or PR in addition to PO.
Dystonic reactions resulting from dopamine blockade are a side effect of neuroleptics, although they are reported to be rare when these drugs are used to treat terminal delirium. If patients develop dystonic reactions, benztropine should be administered. Neuroleptics may be combined with lorazepam to reduce agitation when the delirium is the result of alcohol or sedative withdrawal. If no response to first-line therapy is seen, a specialty consultation should be obtained, with a change to a different medication. If patients fail to improve after a second neuroleptic, sedation with an anesthetic such as propofol or continuous-infusion midazolam may be necessary. By some estimates, at the very end of life, as many as 25% of patients experiencing delirium, especially restless delirium with myoclonus or convulsions, may require sedation. Physical restraints should be used with great reluctance and only when the patient's violence is threatening to self or others; if they are used, their appropriateness should be reevaluated frequently.

Insomnia • Frequency Sleep disorders, defined as difficulty initiating or maintaining sleep, sleep difficulty at least 3 nights a week, or sleep difficulty that causes impairment of daytime functioning, occur in 19–63% of patients with advanced cancer. Some 30–74% of patients with other end-stage conditions, including AIDS, heart disease, COPD, and renal disease, experience insomnia.

Etiology Patients with cancer may have changes in sleep efficiency, such as an increase in stage I sleep. Other etiologies of insomnia include coexisting physical illness such as thyroid disease and coexisting psychological illnesses such as depression and anxiety. Medications, including antidepressants, psychostimulants, steroids, and β agonists, are significant contributors to sleep disorders, as are caffeine and alcohol. Multiple over-the-counter medications contain caffeine and antihistamines, which can contribute to sleep disorders.
Assessment Assessment should include specific questions concerning sleep onset, sleep maintenance, and early-morning wakening, as these will provide clues to the causative agents and to management. Patients should be asked about previous sleep problems, screened for depression and anxiety, and asked about symptoms of thyroid disease. Caffeine and alcohol are prominent causes of sleep problems, and a careful history of the use of these substances should be obtained. Both excessive use of and withdrawal from alcohol can cause sleep problems.

Interventions The mainstays of intervention include improvement of sleep hygiene (encouragement of a regular time for sleep, decreased nighttime distractions, elimination of caffeine and other stimulants and of alcohol), interventions to treat anxiety and depression, and treatment for the insomnia itself. For patients with depression who have insomnia and anxiety, a sedating antidepressant such as mirtazapine can be helpful. In the elderly, trazodone, beginning at 25 mg at nighttime, is an effective sleep aid at doses lower than those required for its antidepressant effect. Zolpidem may have a decreased incidence of delirium compared with traditional benzodiazepines, but this has not been clearly established. When benzodiazepines are prescribed, short-acting agents (such as lorazepam) are favored over longer-acting ones (such as diazepam). Patients who receive these medications should be observed for signs of increased confusion and delirium.

SOCIAL NEEDS AND THEIR MANAGEMENT

Financial Burdens • Frequency Dying can impose substantial economic strains on patients and families, causing distress. In the United States, which has one of the least comprehensive health insurance systems among the developed countries, ~20% of terminally ill patients and their families spend >10% of family income on health care costs over and above health insurance premiums.
Between 10 and 30% of families sell assets, use savings, or take out a mortgage to pay for the patient's health care costs. Nearly 40% of terminally ill patients in the United States report that the cost of their illness is a moderate or great economic hardship for their family. The patient is likely to reduce and eventually stop working. In 20% of cases, a family member of the terminally ill patient also stops working to provide care. The major underlying causes of economic burden are related to poor physical functioning and care needs, such as the need for housekeeping, nursing, and personal care. More debilitated patients and poor patients experience greater economic burdens.

Intervention This economic burden should not be ignored as a private matter. It has been associated with a number of adverse health outcomes, including a preference for comfort care over life-prolonging care as well as consideration of euthanasia or physician-assisted suicide. Economic burdens increase the psychological distress of families and caregivers of terminally ill patients, and poverty is associated with many adverse health outcomes. Importantly, recent studies found that patients with advanced cancer who reported having end-of-life conversations with physicians had significantly lower health care costs in their final week of life and that higher costs were associated with a worse quality of death. Assistance from a social worker, early on if possible, to ensure access to all available benefits may be helpful. Many patients, families, and health care providers are unaware of options for long-term care insurance, respite care, the Family and Medical Leave Act (FMLA), and other sources of assistance. Some of these options (such as respite care) may be part of a formal hospice program, but others (such as the FMLA) do not require enrollment in a hospice program.

Relationships • Frequency Settling personal issues and closing the narrative of lived relationships are universal needs.
When asked whether sudden death or death after an illness is preferable, respondents often initially select the former but soon change to the latter as they reflect on the importance of saying goodbye. Bereaved family members who have not had the chance to say goodbye often have a more difficult grief process.

Interventions Care of seriously ill patients requires efforts to facilitate the types of encounters and the time spent with family and friends that are necessary to meet those needs. Family and close friends may need to be accommodated with unrestricted visiting hours, which may include sleeping near the patient even in otherwise regimented institutional settings. Physicians and other health care providers may be able to facilitate and resolve strained interactions between the patient and other family members. Assistance for patients and family members who are unsure about how to create or preserve memories, whether by providing materials such as a scrapbook or memory box or by offering suggestions and informational resources, can be deeply appreciated. Taking photographs and creating videos can be especially helpful to terminally ill patients who have young children or grandchildren.

Family Caregivers • Frequency Caring for seriously ill patients places a heavy burden on families. Families frequently are required to provide transportation and homemaking as well as other services. Typically, paid professionals such as home health nurses and hospice workers supplement family care; only about a quarter of all caregiving consists exclusively of paid professional assistance. The trend toward more out-of-hospital deaths will increase reliance on families for end-of-life care. Increasingly, family members are being called on to provide physical care (such as moving and bathing patients) and medical care (such as assessing symptoms and giving medications) in addition to emotional care and support.
Three-quarters of family caregivers of terminally ill patients are women: wives, daughters, sisters, and even daughters-in-law. Because many are widowed, women tend to be able to rely less on family for caregiving assistance and may need more paid assistance. About 20% of terminally ill patients report substantial unmet needs for nursing and personal care. The impact of caregiving on family caregivers is substantial: both bereaved and current caregivers have a higher mortality rate than non-caregiving controls.

Interventions It is imperative to inquire about unmet needs and to try to ensure that those needs are met, either through the family or by paid professional services when possible. Community assistance through houses of worship or other community groups often can be mobilized by telephone calls from the medical team to someone the patient or family identifies. Sources of support specifically for family caregivers should be identified through local sources or nationally through groups such as the National Family Caregivers Association (www.nfcacares.org), the American Cancer Society (www.cancer.org), and the Alzheimer's Association (www.alz.org).

EXISTENTIAL NEEDS AND THEIR MANAGEMENT

Frequency Religion and spirituality are often important to dying patients. Nearly 70% of patients report becoming more religious or spiritual after becoming terminally ill, and many find comfort in religious or spiritual practices such as prayer. However, ~20% of terminally ill patients become less religious, frequently feeling cheated or betrayed by becoming terminally ill. For other patients, the need is for existential meaning and purpose that is distinct from, and may even be antithetical to, religion or spirituality. When asked, patients and family caregivers frequently report wanting their professional caregivers to be more attentive to religion and spirituality.
Assessment Health care providers are often hesitant about involving themselves in the religious, spiritual, and existential experiences of their patients because these matters may seem private or not relevant to the current illness. But physicians and other members of the care team should be able at least to detect spiritual and existential needs. Screening questions have been developed for a physician's spiritual history taking. Spiritual distress can amplify other types of suffering and even masquerade as intractable physical pain, anxiety, or depression. The screening questions in the comprehensive assessment are usually sufficient. Deeper evaluation and intervention are rarely appropriate for the physician unless no other member of the care team is available or suitable. Pastoral care providers may be helpful, whether from the medical institution or from the patient's own community.

Interventions Precisely how religious practices, spirituality, and existential explorations can be facilitated and can improve end-of-life care is not well established. What is clear is that, for physicians, one main intervention is to inquire about the role and importance of spirituality and religion in a patient's life. This inquiry will help a patient feel heard and help physicians identify specific needs. In one study, only 36% of respondents indicated that a clergy member would be comforting. Nevertheless, the increase in religious and spiritual interest among a substantial fraction of dying patients suggests inquiring of individual patients how this need can be addressed. Some evidence supports specific methods of addressing existential needs in patients, ranging from establishing a supportive group environment for terminal patients to individual treatments emphasizing a patient's dignity and sources of meaning.

Legal Aspects For centuries, it has been deemed ethical to withhold or withdraw life-sustaining interventions.
The current legal consensus in the United States and most developed countries is that patients have a moral as well as a constitutional or common-law right to refuse medical interventions. American courts also have held that incompetent patients have a right to refuse medical interventions. For patients who are incompetent and terminally ill and who have not completed an advance care directive, next of kin can exercise that right, although this ability may be restricted in some states, depending on how clear and convincing the evidence of the patient's preferences is. Courts have limited families' ability to terminate life-sustaining treatments in patients who are conscious and incompetent but not terminally ill. In theory, patients' right to refuse medical therapy can be limited by four countervailing interests: (1) preservation of life, (2) prevention of suicide, (3) protection of third parties such as children, and (4) preservation of the integrity of the medical profession. In practice, these interests almost never override the right of competent patients or of incompetent patients who have left explicit advance care directives. For incompetent patients who either appointed a proxy without specific indications of their wishes or never completed an advance care directive, three criteria have been suggested to guide the decision to terminate medical interventions. First, some commentators suggest that ordinary care should be administered but extraordinary care could be terminated. Because the ordinary/extraordinary distinction is too vague, however, courts and commentators widely agree that it should not be used to justify decisions about stopping treatment. Second, many courts have advocated the use of the substituted-judgment criterion, which holds that proxy decision-makers should try to imagine what the incompetent patient would do if he or she were competent.
However, multiple studies indicate that many proxies, even close family members, cannot accurately predict what the patient would have wanted. Therefore, substituted judgment becomes more of a guessing game than a way of fulfilling the patient's wishes. Finally, the best-interests criterion holds that proxies should evaluate treatments by balancing their benefits and risks and should select those treatments whose benefits maximally outweigh their burdens. Clinicians have a clear and crucial role in this process: carefully and dispassionately explaining the known benefits and burdens of specific treatments. Yet even when that information is as clear as possible, different individuals can have very different views of what is in the patient's best interests, and families may have disagreements or even overt conflicts. The best-interests criterion has also been criticized because there is no single way to determine the balance between benefits and burdens; it depends on a patient's personal values. For instance, for some people, being alive even if mentally incapacitated is a benefit, whereas for others, it may be the worst possible existence. As a matter of practice, physicians rely on family members to make the decisions that they feel are best and object only if those decisions seem to demand treatments that the physicians consider not beneficial.

Practices
Withholding and withdrawing acutely life-sustaining medical interventions from terminally ill patients are now standard practice. More than 90% of American patients die without cardiopulmonary resuscitation (CPR), and just as many forgo other potentially life-sustaining interventions. For instance, in ICUs in the period 1987–1988, CPR was performed 49% of the time, but it was performed only 10% of the time in 1992–1993. On average, 3.8 interventions, such as vasopressors and transfusions, were stopped for each dying ICU patient.
However, up to 19% of decedents in hospitals received interventions such as intubation, ventilation, and surgery in the 48 h preceding death. Practices vary widely among hospitals and ICUs, however, suggesting that physician preference, rather than objective data, plays an important role. Mechanical ventilation may be the most challenging intervention to withdraw. The two approaches are terminal extubation, which is the removal of the endotracheal tube, and terminal weaning, which is the gradual reduction of the FiO2 or ventilator rate. One-third of ICU physicians prefer terminal weaning and 13% prefer extubation; the majority use both techniques. The American Thoracic Society's 2008 clinical policy guidelines note that there is no single correct process of ventilator withdrawal and that physicians use and should be proficient in both methods, but that the chosen approach should carefully balance benefits and burdens as well as patient and caregiver preferences. Physicians' assessment of patients' likelihood of survival, their prediction of possible cognitive damage, and patients' preferences about the use of life support are primary factors in determining the likelihood of withdrawal of mechanical ventilation. Some recommend terminal weaning because patients do not develop upper airway obstruction or the distress caused by secretions or stridor; however, terminal weaning can prolong the dying process and does not allow a patient's family to be with him or her unencumbered by an endotracheal tube. To ensure comfort for conscious or semiconscious patients before withdrawal of the ventilator, neuromuscular blocking agents should be terminated and sedatives and analgesics administered. Removing the neuromuscular blocking agents permits patients to show discomfort, facilitating the titration of sedatives and analgesics; it also permits interactions between patients and their families.
A common practice is to inject a bolus of midazolam (2–4 mg) or lorazepam (2–4 mg) before withdrawal, followed by a 5–10 mg bolus of morphine and a continuous morphine infusion (50% of the bolus dose per hour) during weaning. In patients who have significant upper airway secretions, IV scopolamine can be administered at a rate of 100 μg/h. Additional boluses of morphine or increases in the infusion rate should be administered for respiratory distress or signs of pain. Higher doses will be needed for patients already receiving sedatives and opioids. Families need to be reassured about treatments for common symptoms after withdrawal of ventilatory support, such as dyspnea and agitation, and warned about the uncertain length of survival after withdrawal of ventilatory support: up to 10% of patients unexpectedly survive for 1 day or more after mechanical ventilation is stopped. Beginning in the late 1980s, some commentators argued that physicians could terminate futile treatments demanded by the families of terminally ill patients. Although no objective definition or standard of futility exists, several categories have been proposed. Physiologic futility means that an intervention will have no physiologic effect. Some have defined qualitative futility as applying to procedures that "fail to end a patient's total dependence on intensive medical care." Quantitative futility occurs "when physicians conclude (through personal experience, experiences shared with colleagues, or consideration of reported empiric data) that in the last 100 cases, a medical treatment has been useless." The term futility conceals subjective value judgments about when a treatment is "not beneficial." Deciding whether a treatment that obtains an additional 6 weeks of life or a 1% survival advantage confers benefit depends on patients' preferences and goals. Furthermore, physicians' predictions of when treatments were futile have deviated markedly from the quantitative definition.
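The statistical weakness of the quantitative definition above can be made concrete: even if a treatment failed in the last 100 consecutive cases, the data cannot rule out a clinically meaningful success rate. A minimal sketch of this confidence-bound arithmetic follows; the function names are ours, chosen for illustration, and the snippet is a teaching aid rather than a clinical tool:

```python
def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper confidence bound on the true success rate
    when 0 successes were observed in n consecutive cases ("rule of three")."""
    return 3.0 / n

def exact_upper_bound(n: int, confidence: float = 0.95) -> float:
    """Exact binomial upper bound: the largest rate p at which observing
    0 successes in n trials still has probability >= 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# A treatment "useless in the last 100 cases" could still have a true
# success rate of up to roughly 3%:
print(rule_of_three_upper_bound(100))   # 0.03
print(exact_upper_bound(100))           # ~0.0295
```

In other words, a run of 100 consecutive failures is statistically compatible with a success rate as high as about 3%, which is why studies of this size cannot provide the statistical confidence that quantitative futility determinations presuppose.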
When residents thought CPR was quantitatively futile, more than one in five patients had a >10% chance of survival to hospital discharge. Most studies that purport to guide determinations of futility are based on insufficient data to provide statistical confidence for clinical decision making. Quantitative futility rarely applies in ICU settings. Many commentators reject using futility as a criterion for withdrawing care, preferring instead to treat such situations as conflicts that call for careful negotiation between families and health care providers. Given the lack of consensus over quantitative measures of futility, many hospitals have adopted process-based approaches to resolve disputes over futility and to enhance communication with patients and surrogates, including focusing on interests and alternatives rather than opposing positions and generating a wide range of options. Some hospitals have enacted "unilateral do not resuscitate (DNR)" policies that allow clinicians to write a DNR order when consensus cannot be reached with the family and medical opinion holds that resuscitation, if attempted, would be futile. This type of policy is not a replacement for careful and patient communication and negotiation but recognizes that agreement cannot always be reached. Over the last 15 years, many states, such as Texas, Virginia, Maryland, and California, have enacted so-called medical futility laws that provide physicians a "safe harbor" from liability if they refuse a patient's or family's request for life-sustaining interventions. For instance, in Texas, when a disagreement between the medical team and the family about terminating interventions has not been resolved by an ethics consultation, the hospital is supposed to try to facilitate transfer of the patient to an institution willing to provide the treatment. If this fails after 10 days, the hospital and physician may unilaterally withdraw treatments determined to be futile.
The family may appeal to a state court. Early data suggest that the law increases futility consultations for the ethics committee and that although most families concur with withdrawal, about 10–15% of families refuse to withdraw treatment. Approximately 12 cases have gone to court in Texas in the 7 years since the adoption of the law. As of 2007, there had been 974 ethics committee consultations on medical futility cases and 65 in which committees ruled against families and gave notice that treatment would be terminated. Treatment was withdrawn for 27 of those patients; the remainder were transferred to other facilities or died while awaiting transfer.

Euthanasia and physician-assisted suicide are defined in Table 10-8. Terminating life-sustaining care and providing opioid medications to manage symptoms have long been considered ethical by the medical profession and legal by courts and should not be confused with euthanasia or physician-assisted suicide.

Legal Aspects
Euthanasia is legal in the Netherlands, Belgium, and Luxembourg. It was legalized in the Northern Territory of Australia in 1995, but that legislation was repealed in 1997. Euthanasia is not legal in any state in the United States. In Switzerland, under certain conditions, a layperson can legally assist suicide. In the United States, physician-assisted suicide is legal in four states: Oregon, Vermont, and Washington State by legislation and Montana by court ruling. In jurisdictions where physician-assisted suicide is legal, physicians wishing to prescribe the necessary medication must fulfill multiple criteria and complete processes that include a waiting period. In other countries and in all other U.S. states, physician-assisted suicide and euthanasia are illegal explicitly or by common law.

Practices
Fewer than 10–20% of terminally ill patients actually consider euthanasia and/or physician-assisted suicide for themselves.
In the Netherlands and Oregon, >70% of patients using these interventions are dying of cancer; in Oregon, in 2013, just 1.2% of physician-assisted suicide cases involved patients with HIV/AIDS and 7.2% involved patients with amyotrophic lateral sclerosis. In the Netherlands, the share of deaths attributable to euthanasia or physician-assisted suicide declined from around 2.8% of all deaths in 2001 to around 1.8% in 2005. In 2013, the last year with complete data, around 71 patients in Oregon (just 0.2% of all deaths) died by physician-assisted suicide, although this may be an underestimate. In Washington State, between March 2009 (when the law allowing physician-assisted suicide went into force) and December 2009, 36 individuals died from prescribed lethal doses. Pain is not a primary motivator for patients’ requests for or interest in euthanasia and/or physician-assisted suicide. Fewer than 25% of all patients in Oregon cite inadequate pain control as the reason for desiring physician-assisted suicide. Depression, hopelessness, and, more profoundly, concerns about loss of dignity or autonomy or being a burden on family members appear to be primary factors motivating a desire for euthanasia or physician-assisted suicide. Over 75% cite loss of autonomy or dignity and inability to engage in enjoyable activities as the reason for wanting physician-assisted suicide. About 40% cite being a burden on family. 
TABLE 10-8 Definitions of Euthanasia and Physician-Assisted Suicide

Voluntary active euthanasia: intentionally administering medications or other interventions to cause the patient's death with the patient's informed consent. Legal in the Netherlands and Belgium.

Involuntary active euthanasia: intentionally administering medications or other interventions to cause the patient's death when the patient was competent to consent but did not—e.g., the patient may not have been asked.

Passive euthanasia: withholding or withdrawing life-sustaining medical treatments from a patient to let him or her die (terminating life-sustaining treatments). Legal everywhere.

Physician-assisted suicide: a physician provides medications or other interventions to a patient with the understanding that the patient can use them to commit suicide. Legal in Oregon, the Netherlands, Belgium, and Switzerland.

A study from the Netherlands showed that depressed terminally ill cancer patients were four times more likely to request euthanasia and confirmed that uncontrolled pain was not associated with greater interest in euthanasia. Interestingly, despite the importance of emotional distress in motivating requests for euthanasia and physician-assisted suicide, few patients receive psychiatric care. For instance, in Oregon, only 5.9% of patients have been referred for psychiatric evaluation. Euthanasia and physician-assisted suicide are no guarantee of a painless, quick death. Data from the Netherlands indicate that in as many as 20% of cases technical and other problems arose, including patients waking from coma, not becoming comatose, regurgitating medications, and experiencing a prolonged time to death. Data from Oregon indicate that between 1997 and 2013, 22 patients (~5%) regurgitated after taking prescribed medication, 1 patient awoke, and none experienced seizures. Problems were significantly more common in physician-assisted suicide, sometimes requiring the physician to intervene and provide euthanasia.
Whether practicing in a setting where euthanasia is legal or not, over a career, 12–54% of physicians receive a request for euthanasia or physician-assisted suicide from a patient. Competency in dealing with such a request is crucial. Although challenging, the request can also provide a chance to address intense suffering. After receiving a request for euthanasia and/or physician-assisted suicide, health care providers should carefully clarify the request with empathic, open-ended questions that help elucidate its underlying cause, such as "What makes you want to consider this option?" Endorsing either moral opposition or moral support for the act tends to be counterproductive, giving an impression of being judgmental or of endorsing the idea that the patient's life is worthless. Health care providers must reassure the patient of continued care and commitment. The patient should be educated about alternative, less controversial options, such as symptom management and withdrawal of any unwanted treatments, and about the reality of euthanasia and/or physician-assisted suicide, because the patient may have misconceptions about their effectiveness as well as about the legal implications of the choice. Depression, hopelessness, and other symptoms of psychological distress, as well as physical suffering and economic burdens, are likely factors motivating the request; such factors should be assessed and treated aggressively. After these interventions and clarification of options, most patients proceed with another approach, such as declining life-sustaining interventions, possibly including refusal of nutrition and hydration.

Most laypersons have limited experience with the actual dying process and death. They frequently do not know what to expect of the final hours and afterward. The family and other caregivers must be prepared, especially if the plan is for the patient to die at home.
Patients in the last days of life typically experience extreme weakness and fatigue and become bedbound; this can lead to pressure sores. The issue of turning patients who are near the end of life, however, must be balanced against the potential discomfort that movement may cause. Patients stop eating and drinking, with drying of mucosal membranes and dysphagia. Careful attention to oral swabbing, lubricants for the lips, and use of artificial tears can provide a form of care that substitutes for attempts at feeding the patient. With loss of the gag reflex and dysphagia, patients may also experience accumulation of oral secretions, producing noises during respiration sometimes called "the death rattle." Scopolamine can reduce the secretions. Patients also experience changes in respiration, with periods of apnea or Cheyne-Stokes breathing. Decreased intravascular volume and cardiac output cause tachycardia, hypotension, peripheral coolness, and livedo reticularis (skin mottling). Patients can have urinary and, less frequently, fecal incontinence. Changes in consciousness and neurologic function generally lead to two different paths to death (Fig. 10-2). Each of these terminal changes can cause patients and families distress, requiring reassurance and targeted interventions (Table 10-9). Informing families that these changes might occur and providing them with an information sheet can help preempt problems and minimize distress. Understanding that patients stop eating because they are dying, not dying because they have stopped eating, can reduce family and caregiver anxiety.

FIGURE 10-2 Common and uncommon clinical courses in the last days of terminally ill patients. (Adapted from FD Ferris et al: Module 4: Palliative care, in Comprehensive Guide for the Care of Persons with HIV Disease. Toronto: Mt. Sinai Hospital and Casey Hospice, 1995, http://www.cpsonline.info/content/resources/hivmodule/module4complete.pdf.)
Similarly, informing the family and caregivers that the "death rattle" may occur and that it is not indicative of suffocation, choking, or pain can reduce their worry about the breathing sounds. Families and caregivers may also feel guilty about stopping treatments, fearing that they are "killing" the patient. This may lead to demands for interventions, such as feeding tubes, that may be ineffective. In such cases, the physician should remind the family and caregivers about the inevitability of events and the palliative goals. Interventions may prolong the dying process and cause discomfort. Physicians also should emphasize that withholding treatments is both legal and ethical and that the family members are not the cause of the patient's death. This reassurance may have to be provided multiple times. Hearing and touch are said to be the last senses to stop functioning. Whether or not this is the case, families and caregivers can be encouraged to communicate with the dying patient. Encouraging them to talk directly to the patient, even if he or she is unconscious, and to hold the patient's hand or demonstrate affection in other ways can be an effective means of channeling their urge "to do something" for the patient. When the plan is for the patient to die at home, the physician must inform the family and caregivers how to determine that the patient has died. The cardinal signs are cessation of cardiac function and respiration; the pupils become fixed, the body becomes cool, muscles relax, and incontinence may occur.
Remind the family and caregivers that the eyes may remain open even after the patient has died, because the retroorbital fat pad may be depleted, permitting the orbit to fall posteriorly, which makes it difficult for the eyelids to cover the eyeball.

TABLE 10-9 Changes in the patient's condition in the final days: potential complications, family concerns, and management

Profound fatigue. Complication: the patient becomes bedbound, with development of pressure ulcers that are prone to infection, malodor, and pain, and with joint pain. Family concern: "Patient is lazy and giving up." Management: Reassure the family and caregivers that terminal fatigue will not respond to interventions and should not be resisted. Use an air mattress if necessary.

Anorexia. Family concern: "Patient is giving up; patient will suffer from hunger and will starve to death." Management: Reassure the family and caregivers that the patient is not eating because he or she is dying; not eating at the end of life does not cause suffering or death. Forced feeding, whether oral, parenteral, or enteral, does not reduce symptoms or prolong life.

Dehydration. Family concern: "Patient will suffer from thirst and die of dehydration." Management: Reassure the family and caregivers that dehydration at the end of life does not cause suffering, because patients lose consciousness before any symptom distress. Intravenous hydration can worsen symptoms of dyspnea through pulmonary edema and peripheral edema and can prolong the dying process. Do not force oral intake.

Dysphagia. Complication: inability to swallow oral medications needed for palliative care. Management: Discontinue unnecessary medications that may have been continued, including antibiotics, diuretics, antidepressants, and laxatives. If swallowing pills is difficult, convert essential medications (analgesics, antiemetics, anxiolytics, and psychotropics) to oral solutions or to buccal, sublingual, or rectal administration.

"Death rattle." Family concern: "Patient is choking and suffocating." Management: Reassure the family and caregivers that the sound is caused by secretions in the oropharynx and that the patient is not choking. Reduce secretions with scopolamine (0.2–0.4 mg SC q4h or 1–3 patches q3d). Reposition the patient to permit drainage of secretions. Do not suction; suction can cause patient and family discomfort and is usually ineffective.

Apnea, Cheyne-Stokes respirations, dyspnea. Family concern: "Patient is suffocating." Management: Reassure the family and caregivers that unconscious patients do not experience suffocation or air hunger. Apneic episodes are frequently a premorbid change. Opioids or anxiolytics may be used for dyspnea. Oxygen is unlikely to relieve dyspneic symptoms and may prolong the dying process.

Urinary and fecal incontinence. Complication: potential transmission of infectious agents to caregivers. Family concern: "Patient is dirty, malodorous, and physically repellent." Management: Remind the family and caregivers to use universal precautions. Change bedclothes and bedding frequently. Use diapers, a urinary catheter, or a rectal tube if there is diarrhea or high urine output.

Agitation and delirium. Family concern: "Patient is in horrible pain and going to have a horrible death." Management: Reassure the family and caregivers that agitation and delirium do not necessarily connote physical pain. Depending on the prognosis and goals of treatment, consider evaluating for causes of delirium and modifying medications. Manage symptoms with haloperidol, chlorpromazine, diazepam, or midazolam.

Dry mucosal membranes. Complication: cracked lips, mouth sores, and candidiasis can also cause pain; malodor. Family concern: "Patient may be malodorous or physically repellent." Management: Use baking soda mouthwash or a saliva preparation q15–30min. Use topical nystatin for candidiasis. Coat the lips and nasal mucosa with petroleum jelly q60–90min. Use ophthalmic lubricants q4h or artificial tears q30min.

The physician should establish a plan for whom the family or caregivers will contact when the patient is dying or has died. Without a plan, they may panic and call 911, unleashing a cascade of unwanted events, from arrival of emergency personnel and resuscitation to hospital admission. The family and caregivers should be instructed to contact the hospice (if one is involved), the covering physician, or the on-call member of the palliative care team. They should also be told that the medical examiner need not be called unless the state requires it for all deaths.
Unless foul play is suspected, the health care team need not contact the medical examiner either. Just after the patient dies, even the best-prepared family may experience shock and loss and be emotionally distraught. They need time to assimilate the event and be comforted. Health care providers are likely to find it meaningful to write a bereavement card or letter to the family. The purpose is to communicate about the patient, perhaps emphasizing the patient's virtues and the honor it was to care for the patient, and to express concern for the family's hardship. Some physicians attend the funerals of their patients. Although this is beyond any medical obligation, the presence of the physician can be a source of support to the grieving family and provides an opportunity for closure for the physician. Death of a spouse is a strong predictor of poor health, and even mortality, for the surviving spouse. It may be important to alert the spouse's physician about the death so that he or she is aware of symptoms that might require professional attention.

PALLIATIVE CARE SERVICES: HOW AND WHERE
Determining the best approach to providing palliative care to patients will depend on patient preferences, the availability of caregivers and specialized services in close proximity, institutional resources, and reimbursement. Hospice is a leading, but not the only, model of palliative care services. In the United States, a plurality—41.5%—of hospice care is provided in residential homes. In 2012, just over 17% of hospice care was provided in nursing homes. In the United States, Medicare pays for hospice services under Part A, the hospital insurance part of reimbursement. Two physicians must certify that the patient has a prognosis of ≤6 months if the disease runs its usual course. Prognoses are probabilistic by their nature; patients are not required to die within 6 months but rather to have a condition from which half the individuals with it would not be alive within 6 months.
Patients sign a hospice enrollment form stating their intent to forgo curative services related to their terminal illness, but they can still receive medical services for other comorbid conditions. Patients also can withdraw enrollment and reenroll later; the hospice Medicare benefit can be revoked later to secure traditional Medicare benefits. Payments to the hospice are per diem (or capitated), not fee-for-service. Payments are intended to cover physician services for the medical direction of the care team; regular home care visits by registered nurses and licensed practical nurses; home health aide and homemaker services; chaplain services; social work services; bereavement counseling; and medical equipment, supplies, and medications. No specific therapy is excluded; instead, the goal is for each therapy to be considered for its symptomatic (as opposed to disease-modifying) effect. Additional clinical care, including services of the primary physician, is covered under Medicare Part B even while the hospice Medicare benefit is in place. The health reform legislation signed into law in March 2010—the Affordable Care Act—directs the Secretary of Health and Human Services to gather data on Medicare hospice reimbursement with the goal of reforming payment rates to account for resource use over an entire episode of care. The legislation also requires additional evaluations and reviews of eligibility for hospice care by hospice physicians or nurses. Finally, the legislation establishes a demonstration project for concurrent hospice care in Medicare, which would test and evaluate allowing patients to remain eligible for regular Medicare during hospice care. By 2012, the mean length of enrollment in a hospice was around 71.8 days, and the median was 18.7 days. Such short stays create barriers to establishing high-quality palliative services in patients' homes and also place financial strains on hospice providers because the initial assessments are resource intensive.
Physicians should initiate early referrals to the hospice to allow more time for patients to receive palliative care.

Hospice care has been the main method in the United States for securing palliative services for terminally ill patients. However, efforts are being made to ensure continuity of palliative care across settings and through time. Palliative care services are becoming available as consultative services and, more rarely, as palliative care units in hospitals, in day care and other outpatient settings, and in nursing homes. Palliative care consultations for nonhospice patients can be billed as for other consultations under Medicare Part B, the physician reimbursement part. Many believe palliative care should be offered to patients regardless of their prognosis. A patient, his or her family, and physicians should not have to make a "curative versus palliative care" decision, because it is rarely possible to make such a decisive switch to embracing mortality.

Care near the end of life cannot be measured by most of the available validated outcome measures because palliative care does not consider death a bad outcome. Similarly, the family and patients receiving end-of-life care may not desire the elements elicited in current quality-of-life measurements. Symptom control, enhanced family relationships, and quality of bereavement are difficult to measure and are rarely the primary focus of carefully developed or widely used outcome measures. Nevertheless, outcomes are as important in end-of-life care as in any other field of medical care. Specific end-of-life care instruments are being developed both for assessment, such as the Brief Hospice Inventory and NEST (needs near the end of life screening tool), and for outcome measures, such as the Palliative Care Outcomes Scale, as well as for prognosis, such as the Palliative Prognostic Index. The field of end-of-life care is entering an era of evidence-based practice and continuous improvement through clinical trials.

Chapter 11 Clinical Problems of Aging
Luigi Ferrucci, Stephanie Studenski

While an in-depth understanding of internal medicine serves as a foundation, proper care of older adults should be complemented by insight into the multidimensional effects of aging on disease manifestations, consequences, and response to treatment. In younger adults, individual diseases tend to have a more distinct pathophysiology with well-defined risk factors; the same diseases in older persons may have a less distinct pathophysiology and are often the result of failed homeostatic mechanisms. Causes and clinical manifestations are less specific and can vary widely between individuals. Therefore, the care of older patients demands an understanding of the effects of aging on human physiology and a broader perspective that incorporates geriatric syndromes, disability, social contexts, and goals of care. For example, care planning for the older patient should account for the substantial portion of the wide variability in life expectancy across individuals of the same age that can be predicted by simple and inexpensive measures such as walking speed. Estimation of the expected remaining years of life can guide recommendations about appropriate preventive and other long-term interventions and can shape discussions about treatment alternatives. (See also Chap. 93e.)

Population aging emerged as a worldwide phenomenon for the first time in history within the past century. Since aging influences many facets of life, governments and societies—as well as families and communities—now face new social and economic challenges that affect health care. Fig. 11-1 highlights recent and predicted changes in U.S. population structure.

Many chronic diseases increase in prevalence with age. It is not unusual for older persons to have multiple chronic diseases (Fig. 11-4), although some seem more susceptible than others to co-occurring problems.
Functional problems that pose difficulties or require help in performing basic activities of daily living (ADLs) (Table 11-1) increase with age and are more common among women than among men. In recent decades, the age-specific prevalence of disability has declined, especially in the oldest old. Estimated rates are shown in Fig. 11-5 as the percentage of persons who reported severe difficulty or needed help in bathing, and data on other basic ADLs show similar trends. Although the age-specific prevalence of disability is decreasing, the magnitude of this decline is small compared to the overwhelming effect of population aging. Thus, the number of people with disability in the United States and other countries is rapidly expanding. Rates of cognitive impairments, such as memory problems, also increase with aging (Fig. 11-6). Chronic disease and disability lead to increased use of health care resources. Health care expenditures increase with age, increase more with disability, and are highest in the last year of life. However, new medical technologies and expensive medications are greater influences on health care costs than population aging itself. General practitioners and internists with little specific training in geriatric medicine provide the bulk of care for older persons.

Systemic consequences of aging are widespread but can be clustered into four main domains or processes (Fig. 11-7): (1) body composition; (2) balance between energy availability and energy demand; (3) signaling networks that maintain homeostasis; and (4) neurodegeneration. Each domain can be assessed with routine clinical tests, although more detailed research techniques are also available (Table 11-2).

Body Composition
Profound changes in body composition may be the most evident and inescapable effect of aging (Fig. 11-8). Over the life span, body weight tends to increase through childhood, puberty, and adulthood until late middle age.
Weight tends to decline in men between ages 65 and 70 years and in women somewhat later. Lean body mass, composed predominantly of muscle and visceral organs, decreases steadily after the third decade. In muscle, this atrophy is greater in fast-twitch than in slow-twitch fibers. The origin of this change is unknown, but several lines of evidence suggest that progressive loss of motor neurons probably plays an important role. Fat mass tends to increase in middle age and then declines in late life, reflecting the trajectory of weight change. Waist circumference continues to increase across the life span, a pattern suggesting that visceral fat, which is responsible for most of the pathologic consequences of obesity, continues to accumulate. In some individuals, fat also accumulates inside muscle, affecting muscle quality and function. With age, fibroconnective tissue tends to increase in many organ systems. In muscle, fibroconnective tissue buildup also affects muscle quality and function. In combination, the loss of muscle mass and quality results in reduced muscle strength, which ultimately affects functional capacity and mobility. Muscle strength declines with aging; this decrease not only affects functional status but also is a strong independent predictor of mortality (Fig. 11-9).

Progressive demineralization and architectural modification occur in bone, resulting in a decline of bone strength. Loss of bone strength increases the risk of fracture. Sex differences in the effects of aging on bone mass are due to differences in peak bone mass and the effects of gonadal hormones on bone. Overall, compared with men, women tend to lose bone mass at a younger age and more quickly reach the threshold of low bone strength that increases fracture risk.

All of these changes in body composition can be attributed to disruptions in the links between synthesis, degradation, and repair that normally serve to remodel tissues. Such changes in body composition are influenced not only by aging and illness but also by lifestyle factors such as physical activity and diet. Body composition can be approximated in clinical practice on the basis of weight, height, body mass index (BMI; weight in kilograms divided by height in meters squared), and waist circumference or, more precisely, with dual-energy x-ray absorptiometry, CT, or MRI. In healthy men and women in their twenties, lean body mass is, on average, 85% of body weight, with roughly 50% of lean mass represented by skeletal muscle. With aging, both the percentage of lean mass and the percentage represented by muscle decline rapidly, and these changes have important health and functional consequences.

The overall number of children has remained relatively stable, but explosive growth has occurred among older populations. The percentage of growth is particularly dramatic among the oldest of the old. For example, the number of persons aged 80–89 years more than tripled between 1960 and 2010 and will increase over tenfold between 1960 and 2050. Women already outlive men by many years, and the sex discrepancy in longevity is projected to increase further in the future. Population aging occurs at different rates in varying geographic regions of the world. Over the past century, Europe, Australia, and North America have had the populations with the greatest proportions of older persons, but the populations of Asia and South America are aging rapidly, and the population structure on these continents will resemble that of “older” countries by around 2050 (Fig. 11-2). Among older persons, the oldest old (those >80 years of age) are the fastest-growing segment of the population (Fig. 11-3), and the pace of population aging is projected to accelerate in most countries over the next 50 years. There is no evidence that the rate of population aging is decreasing.

Figure 11-2 Population aging in different geographic regions. (From United Nations World Population Prospects: The 2008 Revision, http://www.un.org/esa/population/publications/wpp2008/wpp2008_highlights.pdf.)

Figure 11-3 Percentage of the population age >80 years from 1950 to 2050 in representative nations. The pace of population aging will accelerate. (From United Nations World Population Prospects: The 2008 Revision, http://www.un.org/esa/population/publications/wpp2008/wpp2008_highlights.pdf.)

Figure 11-4 Prevalence of comorbidity by age group in persons ≥65 years old living in the United States and enrolled in Medicare parts A and B in 1999. (From JL Wolff et al: Arch Intern Med 162:2269, 2002.)

Table 11-1 Activities of Daily Living
Basic activities of daily living (self-care tasks): transferring from bed to chair and back; using the toilet; moving around (as opposed to being bedridden).
Instrumental activities of daily living (not necessary for fundamental functioning, but permit an individual to live independently in a community): using the telephone.[a]
[a] With the recognition that older persons may not be as technologically savvy, since they were not as extensively exposed to technology during their lifetimes.

Figure 11-5 Self-reported prevalence of disability (severe difficulty) in bathing/showering between 1992 and 2007, according to age and sex. (From Medicare Current Beneficiary Survey 1992–2007.)

Figure 11-6 Rates of memory impairment in different age groups. The definition of “moderate or severe memory impairment” is 4 or fewer words recalled out of 20. (Source: Health and Retirement Survey. Accessed November 15, 2013, at aoa.gov/agingstatsdotnet/Main_Site/Data/2000_Documents/healthstatus.aspx.)

Figure 11-7 A unifying model of aging, frailty, and the geriatric syndromes, linking the four aging domains (changes in body composition; discrepancy in energy production/utilization; homeostatic dysregulation; neurodegeneration) to disease susceptibility, reduced functional reserve, frailty, disability, and geriatric syndromes such as anorexia/malnutrition, gait disorders/falls, urinary incontinence, decubitus ulcers, sleep disorders, delirium, and cognitive impairment.

Balance Between Energy Availability and Energy Demand

Release of phosphate from ATP provides every living cell with the energy required for life. However, the store of ATP is only enough for about 6 sec; therefore, ATP is constantly resynthesized. Although ATP can be resynthesized by anaerobic glycolysis, most of the energy used in the body is generated through aerobic metabolism. Therefore, energy consumption is usually estimated indirectly by oxygen consumption (indirect calorimetry). There is currently no method to measure true “fitness,” which is the maximal energy that can be produced by an organism over extended periods. Thus, fitness is estimated indirectly from peak oxygen consumption (MVO2), often during a maximal treadmill test. Fitness declines progressively with aging (Fig. 11-10), and the rate of decline is accelerated in persons who are sedentary and in those affected by chronic diseases.
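The clinical approximation of body composition described earlier in this section rests on simple arithmetic: BMI is weight in kilograms divided by height in meters squared. A minimal sketch of that calculation (the function name is illustrative, not from the text):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Example: 70 kg at 1.75 m tall
print(round(bmi(70, 1.75), 1))  # 22.9
```

As the text notes, more precise assessment of body composition still requires dual-energy x-ray absorptiometry, CT, or MRI.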
A large portion of energy is consumed as the “resting metabolic rate” (RMR)—i.e., the amount of energy expended at rest in a neutral temperature environment and in a postabsorptive state. In healthy men and women, RMR declines with aging, and such decline is only partially explained by the parallel decline in the highly metabolically active tissues that make up lean body mass (Fig. 11-11). However, persons with unstable homeostasis due to illness require additional energy for compensatory mechanisms. Indeed, observational studies have demonstrated (1) that older persons with poor health status and substantial morbidity have a higher RMR than healthier individuals of the same age and sex and (2) that a high RMR is an independent risk factor for mortality and may contribute to the weight loss that often accompanies severe illness. Finally, for reasons that are not yet completely clear but certainly involve changes in the biomechanical characteristics of movement, older age, pathology, and physical impairment increase the energy cost of motor activities such as walking. Overall, older individuals with multiple chronic conditions have low available energy levels and require more energy both at rest and during physical activity.

Table 11-2 Approach to Assessment of the Four Aging Domains
Body composition: anthropometrics (weight, height, BMI, waist circumference, arm and leg circumference, skin folds); imaging (CT, MRI, DEXA); other (hydrostatic weighing).
Energetics: self-reported questionnaires investigating physical activity, sense of fatigue/exhaustion, and exercise tolerance; performance-based tests of physical function; treadmill testing of oxygen consumption during walking; objective measures of physical activity (accelerometers, doubly labeled water).
Homeostatic regulation: nutritional biomarkers (e.g., vitamins, antioxidants); baseline levels of biomarkers and hormones; inflammatory markers (e.g., ESR, CRP, IL-6, TNF-α); response to provocative tests, such as the oral glucose tolerance test, the dexamethasone test, and others.
Neurodegeneration: objective assessment of gait, balance, reaction time, and coordination; standard neurologic exam, including assessment of global cognition[a]; MRI, fMRI, PET, and other dynamic imaging techniques.
[a] E.g., Mini-Mental State; Montreal Cognitive Assessment.
Abbreviations: BMI, body mass index; CRP, C-reactive protein; DEXA, dual-energy x-ray absorptiometry; ESR, erythrocyte sedimentation rate; fMRI, functional MRI; IL-6, interleukin 6; PET, positron emission tomography; TNF-α, tumor necrosis factor α.

Figure 11-8 Longitudinal changes of weight, body composition, and waist circumference over the life span, estimated in 1167 participants in the Baltimore Longitudinal Study of Aging. Lean body mass (LBM) and fat mass were estimated with dual-energy x-ray absorptiometry. (Source: The Baltimore Longitudinal Study of Aging 2010; unpublished data.)

Figure 11-9 Cross-sectional differences and longitudinal changes in muscle strength over a 27-year follow-up (survivors, n = 3680; nonsurvivors, n = 3680). Note that persons who died during the follow-up had lower baseline muscle strength. (From T Rantanen et al: J Appl Physiol 85:2047, 1998.)

Figure 11-10 Longitudinal changes in aerobic capacity in participants in the Baltimore Longitudinal Study of Aging. (From JL Fleg: Circulation 112:674, 2005.)

Figure 11-11 Changes in resting metabolic rate with aging. (Unpublished data from the Baltimore Longitudinal Study of Aging.)

Table 11-3 Hormones That Decrease, Remain Stable, and Increase With Aging

Figure 11-12 Longitudinal trajectory of bioavailable testosterone plasma concentration in the Baltimore Longitudinal Study of Aging (BLSA). The plot is based on 584 men who were 50 years or older, with a total of 1455 data points. The average follow-up for each subject was 3.2 years. (Figure created using unpublished data from the BLSA.)
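The chapter reports that RMR declines with aging but measures it by indirect calorimetry rather than by formula. Purely as an illustration of the age dependence (the equation is not from this chapter), the widely used Mifflin–St Jeor prediction equation estimates RMR from weight, height, age, and sex; note the negative age term:

```python
def rmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age_yr: float, male: bool) -> float:
    """Estimated resting metabolic rate in kcal/day (Mifflin-St Jeor equation).
    Illustrative only; the chapter itself estimates RMR by indirect calorimetry."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + 5.0 if male else base - 161.0

# The negative age coefficient lowers the estimate by ~50 kcal/day per decade,
# echoing the age-related decline in RMR described in the text.
print(rmr_mifflin_st_jeor(70, 175, 30, male=True))  # 1648.75
print(rmr_mifflin_st_jeor(70, 175, 80, male=True))  # 1398.75
```

Such prediction equations capture only part of the picture: as the text notes, illness can raise RMR above what age and body size alone would predict.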
Thus, sick older people may consume all their available energy performing the most basic ADLs, and the consequent fatigue and restriction may lead to a sedentary existence. Energy status can be assessed clinically by simply asking patients about their perceived level of fatigue during daily activities such as walking or dressing. Energy capacity can be assessed more precisely by exercise tolerance during a walking test or a treadmill test coupled with spirometry.

Homeostatic Regulation

The main signaling pathways that control homeostasis involve hormones, inflammatory mediators, and antioxidants; all are profoundly affected by aging. Sex hormone levels, such as testosterone in men (Fig. 11-12) and estrogen in women, decrease with age, while other hormone systems may change more subtly (Table 11-3). Most aging individuals, even those who remain healthy and fully functional, tend to develop a mild proinflammatory state characterized by high levels of proinflammatory markers, including interleukin 6 (IL-6) and C-reactive protein (CRP) (Fig. 11-13). Aging is also thought to be associated with increased oxidative stress damage, either because the production of reactive oxygen species increases or because antioxidant buffers are less effective. Since hormones, inflammatory markers, and antioxidants are integrated into complex signaling networks, the level of any single molecule may poorly reflect the overall degree of dysregulation. For example, taken one at a time, levels of testosterone, dehydroepiandrosterone (DHEA), and insulin-like growth factor 1 (IGF-1) do not predict mortality, but in combination they are highly predictive of longevity. This combination effect is especially strong in the setting of congestive heart failure. Similarly, several micronutrients, such as vitamins (especially vitamin D), minerals (selenium and magnesium), and antioxidants (vitamins C and E), also regulate aspects of metabolism. Low levels of these micronutrients have been associated with accelerated aging and a high risk of adverse outcomes. However, except for vitamin D, no clear evidence suggests that supplementation has positive effects on health. Unfortunately, no standard criteria exist that allow the detection and quantification of homeostatic dysregulation as a general phenomenon. Moreover, because a change in any one signal may lie far from the causative factors, the therapeutic strategy of single-molecule replacement may be ineffective or even counterproductive. The presence of such signaling networks and feedback loops may help explain why single-hormone “replacement therapy” for problems of aging has demonstrated little benefit. The focus of research in this area is now on multiple-hormone dysregulation.

Figure 11-13 Change in interleukin 6 (IL-6) and C-reactive protein (CRP) with aging, by age group, shown separately for men and women. Values are expressed as Z-scores (number of standard deviations from the sex-specific mean) to make them comparable. (From L Ferrucci et al: Blood 105:2294, 2005.)

Neurodegeneration

It was long generally believed that neurons stop reproducing shortly after birth and that their number declines throughout life. However, results from animal models and even some studies in humans suggest that neurogenesis in the hippocampus continues at low levels throughout life. Brain atrophy occurs with aging after the age of 60 years. Atrophy proceeds at varying rates in different parts of the brain (Fig. 11-14) and is often accompanied by an inflammatory response and microglial activation. Age-associated brain atrophy may contribute to age-related declines in cognitive and motor function. Atrophy may also be a factor in some brain diseases that can occur with aging, such as mild cognitive impairment, in which persons have mild but detectable impairments on tests of cognition but no severe disability in daily activities. In mild cognitive impairment, atrophy has been found mostly in the prefrontal cortex and hippocampus, but these findings are not specific and their diagnostic utility is unclear (Fig. 11-15).

Other neurophysiologic changes in the brain frequently occur with aging and may contribute to cognitive decline. Functional imaging studies have shown that some older people have diminished coordination between the brain regions responsible for higher-order cognitive functions and that such diminished coordination is correlated with poor cognitive performance. In young healthy individuals, the brain activity associated with executive cognitive functions (e.g., problem-solving, decision-making) is very well localized; in contrast, in healthy older individuals, the pattern of cortical activation is more diffuse. Brain pathology has typically been associated with specific diseases; amyloid plaques and neurofibrillary tangles are considered the pathologic hallmarks of Alzheimer’s disease. However, these pathologic markers have been found at autopsy in many older individuals who had normal cognition, as assessed by extensive testing in the year before death. Taken together, trends in brain changes with aging suggest that some neurophysiologic manifestations are compensatory adaptations rather than primary contributors to age-related declines. Because the brain is capable of reorganization and compensation, extensive neurodegeneration may not be clinically evident. Therefore, early detection requires careful testing. Clinically, cortical and subcortical changes are reflected in the high prevalence of “soft,” nonspecific neurologic signs, often reflected in slow and unstable gait, poor balance, and slow reaction times. These movement changes can be elicited more overtly with “dual tasks,” in which a cognitive and a motor task are performed simultaneously. In a simple version of a dual task, when an older adult has to stop walking in order to talk, an increased risk of falls can be predicted. Poor dual-task performance has been interpreted as a marker of reduced capacity for parallel information processing, such that simultaneous processing is more constrained. Beyond the brain, the spinal cord also experiences changes after the age of 60 years, including reduced numbers of motor neurons and damage to myelin. The motor neurons that survive compensate by increasing their complexity and by serving larger motor units. As motor units become larger, they decline in number at a rate of ∼1% per year, starting after the third decade. These larger motor units contribute to reductions in fine-motor control and manual dexterity. Age-related changes also occur in the autonomic nervous system, affecting cardiovascular and splanchnic function.

Figure 11-14 Five-year decline in mean volumes of different brain regions, measured in standard deviation (SD) units (Cohen’s d). The primary visual cortex shows the least average shrinkage, and the prefrontal and inferior parietal cortex and hippocampus show the most average shrinkage. (From N Raz et al: Ann N Y Acad Sci 1097:84, 2007.)

Figure 11-15 Longitudinal changes of regional brain volumes in normal aging and mild cognitive impairment (MCI). (From I Driscoll et al: Neurology 72:1906, 2009.)

Systemic Changes Coexisting With and Affecting One Another • The Phenotype of Aging: The Final Common Pathway of Systemic Interaction

While age-related system changes have been described individually, in reality these changes develop in parallel and affect one another through many feed-forward and feedback loops. Some systemic interactions are well understood, while others are under investigation. For example, body composition interacts with energy balance and signaling. Higher lean body mass increases energy consumption and improves insulin sensitivity and carbohydrate metabolism. Higher fat mass, especially visceral fat mass, is the culprit in the metabolic syndrome and is associated with low testosterone levels, high sex hormone–binding globulin levels, and increased levels of proinflammatory markers such as CRP and IL-6. Altered signaling can affect neurodegeneration; insulin resistance and adipokines such as leptin and adiponectin are associated with declines in cognitive function. Combined with loss of motor neurons and dysfunction of the motor unit, a state of inflammation and reduced levels of testosterone and IGF-1 have been linked to accelerated decline of muscle mass and strength. Normal intersystem coordination is also affected by aging. The hypothalamus normally functions as a central regulator of metabolism and energy use and coordinates physiologic responses of the entire organism through hormonal signaling; aging-related changes in the hypothalamus alter this control. The central nervous system (CNS) also controls adaptive sympathetic/parasympathetic activity, so that age-related CNS degeneration may have implications for autonomic function.

The phenotype that results from the aging process is characterized by increased susceptibility to diseases, high risk of multiple coexisting diseases, impaired response to stress (including limited ability to heal or recover after an acute disease), emergence of “geriatric syndromes” (characterized by stereotyped clinical manifestations but multifactorial causes), altered response to treatment, high risk of disability, and loss of personal autonomy with all its psychological and social consequences. In addition, these key aging processes may interfere with the typical pathophysiology of specific diseases, thereby altering expected clinical manifestations and confounding diagnosis. Clinically, patients may present with obvious problems within only one of these domains, but, since systems interact, all four main domains should be evaluated and considered potential therapeutic targets. When patients present with obvious problems in multiple main systems affected by aging, they tend toward extreme degrees of susceptibility and loss of resilience, a condition that is globally referred to as frailty.

Biologic Underpinnings of the Domains of the Aging Phenotype

The changes that occur with aging encompass multiple physiologic systems. Although they are often described in isolation, they are likely attributable to progressive dysfunction of fundamental housekeeping mechanisms of cellular physiology. An important goal of future research is to connect the aging phenotype in humans to theories of aging that have largely been developed from studies in cell or animal models. If the main theories of aging could be operationalized into assessments that are feasible in humans, it would be possible to test the hypothesis that some of these processes are correlated with all the domains of the aging phenotype, above and beyond chronologic age. Review of the biologic theories (hallmarks) of aging provides an excellent template for a working hypothesis that, at least theoretically, could be tested in longitudinal studies. Candidate mechanisms of mammalian aging include genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication.

Frailty

Frailty has been described as a physiologic syndrome that is characterized by decreased reserve and diminished resistance to stressors, that results from cumulative decline across multiple physiologic systems, and that causes vulnerability to adverse outcomes and a high risk of death.
A proposed “phenotype” definition characterized by weight loss, fatigue, impaired grip strength, diminished physical activity, and slow gait has shown good internal consistency and strong predictive validity and has been used in many clinical and epidemiologic studies. An alternative approach, the Frailty Index, assesses cumulative physiologic and functional burden. When combined with a structured clinical assessment (the Comprehensive Geriatric Assessment), the Frailty Index can be applied in clinical settings and has low rates of missing data; it predicts survival in community-dwelling older people as well as survival, length of stay, and discharge location in acute-care settings. Regardless of the definition, an extensive body of literature shows that older persons who are considered frail by any definition have overt changes in the same four main processes: body composition, homeostatic dysregulation, energetic failure, and neurodegeneration—the characteristics of the aging phenotype. A classic clinical case would be an older woman with sarcopenic obesity characterized by increased body fat and decreased muscle (body composition changes); extremely low exercise tolerance and extreme fatigue (energetic failure); high insulin levels, low IGF-1 levels, inadequate intake of calories, and low levels of vitamins D and E and carotenoids (signal dysregulation); and memory problems, slow gait, and unstable balance (neurodegeneration). This woman is likely to exhibit all the manifestations of frailty, including a high risk of multiple diseases, disability, urinary incontinence, falls, delirium, depression, and other geriatric syndromes. The biologic process underlying a particular “aging theory” would be expected to be more advanced in this woman than chronologic age alone would suggest.
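The two operational definitions above can be sketched in a few lines of code. In the phenotype approach the five components are counted; the cutoff of three or more for "frail" (with one or two often labeled "prefrail") follows common usage after Fried and colleagues and is an assumption, not stated in this chapter. The Frailty Index is simply the proportion of measured deficits that are present:

```python
def frailty_phenotype(weight_loss: bool, fatigue: bool, weak_grip: bool,
                      low_activity: bool, slow_gait: bool) -> str:
    """Count the five phenotype components. The >=3 cutoff for 'frail' is a
    common operationalization (after Fried et al.), assumed, not from the text."""
    n = sum([weight_loss, fatigue, weak_grip, low_activity, slow_gait])
    return "frail" if n >= 3 else "prefrail" if n >= 1 else "robust"

def frailty_index(deficits_present: int, deficits_measured: int) -> float:
    """Frailty Index: cumulative burden as a proportion of measured deficits."""
    return deficits_present / deficits_measured

print(frailty_phenotype(True, True, True, False, False))  # frail
print(round(frailty_index(12, 40), 2))  # 0.3
```

The two approaches are complementary: the phenotype is categorical and quick to score, while the index yields a continuous measure of cumulative burden.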
A goal of future research in geriatric medicine that has strong potential for clinical translation is to demonstrate that the hypothetical patient described above is biologically older, according to robust biomarkers of biologic aging, than would be estimated from chronologic age alone. Conceptualizing frailty through the four main underlying processes is a step in this direction that stems from accumulated evidence and recognizes the heterogeneity and dynamic nature of the aging phenotype. Aging is universal but proceeds at highly variable rates, with wide heterogeneity in the emergence of the aging phenotype. Thus, the question is not whether an older patient is frail, but rather whether the severity of frailty is beyond the threshold of clinical and behavioral relevance. Understanding frailty through the lens of four interacting underlying processes also provides an interface with diseases that, like aging itself, affect the aging phenotype. For example, congestive heart failure is associated with low energy availability, multiple hormonal derangements, and a proinflammatory state, thereby contributing to frailty severity. Parkinson’s disease provides an example of neurodegeneration that, in an advanced state, affects body composition, energy metabolism, and homeostatic signaling, resulting in a syndrome that closely resembles frailty. Diabetes is especially important to aging and frailty because it adversely affects body composition, energy metabolism, homeostatic regulation, and neuronal integrity. Accordingly, a number of studies have found that type 2 diabetes is a strong risk factor for frailty and for many of its consequences. Since disease and aging interact, careful and appropriate treatment of disease is critical to prevent or reduce frailty.

Consequences of Aging Processes, the Aging Phenotype, and Frailty

While the pathophysiology of frailty is still being elucidated, its consequences have been well characterized in prospective studies.
Four main consequences are important for clinical practice: (1) ineffective or incomplete homeostatic response to stress, (2) multiple coexisting diseases (multi- or comorbidity) and polypharmacy, (3) physical disability, and (4) the so-called geriatric syndromes. We will briefly address each of these consequences.

Low Resistance to Stress

Frailty can be considered a progressive loss of reserve in multiple physiologic functions. At an early stage and in the absence of stress, mildly frail older individuals may appear to be normal. However, they have reduced ability to cope with challenges, such as acute diseases, traumas, surgical procedures, or chemotherapy. Acute illness involving a hospital stay is associated with undernutrition and inactivity, which sometimes may be of such magnitude that the residual muscle mass fails to meet the minimal requirement for walking. Even when nutrition is reinstated, energy reserves may be insufficient to adequately rebuild muscle mass. Older persons have a reduced ability to tolerate infections, in part because they are less able than younger people to mount a dynamic inflammatory response to vaccination or infectious exposure; thus, infections are more likely to become severe and systemic and to resolve more slowly. In the context of tolerance to stress, assessing aspects of frailty can help estimate the individual’s ability to withstand the rigors of aggressive treatments and to respond to interventions aimed at infection, as well as the caregiver’s ability to anticipate and prevent complications of hospitalization, and generally to estimate prognosis. Accordingly, treatment plans may be adjusted to improve tolerance and safety; bed rest and hospitalization should be used sparingly; and infections should be prevented, anticipated, and managed assertively.

Comorbidity and Polypharmacy

Older age is associated with high rates of many chronic diseases (Fig. 11-4).
Thus, not unexpectedly, the percentage of individuals affected by multiple medical conditions (co- or multimorbidity) also increases with age. In frail older individuals, comorbidity occurs at higher rates than would be expected from the combined probability of the component conditions. It is likely that frailty and comorbidity affect each other, so that multiple diseases contribute to frailty and frailty increases susceptibility to diseases. Clinically, patients with multiple conditions present unique diagnostic and treatment challenges. Standard diagnostic criteria may not be informative because there are additional confusing signs and symptoms. A classic example is the coexistence of deficiencies in iron and vitamin B12, creating an apparently normocytic anemia. The risk/benefit ratio for many medical and surgical treatment options may be reduced in the face of other diseases. Drug treatment planning is made more complex because comorbid diseases may affect the absorption, volume of distribution, protein binding, and, especially, elimination of many drugs, leading to fluctuation in therapeutic levels and increased risk of under- or overdosing. Drug excretion is affected by renal and hepatic changes with aging that may not be detectable with the usual clinical tests. Formulas for estimating glomerular filtration rate in older patients are available, whereas the estimation of changes in hepatic excretion remains a challenge. Patients with many diseases are usually prescribed multiple drugs, especially when they are cared for by multiple specialists who do not communicate. The risk of adverse drug reactions, drug–drug interactions, and poor compliance increases geometrically with the number of drugs prescribed and with the severity of frailty.
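Two quantitative points in this passage can be made concrete. The text says that formulas for estimating renal function in older patients exist without naming one; the Cockcroft–Gault creatinine-clearance estimate used below is a common choice but is an assumption on the part of this sketch, not specified by the chapter. Likewise, the "geometric" growth of interaction risk with drug count can be glimpsed from the number of potential drug pairs, n(n−1)/2, a simple combinatorial illustration rather than a claim from the chapter:

```python
def creatinine_clearance_cg(age_yr: float, weight_kg: float,
                            serum_cr_mg_dl: float, female: bool) -> float:
    """Cockcroft-Gault estimated creatinine clearance (mL/min).
    Illustrative: the chapter notes such formulas exist but does not name one."""
    crcl = (140 - age_yr) * weight_kg / (72 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

def interaction_pairs(n_drugs: int) -> int:
    """Potential pairwise drug-drug interactions: n choose 2, showing why risk
    grows faster than linearly with the number of drugs prescribed."""
    return n_drugs * (n_drugs - 1) // 2

# The same serum creatinine of 1.0 mg/dL reflects far less clearance at 80 than at 40:
print(round(creatinine_clearance_cg(40, 70, 1.0, False), 1))  # 97.2
print(round(creatinine_clearance_cg(80, 70, 1.0, False), 1))  # 58.3
print([interaction_pairs(n) for n in (2, 5, 10)])  # [1, 10, 45]
```

The age term in the numerator is why a "normal" serum creatinine can mask substantially reduced drug elimination in an older patient.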
Some general rules to minimize the chances of adverse drug events are as follows: (1) Always ask patients to bring in all medications, including prescription drugs, over-the-counter products, vitamin supplements, and herbal preparations (the “brown bag test”). (2) Screen for unnecessary drugs; those without a clear indication should be discontinued. (3) Simplify the regimen in terms of number of agents and schedules, try to avoid frequent changes, and use single-daily-dose regimens whenever possible. (4) Avoid drugs that are expensive or not covered by insurance whenever possible. (5) Minimize the number of drugs to those that are absolutely essential, and always check for possible interactions. (6) Make sure that the patient or an available caregiver understands the administered regimen, and provide legible written instructions. (7) Schedule periodic medication reviews.

Disability and Impaired Recovery from Acute-Onset Disability

The prevalence of disability in self-care and home management increases steeply with aging and tends to be higher among women than among men (Fig. 11-5). Physical and cognitive function in older persons reflects overall health status and predicts health care utilization, institutionalization, and mortality more accurately than any other known biomedical measure. Thus, assessment of function and disability and prediction of the risk of disability are cornerstones of geriatric medicine. Frailty, regardless of the criteria used for its definition, is a robust and powerful risk factor for disability. Because of this strong relationship, measures of physical function and mobility have been proposed as standard criteria for frailty. However, disability occurs late in the frailty process, after reserve and compensation are exhausted. Early in the development of frailty, body composition changes, reductions in fitness, homeostatic dysregulation, and neurodegeneration can begin without affecting daily function.
As opposed to disability in younger persons, in which the rule is to look for a clear dominant cause, disability in frail older persons is almost always multifactorial. Multiple disrupted aging processes are usually involved, even when the precipitating cause seems unique. Excess fat mass, poor muscle strength, reduced lean body mass, poor fitness, reduced energy efficiency, poor nutritional intake, low circulating levels of antioxidant micronutrients, high levels of proinflammatory markers, objective signs of neurologic dysfunction, and cognitive impairment all contribute to disability. The multifactorial nature of disability in frail older persons reduces the capacity for compensation and interferes with functional recovery. For example, a small lacunar stroke that causes problems with balance in a young hypertensive individual can be overcome by standing and walking with the feet further apart, a strategy that requires brain adaptation, strong muscles, and a high energy capacity. The same small lacunar stroke may cause catastrophic disability in an older person already affected by neurodegeneration and weakness, who is less able to compensate. As a consequence, interventions aimed at preventing and reducing disability in older persons should have a dual focus on both the precipitating cause and the systems needed for compensation. In the case of the lacunar stroke, interventions to promote mobility function might include stroke prevention, balance rehabilitation, and strength training. As a rule of thumb, the assessment of contributing causes and the design of intervention strategies for disability in older persons should always consider the four main aging processes that contribute to frailty. One of the most popular approaches to disability measurement is a modification of the International Classification of Impairments, Disabilities and Handicaps (World Health Organization, 1980) proposed by the Institute of Medicine (1992). 
This classification infers a causal pathway in four steps: pathology (diseases), impairment (the physical manifestation of diseases), functional limitation (global functions such as walking, grasping, climbing stairs), and disability (ability to fulfill social roles in the environment). In practice, the assessment of functional limitation and disability is performed either by (1) self-reported questionnaire concerning the degree of ability to perform basic self-care or more complex ADLs or by (2) performance-based measures of physical function that assess specific domains, such as balance, gait, manual dexterity, coordination, flexibility, and endurance. A concise list of standard tools that can be used to assess physical function in older persons is provided in Table 11-4. In 2001, the WHO officially endorsed a new classification system, the International Classification of Functioning, Disability and Health, known more commonly as the ICF. In the ICF, health measures are classified from bodily, individual, and societal perspectives by means of two lists: a list of body functions and structure and a list of domains of activity and participation. Since an individual’s functioning and disability occur in a context, the ICF also includes a list of environmental factors. A detailed list of codes that allow the classification of body functions, activities, and participation is being developed. The ICF system is widely implemented in Europe and is gaining popularity in the United States. Whatever classification system is used, the health care provider should try to identify factors that can be modified to minimize disability. Many of these factors are discussed in this chapter. Important issues related to aging that are not addressed in this chapter but are covered elsewhere include dementia (Chap. 35) and other cognitive disorders including aphasia, memory loss, and other focal cerebral disorders (Chap. 36). 
Geriatric Syndromes The term geriatric syndrome encompasses clinical conditions that are frequently encountered in older persons; have a deleterious effect on function and quality of life; have a multifactorial pathophysiology, often involving systems unrelated to the apparent chief symptom; and are manifested by stereotypical clinical presentations. The list of geriatric syndromes includes incontinence, delirium, falls, pressure ulcers, sleep disorders, problems with eating or feeding, pain, and depressed mood. In addition, dementia and physical disability are sometimes considered to be geriatric syndromes. The term syndrome is somewhat misleading in this context since it is most commonly used to describe a pattern of symptoms and signs that have a single underlying cause. The term geriatric syndromes, by contrast, refers to “multifactorial health conditions that occur when the accumulated effects of impairments in multiple systems render an older person vulnerable to situational challenges.” According to this definition, geriatric syndromes reflect the complex interactions between an individual’s vulnerabilities and exposure to stressors or challenges. This definition aligns well with the concept that geriatric syndromes should be considered as phenotypic consequences of frailty and that a limited number of shared risk factors contribute to their etiology. Indeed, in various combinations and frequencies, virtually all geriatric syndromes are characterized by body composition changes, energy gaps, signaling disequilibria, and neurodegeneration. For example, detrusor (bladder) underactivity is a multifactorial geriatric condition that contributes to urinary retention in the frail elderly. It is characterized by detrusor muscle loss, fibrosis, and axonal degeneration.
A proinflammatory state and a lack of estrogen signaling cause bladder muscle loss and detrusor underactivity, while a chronic urinary tract infection may cause detrusor hyperactivity; all of these factors may contribute to urinary incontinence. Because of limited space, only delirium, falls, chronic pain, incontinence, and anorexia are addressed here. Interested readers are referred to textbooks on geriatric medicine for a discussion of other geriatric syndromes.
Delirium (See also Chap. 34) Delirium is an acute disorder of disturbed attention that fluctuates with time. It affects 15–55% of hospitalized older patients. Delirium has previously been considered to be transient and reversible and a normal consequence of surgery, chronic disease, or infections in older people. Delirium may be associated with a substantially increased risk for dementia and is an independent risk factor for morbidity, prolonged hospitalization, and death. These associations are particularly strong in the oldest old. Fig. 11-16 shows an algorithm for assessment and management of delirium in hospitalized older patients. The clinical presentation of delirium is heterogeneous, but frequent features are (1) a rapid decline in the level of consciousness, with difficulty focusing, shifting, or sustaining attention; (2) cognitive change (rambling incoherent speech, memory gaps, disorientation, hallucinations) not explained by dementia; and (3) a medical history suggestive of preexisting cognitive impairment, frailty, and comorbidity.
The strongest predisposing factors for delirium are dementia, any other condition associated with chronic or transient neurologic dysfunction (neurologic diseases, dehydration, alcohol consumption, psychoactive drugs), and sensory (visual and hearing) deprivation; these associations suggest that delirium reflects an underlying susceptibility of brain function (neurodegeneration or transient neuronal impairment) that makes decompensation unavoidable in the face of a stressful event. Many stressful conditions have been implicated as precipitating factors, including surgery; anesthesia; persistent pain; treatment with opiates, narcotics, or anticholinergics; sleep deprivation; immobilization; hypoxia; malnutrition; and metabolic and electrolyte derangements. Both the occurrence and the severity of delirium can be reduced by anticipatory screening and preventive strategies targeting the precipitating causes. The Confusion Assessment Method is a simple, validated tool for screening in the hospital setting. The three pillars of treatment are (1) immediate identification and treatment of precipitating factors, (2) withdrawal of drugs that may have promoted the onset of delirium, and (3) supportive care, including management of hypoxia, hydration and nutrition, mobilization, and environmental modifications. Whether patients who are cared for in special delirium units have better outcomes than those who are not is still in question. Physical restraints should be avoided because they tend to increase agitation and injury. Whenever possible, drug treatment should be avoided because it may prolong or aggravate delirium in some cases. When drug treatment is necessary, the treatment of choice is low-dose haloperidol. It remains difficult to reduce delirium in patients with acute illness or other stressful conditions. Interventions based on dietary supplementation or careful use of pain medications and sedatives in pre- and postoperative older patients have been only partially successful.
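The Confusion Assessment Method mentioned above diagnoses delirium when both core features (acute onset with a fluctuating course, and inattention) are present together with at least one of two supporting features (disorganized thinking or an altered level of consciousness). A minimal sketch of that logic, with illustrative function and parameter names:

```python
def cam_positive(acute_onset_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """Confusion Assessment Method diagnostic algorithm.

    Delirium is suggested when both core features (acute onset with a
    fluctuating course, and inattention) are present together with at
    least one of the two supporting features (disorganized thinking or
    an altered level of consciousness).
    """
    core = acute_onset_fluctuating and inattention
    supporting = disorganized_thinking or altered_consciousness
    return core and supporting

# A patient with fluctuating inattention and rambling, incoherent
# speech (disorganized thinking) screens positive; inattention alone,
# without acute onset, does not.
```

The bedside instrument itself scores each feature from a structured interview; this sketch captures only the final combination rule.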
Chapter 11 Clinical Problems of Aging
Figure 11-16 Algorithm depicting assessment and management of delirium in hospitalized older patients. At hospital admission, current and recent changes in mental status are assessed and then monitored. Preventive measures address risk factors, improve communication and the environment, avoid psychotropic drugs, and favor early discharge. Acutely impaired mental status prompts cognitive assessment and delirium evaluation, with depression, mania, and psychosis ruled out. Once delirium is confirmed, predisposing and precipitating risk factors are identified and addressed, supportive care is provided to prevent complications, and symptoms of delirium are managed. (Modified from SK Inouye: N Engl J Med 354:1157, 2006.)
Falls and Balance Disorders Unstable gait and falls are serious concerns in the older adult because they lead not only to injury but also to restricted activity, increased health care utilization, and even death. Like all geriatric syndromes, problems with balance and falls tend to be multifactorial and are strongly connected with the disrupted aging systems that contribute to frailty. Poor muscle strength, neural damage in the basal ganglia and cerebellum, diabetes, and peripheral neuropathy are all recognized risk factors for falls. Therefore, evaluation and management require a structured multisystem approach that spans the entire frailty spectrum and beyond. Accordingly, interventions to prevent or reduce instability and falls usually require a mix of medical, rehabilitative, and environmental modification approaches. Guidelines for the evaluation and management of falls, released by the American Geriatrics Society, recommend asking all older adults about falls and perceived gait instability (Fig. 11-17).
Patients with a positive history of multiple falls as well as persons who have sustained one or more injurious falls should undergo an evaluation of gait and balance as well as a targeted history and physical examination to detect sensory, nervous system, brain, cardiovascular, and musculoskeletal contributors.
Figure 11-17 Algorithm depicting assessment and management of falls in older patients. All patients are asked about falls in the past year. A report of more than one fall, difficulty with gait or balance, or medical attention sought because of a fall triggers a multifactorial fall risk assessment covering history of falls, medications, gait and balance, cognition, visual acuity, lower limb joint function, neurologic impairment, muscle strength, heart rate (HR) and rhythm, postural hypotension, feet and footwear, and environmental hazards. Identified risks are then addressed: medications are modified, an individualized exercise program is prescribed, vision impairment is treated, postural hypotension and HR and rhythm abnormalities are managed, vitamin D is supplemented, foot and shoe problems are addressed, environmental hazards are reduced, and education and training in self-management and behavioral change are provided. A single fall in the past 6 months prompts a check for gait or balance problems; patients with no falls and no gait or balance problems receive a recommended fall prevention, education, and exercise program that includes balance, gait, and coordination training as well as strength training. (From American Geriatrics Society and British Geriatrics Society: Clinical Practice Guideline for the Prevention of Falls in Older Persons. New York, American Geriatrics Society, 2010.)
Interventions depend on the factors identified but often include medication adjustment, physical therapy, and home modifications. Meta-analyses of strategies to reduce the risk of falls have found that multifactorial risk assessment and management as well as individually targeted therapeutic exercise are effective. Supplementation with vitamin D at 800 IU daily may also help reduce falls, especially in older persons with low vitamin D levels.
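The screening branch of the AGS/BGS falls algorithm can be sketched as a small triage function. The parameter names and return strings are illustrative, and the real guideline applies clinical judgment at each step:

```python
def falls_triage(falls_past_year: int,
                 gait_or_balance_difficulty: bool,
                 sought_care_for_fall: bool) -> str:
    """Simplified triage per the AGS/BGS falls screening algorithm.

    More than one fall, gait or balance difficulty, or a fall that led
    to medical attention triggers the full multifactorial fall risk
    assessment; a single fall prompts a gait and balance evaluation;
    otherwise routine prevention education and exercise are recommended.
    """
    if (falls_past_year > 1
            or gait_or_balance_difficulty
            or sought_care_for_fall):
        return "multifactorial fall risk assessment"
    if falls_past_year == 1:
        return "evaluate gait and balance"
    return "fall prevention education and exercise"
```

A patient reporting two falls, or one injurious fall that required medical attention, is routed to the full assessment; a single uncomplicated fall first gets a gait and balance check.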
Persistent Pain Pain from multiple sources is the most common symptom reported by older adults in primary care settings and is also common in acute-care, long-term-care, and palliative-care settings. Acute pain and cancer pain are beyond the scope of this chapter. Persistent pain results in restricted activity, depression, sleep disorders, and social isolation and increases the risk of adverse events due to medication. The most common causes of persistent pain are musculoskeletal problems, but neuropathic pain and ischemic pain occur frequently, and multiple concurrent causes are often found. Alterations in mechanical and structural elements of the skeleton commonly lead to secondary problems in other parts of the body, especially soft tissue or myofascial components. A structured history should elicit information about the quality, severity, and temporal patterns of pain. Physical examination should focus on the back and joints, on trigger points and periarticular areas, and on possible evidence of radicular neurologic patterns and peripheral vascular disease. Pharmacologic management should follow standard progressions, as recommended by the World Health Organization (Chap. 18), and adverse effects on the CNS, which are especially likely in this population, must be monitored. For persistent pain, regular analgesic schedules are appropriate and should be combined with nonpharmacologic approaches such as splints, physical exercise, heat, and other modalities. A variety of adjuvant analgesics such as antidepressants and anticonvulsants may be used; again, however, effects on reaction time and alertness may be dose limiting, especially in older persons with cognitive impairment. Joint or soft tissue injections may be helpful. Education of the patient and mutually agreed-upon goal setting are important since pain usually is not fully eliminated but rather is controlled to a tolerable level that maximizes function while minimizing adverse effects. 
Urinary Incontinence Urinary incontinence—the involuntary leakage of urine—is highly prevalent among older persons (especially women) and has a profound negative impact on quality of life. Approximately 50% of American women will experience some form of urinary incontinence over a lifetime. Increasing age, white race, childbirth, obesity, and medical comorbidity are all risk factors for urinary incontinence. The three main clinical forms of urinary incontinence are as follows: (1) Stress incontinence is the failure of the sphincteric mechanism to remain closed when there is a sudden increase in intraabdominal pressure, such as a cough or sneeze. In women this condition is due to insufficient strength of the pelvic floor muscles, while in men it is almost exclusively secondary to prostate surgery. (2) Urge incontinence is the loss of urine accompanied by a sudden sensation of need to urinate and inability to control it and is due to detrusor muscle overactivity (lack of inhibition) caused by loss of neurologic control or local irritation. (3) Overflow incontinence is characterized by urinary dribbling, either constantly or for some period after urination. This condition is due to impaired detrusor contractility (usually as a result of denervation, for example, in diabetes) or bladder outlet obstruction (prostate hypertrophy in men and cystocele in women). Thus, it is not surprising that the pathogenesis of urinary incontinence is connected to the disrupted aging systems that contribute to frailty, body composition changes (atrophy of the bladder and pelvic floor muscle), and neurodegeneration (both central and peripheral nervous systems). Frailty is a strong risk factor for urinary incontinence.
Figure 11-18 Prevalence of urinary incontinence: rates of urge, stress, and mixed incontinence, by age group, in a sample of 3552 women. (From JL Melville et al: Arch Intern Med 165:537, 2005.)
Indeed, older women are more likely to have mixed (urge + stress) incontinence than any pure form (Fig. 11-18). As with the other geriatric syndromes, urinary incontinence derives from a predisposing condition superimposed on a stressful precipitating factor. Accordingly, treatment of urinary incontinence should address both. The first line of treatment is bladder training combined with pelvic muscle exercise (Kegel exercises), sometimes supplemented by electrical stimulation. Women with possible vaginal or uterine prolapse should be referred to a specialist. Urinary tract infections should be investigated and, when present, treated. A long list of medications can precipitate urinary incontinence, including diuretics, antidepressants, sedative hypnotics, adrenergic agonists or blockers, anticholinergics, and calcium channel blockers. Whenever possible, these medications should be discontinued. Until recently, it was believed that oral or local estrogen treatment alleviated the symptoms of urinary incontinence in postmenopausal women, but this notion is now controversial. Antimuscarinic drugs such as tolterodine, darifenacin, and fesoterodine are modestly effective for mixed-etiology incontinence, but all of these drugs can affect cognition and so must be used with caution and with monitoring of cognitive status. In some cases, surgical treatment should be considered. Chronic catheterization has many adverse effects and should be limited to chronic urinary retention that cannot be managed in any other way. Bacteriuria always occurs with chronic catheterization and should be treated only if it is symptomatic. Bacterial communities isolated from the urine of women with urinary incontinence appear to differ with the type of incontinence; this observation suggests that the bladder microbiota may play a role in urinary incontinence. If so, this microbial population would be a potential target for treatment.
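The three clinical forms described above are distinguished by the circumstances of leakage, and mixed presentations are common in older women. A toy classification sketch (symptom field names are illustrative; real diagnosis rests on history, examination, and sometimes urodynamics):

```python
def classify_incontinence(leak_with_cough_or_sneeze: bool,
                          sudden_urge_with_leak: bool,
                          continuous_or_post_void_dribbling: bool) -> set:
    """Map symptom patterns to the three main clinical forms.

    Stress: leakage with abrupt rises in intraabdominal pressure.
    Urge: leakage with a sudden, uncontrollable need to urinate.
    Overflow: constant or post-void dribbling.
    A result with more than one element corresponds to mixed
    incontinence, the most common pattern in older women.
    """
    forms = set()
    if leak_with_cough_or_sneeze:
        forms.add("stress")
    if sudden_urge_with_leak:
        forms.add("urge")
    if continuous_or_post_void_dribbling:
        forms.add("overflow")
    return forms
```

For example, leakage both on coughing and with sudden urges yields the mixed stress-plus-urge pattern shown in Fig. 11-18.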
Undernutrition and Anorexia There is strong evidence that the healthy mammalian life span is greatly affected by changes in the activity of central nutrient-sensing mechanisms, especially those that involve the mechanistic target of rapamycin (mTOR) network. Polymorphic variations in the gene that encodes mTOR in humans are associated with longevity; this association suggests that the role of nutrient signaling in healthy aging may be conserved in humans. Normal aging is associated with a decline in food intake that is more marked in men than in women. To some extent, food intake is reduced because energy demand declines as a result of the combination of a lower level of physical activity, a decline in lean body mass, and slowed rates of protein turnover. Other contributors to decreased food intake include losses of taste sensation, reduced stomach compliance, higher circulating levels of cholecystokinin, and, in men, low testosterone levels associated with increased leptin. When food intake decreases to a level below the reduced energy demand, the result is energy malnutrition. Malnutrition in older persons should be considered a geriatric syndrome because it is the result of intrinsic susceptibility due to aging, complicated by multiple superimposed precipitating causes. Many older individuals tend to consume a monotonous diet that lacks sufficient fresh food, fruits, and vegetables, so that intake of important micronutrients is inadequate. Undernutrition in older people is associated with multiple adverse health consequences, including impaired muscle function, decreased bone mass, immune dysfunction, anemia, reduced cognitive function, poor wound healing, delayed recovery from surgery, and increased risk of falls, disability, and death. Despite these serious potential consequences, undernutrition often remains unrecognized until it is well advanced because weight loss tends to be ignored by both patients and physicians.
Muscle wasting is a frequent feature of weight loss and malnutrition that is often associated with loss of subcutaneous fat. The main causes of weight loss are anorexia, cachexia, sarcopenia, malabsorption, hypermetabolism, and dehydration, almost always in various combinations. Many of these causes can be detected and corrected. Cancer accounts for only 10–15% of cases of weight loss and anorexia in older people. Other important causes include a recent move to a long-term-care setting, acute illness (often with inflammation), hospitalization with bed rest for as little as 1–2 days, depression, drugs that cause anorexia and nausea (e.g., digoxin and antibiotics), swallowing problems, oral infections, dental problems, gastrointestinal pathology, thyroid and other hormonal problems, poverty, and isolation, with reduced access to food. Weight loss may also result from dehydration, possibly related to excess sweating, diarrhea, vomiting, or reduced fluid intake. Early identification is paramount and requires careful weight monitoring. Patients or caregivers should be taught to record weight regularly at home, the patient should be weighed at each clinical encounter, and a record of serial weights should be maintained in the medical record. If malnutrition is suspected, formal assessment should begin with a standardized screening instrument such as the Mini Nutritional Assessment, the Malnutrition Universal Screening Tool, or the Simplified Nutritional Appetite Questionnaire. The Mini Nutritional Assessment includes questions on appetite, timing of eating, frequency of meals, and taste. Its sensitivity and specificity are >75% for future weight loss of ≥5% of body weight in older people. Many nutritional supplements are available, and their use should be initiated early to prevent more severe weight loss and its consequences. When an older patient has malnutrition, the diet should be liberalized and dietary restrictions should be lifted as much as possible. 
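The serial-weight monitoring described above reduces to a simple screening calculation. The 5% cutoff below comes from the weight-loss criterion against which the Mini Nutritional Assessment was validated; the function and parameter names are illustrative:

```python
def percent_weight_loss(baseline_kg: float, current_kg: float) -> float:
    """Percent weight loss relative to a baseline weight."""
    return 100.0 * (baseline_kg - current_kg) / baseline_kg

def flag_malnutrition_risk(serial_weights_kg: list,
                           threshold_pct: float = 5.0) -> bool:
    """Flag when the latest recorded weight shows a loss of at least
    threshold_pct from the earliest recorded weight, prompting formal
    assessment with a standardized screening instrument."""
    return percent_weight_loss(serial_weights_kg[0],
                               serial_weights_kg[-1]) >= threshold_pct
```

For instance, a decline from 80 kg to 75.5 kg is a 5.6% loss and would be flagged for formal nutritional assessment, whereas a 0.5-kg fluctuation would not.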
Nutritional supplements should be given between meals to avoid interference with food intake at mealtime. Limited evidence supports the use of any pharmacologic intervention to treat weight loss. The two antianorexic drugs most often prescribed in older persons are megestrol and dronabinol. Both can increase weight; however, the gain is mostly fat, not muscle, and both drugs have serious side effects. Dronabinol is an excellent drug for use in the palliative-care setting. There is little evidence that intentional weight loss in overweight older people prolongs life. Weight loss after the age of 70 should probably be limited to persons with extreme obesity and should always be medically supervised. Common diseases in older adults may have unexpected and atypical clinical features. Most age-related changes in clinical presentation, evolution, and response to treatment are due to interaction of disease pathophysiology with age-related system dysregulation. Some diseases, such as Parkinson’s disease (PD) and diabetes, directly affect aging systems and therefore have a devastating impact on frailty and its consequences. Parkinson’s Disease (See also Chap. 449) Most cases of PD begin after the age of 60 years, and the incidence increases up to the age of ∼80 years. Brain aging and PD have long been thought to be related. The nigrostriatal system deteriorates with aging, and many older persons tend to develop a mild form of movement disorder characterized by bradykinesia and stooped posture that mimics mild PD. It is interesting that, in PD, older age at presentation is associated with a more severe and rapid decline in gait, balance, posture, and cognition. These age-related motor and cognitive manifestations of PD tend to be poorly responsive to levodopa or dopamine agonist treatments, especially in the oldest old.
In contrast, age at presentation does not correlate with the severity and progression of other classic PD symptoms, such as tremor, rigidity, and bradykinesia, nor does it affect the response of these symptoms to levodopa. The pattern of PD features in older persons suggests that late-life PD may reflect a failure of the normal cellular compensatory mechanisms in vulnerable brain regions and that this vulnerability is increased by age-related neurodegeneration, making PD symptoms particularly resistant to levodopa treatment. In addition to motor symptoms, older PD patients tend to have reduced muscle mass (sarcopenia), eating disorders, and poor levels of fitness. Accordingly, PD is a powerful risk factor for frailty and its consequences, including disability, comorbidity, falls, incontinence, chronic pain, and delirium. Use of levodopa and dopaminergic agonists by older PD patients requires complex dosing schedules; therefore, slow-release preparations are preferred. Both dopaminergic and anticholinergic agents increase the risk of confusion and hallucinations. Use of anticholinergic agents should generally be avoided. For dopaminergic agents, cognitive side effects can be dose limiting. Diabetes (See also Chaps. 417–419) Both the incidence and the prevalence of diabetes mellitus increase with aging. Among persons ≥65 years old, the prevalence is ∼12% (with higher figures among African Americans and Hispanics), reflecting the effects of population aging and the obesity epidemic. Diabetes affects all four main aging systems that contribute to frailty. Obesity, especially visceral obesity, is a strong risk factor for insulin resistance, the metabolic syndrome, and diabetes. Diabetes is associated with both reduced muscle mass and accelerated rates of muscle wasting. Diabetic patients have an elevated resting metabolic rate (RMR) and poor fitness. Diabetes is associated with multiple hormone dysregulation, a proinflammatory state, and excess oxidative stress.
Finally, diabetes-induced neurodegeneration involves both the central and peripheral nervous systems. Given these characteristics, it is not surprising that patients with diabetes mellitus are more likely to be frail and at high risk of developing physical disability, depression, delirium, cognitive impairment, urinary incontinence, injurious falls, and persistent pain. Thus, the assessment of older diabetic patients should always include screening and risk factor evaluation for these conditions. In young and adult patients, the main treatment goal has been strict glycemic control aimed at bringing the hemoglobin A1c level to within normal values (i.e., ≤6%). In older patients, however, the risk/benefit ratio is optimized by the use of less aggressive glycemic targets. In fact, in the context of a randomized clinical trial, strict glycemic control was associated with a higher mortality rate. Thus, a more reasonable goal for hemoglobin A1c is 7% or slightly below. Treatment goals are altered further in frail older adults who have a high risk of complications of hypoglycemia and a life expectancy of <5 years. In these cases, an even less stringent target (e.g., 7–8%) should be considered, with A1c monitored every 6 or 12 months. Hypoglycemia is particularly difficult to identify in older diabetic patients because autonomic and nervous system symptoms occur at a lower blood sugar level than in younger diabetics, although the metabolic reactions and neurologic injury effects are similar in the two age groups. The autonomic symptoms of hypoglycemia are often masked by beta blockers. Frail older adults are at even higher risk for serious hypoglycemia than are healthier, higher-functioning older adults. In older patients with type 2 diabetes, a history of severe hypoglycemic episodes is associated with higher mortality risk, more severe microvascular complications, and greater risk of dementia.
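The tiered hemoglobin A1c goals above can be summarized in a small selector. The thresholds follow the chapter's guidance, but treating frailty or a life expectancy under 5 years as independent triggers for the relaxed band is a simplifying assumption, and the function is an illustrative sketch rather than a clinical tool:

```python
def a1c_target_percent(frail, life_expectancy_years=None):
    """Pick a hemoglobin A1c goal band for an older diabetic patient.

    Most older patients: 7% or slightly below. Frail patients, or those
    with a life expectancy under 5 years, warrant a less stringent
    7-8% band (with A1c then monitored every 6 or 12 months).
    """
    limited_life_expectancy = (life_expectancy_years is not None
                               and life_expectancy_years < 5)
    if frail or limited_life_expectancy:
        return "7-8%"
    return "<=7%"
```

A robust 70-year-old would keep the standard goal, while a frail nursing-home resident would be managed to the relaxed band to limit hypoglycemia risk.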
Thus, patients with suspected or documented episodes of hypoglycemia, especially those who are frail or disabled, need more liberal glucose-control goals, careful education about hypoglycemia, and close follow-up by the health care provider, possibly in the presence of a caregiver. Chlorpropamide has a prolonged half-life, particularly in older adults, and should be avoided because it is associated with a high risk of hypoglycemia. Metformin should be used with caution and only in patients free of severe renal insufficiency. Renal insufficiency should be assessed by a calculated glomerular filtration rate or, in very old patients who have reduced muscle mass, by a direct measure of creatinine clearance from a 24-h urine collection. Lifestyle changes in diet and exercise and modest weight loss can prevent or delay diabetes in high-risk individuals and are substantially more effective than metformin treatment. The risk of type 2 diabetes decreased by 58% in a study of diet and exercise, and this effect was similar at all ages and in all ethnic groups. The risk reduction with standard care plus metformin was 31%. APPROACH TO THE CARE OF OLDER PERSONS Effects of Altered Pathophysiology and Multimorbidity on Clinical Decision-Making The fact that older people are more likely to have atypical manifestations of disease and multiple coexisting conditions has serious consequences for the availability of high-quality evidence for medical practice and clinical decision-making. Randomized clinical trials—the basis for high-quality evidence—have tended to exclude older persons with atypical manifestations of disease, multimorbidity, or functional limitations. Across a wide range of conditions, the average age of a clinical trial participant is 20 years younger than the average age of the population with the condition.
Clinical practice guidelines and care-quality metrics are focused on one condition at a time and tend not to consider the impact of comorbid conditions on the safety and feasibility of each set of recommendations. These disease-centric recommendations tend to result in fragmented care. Therefore, clinical decision-making with regard to an older person with multiple chronic conditions must be based on the weighing of several influential factors, including the patient’s priorities and preferences, potential beneficial and harmful interactions among the several conditions and their treatments, life expectancy, and practical issues such as transportation or the ability to cooperate with the test or treatment. Organization of Health Care for Older Adults The complex underlying physiology of aging leads to multiple coexisting medical problems and functional consequences that are often chronic, with recurrent exacerbations and remissions. Combined with the social consequences of aging (e.g., widowhood or lack of an available caregiver), these medical and functional factors mandate that older adults must sometimes use non-medical services to meet functional needs. The end result of these medical, functional, and social factors is that older adults use many health care and social support services in a variety of settings. Thus, it is incumbent on the internist, whether a generalist or specialist, to be familiar with the scope of settings and services that are used by their patients. For many settings, Medicare reimbursement requires a medical order based on specific indications, so the hospitalist or referring physician must be familiar with eligibility requirements. Table 11-5 summarizes the types of services and payment sources for common settings of care. Older adults who have experienced new disability during a hospitalization are eligible for rehabilitation services.
Inpatient rehabilitation requires at least 3 h per day of active rehabilitative activity and is limited to specific diagnoses. More and more rehabilitative services are provided in postacute settings, where the required intensity of service is less stringent. Postacute settings are also used for complex nursing services such as provision and supervision of long-term parenteral medication use or wound care. Under current policy, Medicare covers postacute care only if there is an eligible medical, nursing, or rehabilitation service. Otherwise, nursing home care is not covered by Medicare and must be paid for with personal assets until all resources have been consumed, at which time Medicaid coverage becomes available. Medicaid is a state–federal partnership whose greatest single expenditure is nursing home care. Thus, the need for chronic daily assistance with personal care in a nursing home consumes a large part of most state Medicaid budgets as well as personal assets. Accordingly, alternatives to chronic nursing-home care are of great interest to states, patients, and families. 
Table 11-5 Types of Services and Payment Sources for Common Settings of Care: settings range from hospital, emergency, and inpatient rehabilitation services through outpatient care, postacute care, nursing homes, assisted living, home health, and day programs, with payment sources including Medicare, Medicaid, private insurance, long-term care insurance, and private payment.
Some states have developed Medicaid-funded day-care programs, sometimes based on the Program for All-Inclusive Care of the Elderly (PACE) model. In this situation, older adults who are eligible for both Medicare and Medicaid and who are otherwise eligible for chronic nursing-home care can receive coordinated medical and functional services in conjunction with a day-care program. For most older adults, a caregiver must be available to provide assistance on weeknights and weekends. Under current policy, home health services do not provide chronic functional assistance in the home but rather are targeted at episodes of care supplied by medical or rehabilitative services for older adults who are considered homebound.
Some community agencies, whether private or public, can provide homemaker and home aide services to assist the homebound older adult with functional needs, but there may be income requirements, or expensive private payment may be needed. Within the past decade, there has been tremendous growth in a broad spectrum of assisted-living settings. Such settings do not offer the degree of 24-h nursing supervision or personal aide care that is provided in traditional nursing homes, although distinctions are becoming blurred. Most assisted-living settings provide meals, medication supervision, and homemaking services, but they often require that residents be capable of transporting themselves to a congregate meal site. Moreover, most of these settings accept only private payment from residents and their families and thus are hard to access for older adults with limited resources. Some states are exploring coverage for lower-cost residential-care services such as family care homes. 
Models of Care Coordination
The complexity and fragmentation of care for older adults result in both increased costs and increased risk of iatrogenic complications such as missed diagnoses, adverse medication events, further worsening of function, and even death. These serious consequences have led to a strong interest in care coordination through teams of providers, with the goals of reducing unnecessary costs and preventing adverse events. Table 11-6 lists examples of evidence-based models of care coordination that were recommended in a 2009 Institute of Medicine report. (Table 11-6, Chapter 11, Clinical Problems of Aging: Evidence-Based Models of Care Coordination for Older Patients [Institute of Medicine, 2009]. Source: Reproduced with permission from C Boult et al: J Am Geriatr Soc 57:2328, 2009.) While not mentioned as a specific type of team care, modern information technology offers substantial promise in providing consistent, readily available information across settings and providers. 
All such team programs are targeted at prevention and management of chronic and complex problems. Evidence from clinical trials or quasi-experimental studies supports the benefit of each model, and for some models data are sufficient to support meta-analyses. The evidence for benefit is not always consistent between studies or types of care but includes some support for improved quality of care, quality of life, function, and survival and for reduced health care costs and utilization. Some models of care are disease-specific and focus on common chronic conditions such as diabetes mellitus, congestive heart failure, chronic obstructive pulmonary disease, and stroke. One challenge in the use of these models is that a majority of older adults will have multiple simultaneous conditions and thus will need services from multiple programs that may not communicate among themselves. Most models of care are difficult to implement in today’s health care system because nonphysician services are not reimbursed, nor is physician effort that is not incorporated into “face-to-face” time. Thus, several models have been developed largely by the Department of Veterans Affairs Health Care System, Medicare Managed Care providers, and other sponsoring agencies. Medicare has developed a series of demonstration projects that can expand the evidence base and serve policy makers. More recently, there has been an effort to promote coordinated care through Accountable Care Organizations and patient-centered “medical homes.” However, the processes and outcomes of such care must evolve from disease-specific indicators to more general markers, such as optimizing functional status, focusing on outcomes that are important to patients, and minimizing inappropriate care. 
Prevention
In older adults, preventive tests and interventions are less consistently recommended for all asymptomatic patients. 
The guidelines fail to address the influence of health status and life expectancy on recommendations, although the benefits of prevention are clearly affected by life expectancy. For example, in most types of cancer, screening provides no benefit in patients with a life expectancy of ≤5 years. More research is needed to build an appropriate evidence base for age- and life expectancy–adapted preventive services. Health behavior modification, especially increasing physical activity and improving nutrition, probably has the greatest potential to promote healthy aging. 
• Osteoporosis: Bone mineral density (BMD) should be measured at least once after the age of 65 years. There is little evidence that regular monitoring of BMD improves the prediction of fractures. Because of limitations in the precision of dual-energy x-ray absorptiometry, the minimal interval between evaluations should be 2–3 years. 
• Hypertension: Blood pressure should be determined at least once a year, or more often in patients with hypertension. 
• Diabetes: Serum glucose and hemoglobin A1c should be checked every 3 years, or more often in patients who are obese or hypertensive. 
• Lipid disorders: A lipid panel should be done every 5 years, or more often in patients with diabetes or any cardiovascular disease. 
• Colorectal cancer: A fecal occult blood test and a sigmoidoscopy or colonoscopy should be done on a regular schedule up to the age of 75 years. No consensus guidelines exist for these tests in patients >75 years of age. 
• Breast cancer: Mammography should be done every 2 years between the ages of 50 and 74 years. No consensus guidelines exist for mammography after the age of 75 years. 
• Cervical cancer: A Pap smear should be done every 3 years up to the age of 65 years. 
• Influenza: Immunize annually. 
• Shingles: Administer herpes zoster vaccine once after the age of 50 years. 
• Pneumonia: Administer pneumococcal vaccine once at the age of 65 years. 
• Myocardial infarction: Prescribe daily aspirin for patients with prevalent cardiovascular disease or with a poor cardiovascular risk profile. 
• Osteoporosis: Prescribe calcium at 1200 mg daily and vitamin D at ≥800 IU daily. 
Exercise
Rates of regular physical activity decrease with age and are lowest in older persons. This situation is unfortunate because increased physical activity has clear benefits in older adults, improving physical function, muscle strength, mood, sleep, and metabolic risk profile. Some studies suggest that exercise can improve cognition and prevent dementia, but this association is still controversial. Exercise programs, both aerobic and strength training, are feasible and beneficial even in very old and frail individuals. Regular, moderate-intensity exercise can reduce the rate of age-associated decline in physical function. The Centers for Disease Control and Prevention recommends that older persons spend at least 150 min per week in moderate-intensity aerobic activity (e.g., brisk walking) and engage in muscle-strengthening activities that work all major muscle groups (legs, hips, back, abdomen, chest, shoulders, and arms) at least 2 days a week. In the absence of contraindications, more intense and prolonged physical activity provides greater benefits. Frail and sedentary persons may need supervision, at least at the start of an exercise program, to avoid falls and exercise-related injuries. 
Nutrition
Older persons are particularly vulnerable to malnutrition, and many problems that affect older patients can be addressed by dietary modification. As mentioned above, nutrient sensing is the major factor associated with differential longevity in several animal models, including mammals. Treatment with rapamycin, the only pharmacologic intervention that has been associated with longevity, affects nutrient sensing. 
Nevertheless, there are almost no evidence-based guidelines for individualizing dietary modifications based on differing health outcomes in the elderly. Even when guidelines exist, older people tend to be poorly compliant with dietary recommendations. Basic principles of a healthy diet that are also valid for older persons are as follows: 
• Encourage the consumption of fruits and vegetables; they are rich in micronutrients, minerals, and fiber. Whole grains are also a good source of fiber. Keep in mind that some of these foods are costly and thus less accessible to low-income persons. 
• Emphasize that good hydration is essential. Fluid intake should be at least 1000 mL daily. 
• Encourage the use of fat-free and low-fat dairy products, legumes, poultry, and lean meats. 
• Encourage consumption of fish at least once a week, since there is strong epidemiologic evidence that fish consumption is associated with a lowered risk of Alzheimer’s disease. 
• Match intake of energy (calories) to overall energy needs in order to maintain a healthy weight and BMI (20–27). Recommend moderate (5–10%) caloric restriction only when the BMI is >27. 
• Limit consumption of foods with high caloric density, high sugar content, and high salt content. 
• Limit the intake of foods with a high content of saturated fatty acids and cholesterol. 
• Limit alcohol consumption (one drink per day or less). 
• Introduce vitamin D–fortified foods and/or vitamin D supplements into the diet. Older persons who have little exposure to UVB radiation are at risk of vitamin D insufficiency. 
• Make sure that the diet includes adequate food-related intake of magnesium, vitamin A, and vitamin B12. 
• Monitor daily protein intake, which, in healthy older persons, should be in the range of 1.0–1.2 g/kg of body weight. Higher daily protein intake (i.e., ≥1.2–1.5 g/kg) is advised for those who are exercising or are affected by chronic diseases, especially if these conditions are associated with chronic inflammation. 
Older people with severe kidney disease (i.e., an estimated glomerular filtration rate of <30 mL/min per 1.73 m2) who are not on dialysis should limit protein intake. 
• For constipation, increase dietary fiber intake to 10–25 g/d and fluid intake to 1500 mL/d. A bulk laxative (methylcellulose or psyllium) can be added. 
Novel Interventions to Modify Aging Processes
Aging is a complex process with multiple manifestations at the molecular, cellular, organ, and whole-organism levels. The nature of the aging process is still not fully understood, but aging and its effects may be modulated by appropriate interventions. Dietary and genetic alterations can increase healthy life span and prevent the development of dysregulated systems and the aging phenotype in laboratory model organisms. The mechanisms responsible for life span extension are “food” sensors typically activated in situations of food shortage, such as the IGF/insulin and TOR (target of rapamycin) pathways. Accordingly, a reduction in food intake without malnutrition extends the life span by 10–50% in diverse organisms, from yeasts to rhesus monkeys. Mechanisms that mediate the effects of caloric restriction are under intensive study because they are potential targets for interventions aimed at counteracting the emergence of the aging phenotype and its deleterious effects in humans. For example, resveratrol, a natural compound found in grape skin that mimics some of the effects of dietary restriction, increases longevity and improves health in mice fed a high-fat diet but has little effect on mice fed a standard diet. Other compounds that potentially mimic caloric restriction are being developed and tested. A high prevalence of IGF-1 receptor gene mutations has been found in Ashkenazi Jewish centenarians and in other long-lived individuals, suggesting that downregulation of IGF-1 signaling may promote human longevity. 
A 20-year period of 30% dietary restriction applied to adult rhesus monkeys was associated with reduced cardiovascular and cancer morbidity, reduced signs of aging, and greater longevity, although a second such study did not find increased longevity. In humans, dietary restriction is effective against obesity and reduces insulin resistance, inflammation, blood pressure, CRP level, and intima-media thickness of the carotid arteries. However, the beneficial effects of dietary restriction in humans are still controversial, and some potential negative effects have not been sufficiently studied. An interesting effect of caloric restriction in humans is mitochondrial biogenesis. Mitochondrial dysfunction has emerged as a potentially important underlying contributor to aging. Reduced expression of mitochondrial genes is a strongly conserved feature of aging across different species. Mitochondria are the machinery for chemical energy production, and brain and muscle are particularly susceptible to defective mitochondrial function. Thus, declining mitochondrial function may be a direct cause of at least three of the main dysregulated systems contributing to the phenotype of aging. This chapter has touched on some of the fundamental aspects of human aging, focusing mostly on those that are relevant to the care of older patients. Many aspects of geriatric medicine have not been addressed because of space limitations. Valuable topics not considered include details of comprehensive geriatric assessment, depression and anxiety, hypertension, orthostatic hypotension, dementia, vision and hearing impairment, osteoporosis, palliative care, prostate disorders, foot problems, and women’s health. Some of these topics are treated extensively elsewhere in this text, sometimes with comments on age-specific issues. The universal process of aging is becoming better understood. 
There appear to be shared underlying cellular and molecular processes that induce widespread dysregulation in key systems. This dysregulation contributes to clinical manifestations of a frailty phenotype and can be used to understand how to evaluate and manage the older patient. We would like to thank our colleagues who provided criticisms and suggestions for improvement of this chapter. We are particularly indebted to Dr. John Morley for his valuable suggestions regarding the section on undernutrition and anorexia. 
Chapter 12e The Safety and Quality of Health Care
David W. Bates
Safety and quality are two of the central dimensions of health care. In recent years it has become easier to measure safety and quality, and it is increasingly clear that performance in both dimensions could be much better. The public is—with good justification—demanding measurement and accountability, and payment for services will increasingly be based on performance in these areas. Thus, physicians must learn about these two domains, how they can be improved, and the relative strengths and limitations of the current ability to measure them. Safety and quality are closely related but do not completely overlap. The Institute of Medicine has suggested in a seminal series of reports that safety is the first part of quality and that the health care system must first and foremost guarantee that it will deliver safe care, although quality is also pivotal. In the end, it is likely that more net clinical benefit will be derived from improving quality than from improving safety, though both are important and safety is in many ways more tangible to the public. The first section of this chapter will address issues relating to the safety of care, and the second will cover quality of care. 
SAFETY IN HEALTH CARE
Safety Theory and Systems Theory
Safety theory clearly points out that individuals make errors all the time. Think of driving home from the hospital: you intend to stop and pick up a quart of milk on the way home but find yourself entering your driveway without realizing how you got there. Everybody uses low-level, semiautomatic behavior for many activities in daily life; this kind of error is called a slip. Slips occur often during care delivery—e.g., when people intend to write an order but forget because they have to complete another action first. Mistakes, by contrast, are errors of a higher level; they occur in new or nonstereotypic situations in which conscious decisions are being made. An example would be dosing of a medication with which a physician is not familiar. The strategies used to prevent slips and mistakes are often different. Systems theory suggests that most accidents occur as the result of a series of small failures that happen to line up in an individual instance so that an accident can occur (Fig. 12e-1). It also suggests that most individuals in an industry such as health care are trying to do the right thing (e.g., deliver safe care) and that most accidents thus can be seen as resulting from defects in systems. Systems should be designed both to make errors less likely and to identify those that do inevitably occur. 
Figure 12e-1 “Swiss cheese” diagram. Hazards penetrate successive layers of defenses, barriers, and safeguards through holes, some due to active failures and others due to latent conditions (resident “pathogens”). Reason argues that most accidents occur when a series of “latent failures” are present in a system and happen to line up in a given instance, resulting in an accident. Examples of latent failures in the case of a fall might be that the unit is unusually busy and the floor happens to be wet. (Adapted from J Reason: BMJ 320:768, 2000; with permission.) 
Factors That Increase the Likelihood of Errors
Many factors ubiquitous in health care systems can increase the likelihood of errors, including fatigue, stress, interruptions, complexity, and transitions. The effects of fatigue in other industries are clear, but its effects in health care have been more controversial until recently. For example, the accident rate among truck drivers increases dramatically if they work over a certain number of hours in a week, especially with prolonged shifts. A recent study of house officers in the intensive care unit demonstrated that they were about one-third more likely to make errors when they were on a 24-h shift than when they were on a schedule that allowed them to sleep 8 h the previous night. The Accreditation Council for Graduate Medical Education has moved to address this issue by putting in place the 80-h workweek. Although this stipulation is a step forward, it does not address the most important cause of fatigue-related errors: extended-duty shifts. High levels of stress and heavy workloads also can increase error rates. Thus, in extremely high-pressure situations, such as cardiac arrests, errors are more likely to occur. Strategies such as using protocols in these settings can be helpful, as can simple recognition that the situation is stressful. Interruptions also increase the likelihood of error and occur frequently in health care delivery. It is common to forget to complete an action when one is interrupted partway through it by a page, for example. Approaches that may be helpful in this area include minimizing interruptions and setting up tools that help define the urgency of an interruption. Complexity represents a key issue that contributes to errors. Providers are confronted by streams of data (e.g., laboratory tests and vital signs), many of which provide little useful information but some of which are important and require action or suggest a specific diagnosis. Tools that emphasize specific abnormalities or combinations of abnormalities may be helpful in this area. Transitions between providers and settings are also common in health care, especially with the advent of the 80-h workweek, and generally represent points of vulnerability. 
Tools that provide structure in exchanging information—for example, when transferring care between providers—may be helpful. The Frequency of Adverse Events in Health Care Most large studies focusing on the frequency and consequences of adverse events have been performed in the inpatient setting; some data are available for nursing homes, but much less information is available about the outpatient setting. The Harvard Medical Practice Study, one of the largest studies to address this issue, was performed with hospitalized patients in New York. The primary outcome was the adverse event: an injury caused by medical management rather than by the patient’s underlying disease. In this study, an event either resulted in death or disability at discharge or prolonged the length of hospital stay by at least 2 days. Key findings were that the adverse event rate was 3.7% and that 58% of the adverse events were considered preventable. Although New York is not representative of the United States as a whole, the study was replicated later in Colorado and Utah, where the rates were essentially similar. Since then, other studies using analogous methodologies have been performed in various developed nations, and the rates of adverse events in these countries appear to be ~10%. Rates of safety issues appear to be even higher in developing and transitional countries; thus, this is clearly an issue of global proportions. The World Health Organization has focused on this area, forming the World Alliance for Patient Safety. In the Harvard Medical Practice Study, adverse drug events (ADEs) were most common, accounting for 19% of all adverse events, and were followed in frequency by wound infections (14%) and technical complications (13%). Almost half of adverse events were associated with a surgical procedure. Among nonoperative events, 37% were ADEs, 15% were diagnostic mishaps, 14% were therapeutic mishaps, 13% were procedure-related mishaps, and 5% were falls. 
ADEs have been studied more than any other error category. Studies focusing specifically on ADEs have found that they appear to be much more common than was suggested by the Harvard Medical Practice Study, although most other studies use more inclusive criteria. Detection approaches in the research setting include chart review and the use of a computerized ADE monitor, a tool that explores the database and identifies signals that suggest an ADE may have occurred. Studies that use multiple approaches find more ADEs than does any individual approach, and this discrepancy suggests that the true underlying rate in the population is higher than would be identified by a single approach. About 6–10% of patients admitted to U.S. hospitals experience an ADE. Injuries caused by drugs are also common in the outpatient setting. One study found a rate of 21 ADEs per 100 patients per year when patients were called to assess whether they had had a problem with one of their medications. The severity level was lower than in the inpatient setting, but approximately one-third of these ADEs were preventable. The period immediately after a patient is discharged from the hospital appears to be very risky. A recent study of patients hospitalized on a medical service found an adverse event rate of 19%; about one-third of those events were preventable, and another one-third were ameliorable (i.e., they could have been made less severe). ADEs were the single leading error category. 
Prevention Strategies
Most work on strategies to prevent adverse events has targeted specific types of events in the inpatient setting, with nosocomial infections and ADEs having received the most attention. Nosocomial infection rates have been reduced greatly in intensive care settings, especially through the use of checklists. 
For ADEs, several strategies have been found to reduce the medication error rate, although it has been harder to demonstrate that they reduce the ADE rate overall, and no studies with adequate power to show a clinically meaningful reduction have been published. Implementation of checklists to ensure that specific actions are carried out has had a major impact on rates of catheter-associated bloodstream infection and ventilator-associated pneumonia, two of the most serious complications occurring in intensive care units. The checklist concept is based on the premise that several specific actions can reduce the frequency of these issues; when these actions are all taken for every patient, the result has been an extreme reduction in the frequency of the associated complication. These practices have been disseminated across wide areas, in particular in the state of Michigan. Computerized physician order entry (CPOE) linked with clinical decision support reduces the rate of serious medication errors, defined as those that harm someone or have the potential to do so. In one study, CPOE, even with limited decision support, decreased the serious medication error rate by 55%. CPOE can prevent medication errors by suggesting a default dose, ensuring that all orders are complete (e.g., that they include dose, route, and frequency), and checking orders for allergies, drug–drug interactions, and drug–laboratory issues. In addition, clinical decision support can suggest the right dose for a patient, tailoring it to level of renal function and age. In one study, patients with renal insufficiency received the appropriate dose only one-third of the time without decision support, whereas that fraction increased to approximately two-thirds with decision support; moreover, with such support, patients with renal insufficiency were discharged from the hospital half a day earlier. As of 2009, only ~15% of U.S. 
hospitals had implemented CPOE, but many plan to do so and will receive major financial incentives for achieving this goal. Another technology that can improve medication safety is bar coding linked with an electronic medication administration record. Bar coding can help ensure that the right patient gets the right medication at the right time. Electronic medication administration records can make it much easier to determine what medications a patient has received. Studies to assess the impact of bar coding on medication safety are under way, and the early results are promising. Another technology to improve medication safety is “smart pumps.” These pumps can be set according to which medication is being given and at what dose; the health care professional will receive a warning if too high a dose is about to be administered. The National Safety Picture Several organizations, including the National Quality Forum and the Joint Commission, have made recommendations for improving safety. In particular, the National Quality Forum has released recommendations to U.S. hospitals about what practices will most improve the safety of care, and all hospitals are expected to implement these recommendations. Many of these practices arise frequently in routine care. One example is “readback,” the practice of recording all verbal orders and immediately reading them back to the physician to verify the accuracy of what was heard. Another is the consistent use of standard abbreviations and dose designations; some abbreviations and dose designations are particularly prone to error (e.g., 7U may be read as 70). Measurement of Safety Measuring the safety of care is difficult and expensive, since adverse events are, fortunately, rare. Most hospitals rely on spontaneous reporting to identify errors and adverse events, but the sensitivity of this approach is very low, with only ~1 in 20 ADEs reported. 
Promising research techniques involve searching the electronic record for signals suggesting that an adverse event has occurred. These methods are not yet in wide use but will probably be used routinely in the future. Claims data have been used to identify the frequency of adverse events; this approach works much better for surgical care than for medical care and requires additional validation. The net result is that, except for a few specific types of events (e.g., falls and nosocomial infections), hospitals have little idea about the true frequency of safety issues. Nonetheless, all providers have the responsibility to report problems with safety as they are identified. All hospitals have spontaneous reporting systems, and, if providers report events as they occur, those events can serve as lessons for subsequent improvement. 
Conclusions about Safety
It is abundantly clear that the safety of health care can be improved substantially. As more areas are studied closely, more problems are identified. Much more is known about the epidemiology of safety in the inpatient setting than in outpatient settings. A number of effective strategies for improving inpatient safety have been identified and are increasingly being applied. Some effective strategies are also available for the outpatient setting. Transitions appear to be especially risky. Solutions to improving care often entail the consistent use of systematic techniques such as checklists and often involve leveraging information technology. Nevertheless, solutions will also include many other domains, such as human factors techniques, team training, and a culture of safety. 
QUALITY IN HEALTH CARE
Assessment of quality of care has remained somewhat elusive, although the tools for this purpose have increasingly improved. Selection of health care and measurement of its quality are components of a complex process. 
Quality Theory
Donabedian has suggested that quality of care can be categorized by type of measurement into structure, process, and outcome. Structure refers to whether a particular characteristic is applicable in a particular setting—e.g., whether a hospital has a catheterization laboratory or whether a clinic uses an electronic health record. Process refers to the way care is delivered; examples of process measures are whether a Pap smear was performed at the recommended interval or whether an aspirin was given to a patient with a suspected myocardial infarction. Outcome refers to what actually happens—e.g., the mortality rate in myocardial infarction. It is important to note that good structure and process do not always result in a good outcome. For instance, a patient may present with a suspected myocardial infarction to an institution with a catheterization laboratory and receive recommended care, including aspirin, but still die because of the infarction. Quality theory also suggests that overall quality will be improved more in the aggregate if the performance level of all providers is raised rather than if a few poor performers are identified and punished. This view suggests that systems changes are especially likely to be helpful in improving quality, since large numbers of providers may be affected simultaneously. The theory of continuous quality improvement suggests that organizations should be evaluating the care they deliver on an ongoing basis and continually making small changes to improve their individual processes. This approach can be very powerful if embraced over time. 
Factors Relating to Quality
Many factors can decrease the level of quality, including stress to providers, high or low levels of production pressure, and poor systems. Stress can have an adverse effect on quality because it can lead providers to omit important steps, as can a high level of production pressure. Low levels of production pressure sometimes can result in worse quality, as providers may be bored or have little experience with a specific problem. Poor systems can have a tremendous impact on quality, and even extremely dedicated providers typically cannot achieve high levels of performance if they are operating within a poor system. 
Data about the Current State of Quality
A study published by the RAND Corporation in 2006 provided the most complete picture of quality of care delivered in the United States to date. The results were sobering. The authors found that, across a wide range of quality parameters, patients in the United States received only 55% of recommended care overall; there was little variation by subtype, with scores of 54% for preventive care, 54% for acute care, and 56% for care of chronic conditions. The authors concluded that, in broad terms, the chances of getting high-quality care in the United States were little better than those of winning a coin flip. Work from the Dartmouth Atlas of Health Care evaluating geographic variation in use and quality of care demonstrates that, despite large variations in utilization, there is no positive correlation between the two variables at the regional level. An array of data demonstrate, however, that providers with larger volumes for specific conditions, especially surgical conditions, do have better outcomes. 
Strategies for Improving Quality and Performance
A number of specific strategies can be used to improve quality at the individual level, including rationing, education, feedback, incentives, and penalties. Rationing has been effective in some specific areas, such as persuading physicians to prescribe within a formulary, but it generally has been resisted. Many believe that incentives will prove to be a key to improving quality, especially if pay-for-performance with sufficient incentives is broadly implemented (see below). Penalties produce provider resentment and are rarely used in health care. Another set of strategies for improving quality involves changing the systems of care. An example would be introducing reminders about which specific actions need to be taken at a visit for a specific patient—a strategy that has been demonstrated to improve performance in certain situations, such as the delivery of preventive services. Another approach that has been effective is the development of “bundles,” or groups of quality measures that can be implemented together with a high degree of fidelity. A number of hospitals have implemented a bundle for ventilator-associated pneumonia in the intensive care unit that includes five measures (e.g., ensuring that the head of the bed is elevated). These hospitals have been able to improve performance substantially. Perhaps the most pressing need is to improve the quality of care for chronic diseases. The Chronic Care Model has been developed by Wagner and colleagues (Fig. 12e-3); it suggests that a combination of strategies is necessary (including self-management support, changes in delivery system design, decision support, and information systems) and that these strategies must be delivered by a practice team composed of several providers, not just a physician. 
Figure 12e-3 The Chronic Care Model. Community resources and policies and the health system’s organization of health care support self-management support, delivery system design, decision support, and clinical information systems; these enable productive interactions between an informed, activated patient and a prepared, proactive practice team, leading to improved outcomes. 
Available evidence about the relative efficacy of strategies in reducing HbA1c levels supports this general premise. It is especially notable that the outcome was the HbA1c level, as it has generally been much more difficult to improve outcome measures than process measures (such as whether HbA1c was measured). In this meta-analysis, a variety of strategies were effective, but the most effective ones were the use of team changes and the use of a case manager. When cost-effectiveness is considered in addition, it appears likely that an amalgam of strategies will be needed. However, the more expensive strategies, such as the use of case managers, probably will be implemented widely only if pay-for-performance takes hold. A number of specific tools have been developed to help improve process performance. One of the most important is the Plan-Do-Check-Act cycle (Fig. 12e-2). This approach can be used for “rapid cycle” improvement of a process—e.g., the time that elapses between a diagnosis of pneumonia and administration of antibiotics to the patient. Specific statistical tools, such as control charts, are often used in conjunction to determine whether progress is being made. Because most medical care includes one or many processes, this tool is especially important for improvement. 
Figure 12e-2 Plan-Do-Check-Act cycle. This approach can be used to improve a specific process rapidly. First, planning is undertaken, and several potential improvement strategies are identified. Next, these strategies are evaluated in small “tests of change.” “Checking” entails measuring whether the strategies have appeared to make a difference, and “acting” refers to acting on the results. 
National State of Quality Measurement
In the inpatient setting, quality measurement is now being performed by a very large proportion of hospitals for several conditions, including myocardial infarction, congestive heart failure, pneumonia, and surgical infection prevention; 20 measures are included in all. This is the result of the Hospital Quality Initiative, which represents a collaboration among many entities, 
Education is effective in the short run and is necessary for changing FIguRE 12e-3 The Chronic Care Model, which focuses on improvopinions, but its effect decays fairly rapidly with time. Feedback on ing care for chronic diseases, suggests that (1) delivery of high-quality performance can be given at either the group or the individual level. care requires a range of strategies that must closely involve and Feedback is most effective if it is individualized and is given in close engage the patient and (2) team care is essential. (From EH Wagner et al: temporal proximity to the original events. Incentives can be effective, Eff Clin Pract 1:2, 1998.) including the Hospital Quality Alliance, the Joint Commission, the National Quality Forum, and the Agency for Healthcare Research and Quality. The data are housed at the Center for Medicare and Medicaid Services, which publicly releases performance data on the measures on a website called Hospital Compare (www.cms.gov/Medicare/QualityInitiatives-Patient-Assessment-Instruments/HospitalQualityInits/ HospitalCompare.html). These data are reported voluntarily and are available for a very high proportion of the nation’s hospitals. Analyses demonstrate substantial regional variation in quality and important differences among hospitals. Analyses by the Joint Commission for similar indicators reveal that performance on measures by hospitals has improved over time and that, as might be hoped, lower performers have improved more than higher performers. Public Reporting Overall, public reporting of quality data is becoming increasingly common. There are now commercial websites that have quality-related data for most regions of the United States, and these data can be accessed for a fee. Similarly, national data for hospitals are available. The evidence to date indicates that patients have not made much use of such data but that the data have had an important effect on provider and organization behavior. 
Instead, patients have relied on provider reputation to make choices, partly because little information was available until very recently and the information that was available was not necessarily presented in ways that were easy for patients to access. Many authorities think that, as more information about quality becomes available, it will become increasingly central to patients’ choices about where to access care. Pay-for-Performance Currently, providers in the United States get paid exactly the same amount for a specific service, regardless of the quality of care delivered. The pay-for-performance theory suggests that, if providers are paid more for higher-quality care, they will invest in strategies that enable them to deliver that care. The current key issues in the pay-for-performance debate relate to (1) how effective it is, (2) what levels of incentives are needed, and (3) what perverse consequences are produced. The evidence on effectiveness is fairly limited, although a number of studies are ongoing. With respect to incentive levels, most quality-based performance incentives have accounted for merely 1–2% of total payment in the United States to date. In the United Kingdom, however, 40% of general practitioners’ salaries have been placed at risk according to performance across a wide array of parameters; this approach has been associated with substantial improvements in reported quality performance, although it is still unclear to what extent this change represents better performance versus better reporting. The potential for perverse consequences exists with any incentive scheme. One problem is that, if incentives are tied to outcomes, there may be a tendency to transfer the sickest patients to other providers and systems. Another concern is that providers will pay too much attention to quality measures with incentives and ignore the rest of the quality picture. The validity of these concerns remains to be determined. 
Nonetheless, under health care reform, the use of various pay-for-performance schemes is likely to increase.

The safety and quality of care in the United States could be improved substantially. A number of available interventions have been shown to improve the safety of care and should be used more widely; others are undergoing evaluation or soon will be. Quality also could be dramatically better, and the science of quality improvement continues to mature. Implementation of pay-for-performance should make it much easier for organizations to justify investments in improving safety and quality parameters, including health information technology. However, many improvements will also require changing the structure of care—e.g., moving to a more team-oriented approach and ensuring that patients are more involved in their own care. Health care reform is likely to result in increased use of pay-for-performance. Measures of safety are still relatively immature and could be made much more robust; it would be particularly useful if organizations had measures they could use in routine operations to assess safety at a reasonable cost. Although the quality measures available are more robust than those for safety, they still cover a relatively small proportion of the entire domain of quality, and more measures need to be developed. The public and payers are demanding better information about safety and quality as well as better performance in these areas. The clear implication is that these domains will have to be addressed directly by providers.

Primary Care in Low- and Middle-Income Countries
Tim Evans, Kumanan Rasanathan

The twentieth century witnessed the rise of an unprecedented global health divide. Industrialized or high-income countries experienced rapid improvement in standards of living, nutrition, health, and health care.
Meanwhile, in low- and middle-income countries with much less favorable conditions, health and health care progressed much more slowly. The scale of this divide is reflected in the current extremes of life expectancy at birth, with Japan at the high end (83 years) and Sierra Leone at the low end (47 years). This nearly 40-year difference reflects the daunting range of health challenges faced by low- and middle-income countries. These nations must deal not only with a complex mixture of diseases (both infectious and chronic) and illness-promoting conditions but also, and more fundamentally, with the fragility of the foundations underlying good health (e.g., sufficient food, water, sanitation, and education) and of the systems necessary for universal access to good-quality health care. In the last decades of the twentieth century, the need to bridge this global health divide and establish health equity was increasingly recognized. The Declaration of Alma Ata in 1978 crystallized a vision of justice in health, regardless of income, gender, ethnicity, or education, and called for "health for all by the year 2000" through primary health care. While much progress has been made since the declaration, at the end of the first decade and a half of the twenty-first century, much remains to be done to achieve global health equity. This chapter looks first at the nature of the health challenges in low- and middle-income countries that underlie the health divide. It then outlines the values and principles of a primary health care approach, with a focus on primary care services. Next, the chapter reviews the experience of low- and middle-income countries in addressing health challenges through primary care and a primary health care approach. Finally, the chapter identifies how current challenges and the global context provide an agenda and opportunities for the renewal of primary health care and primary care.
The term primary care has been used in many different ways: to describe a level of care or the setting of the health system, a set of treatment and prevention activities carried out by specific personnel, a set of attributes for the way care is delivered, or an approach to organizing health systems that is synonymous with the term primary health care. In 1996, the U.S. Institute of Medicine encompassed many of these different usages, defining primary care as "the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of personal health care needs, developing a sustained partnership with patients, and practicing in the context of family and community."1 We use this definition of primary care in this chapter. Primary care performs an essential function for health systems, providing the first point of contact when people seek health care, dealing with most problems, and referring patients onward to other services when necessary. As is increasingly evident in countries of all income levels, without strong primary care, health systems cannot function properly or address the health challenges of the communities they serve. Primary care is only one part of a primary health care approach. The Declaration of Alma Ata, drafted in 1978 at the International Conference on Primary Health Care in Alma Ata (now Almaty in Kazakhstan), identified many features of primary care as being essential to achieving the goal of "health for all by the year 2000." However, it also identified the need to work across different sectors, address the social and economic factors that determine health, mobilize the participation of communities in health systems, and ensure the use and development of technology that was appropriate in terms of setting and cost.

1Institute of Medicine. Primary Care: America's Health in a New Era (1996).
The declaration drew from the experiences of low- and middle-income countries in trying to improve the health of their people following independence. Commonly, these countries had built hospital-based systems similar to those in high-income countries. This effort had resulted in the development of high-technology services in urban areas while leaving the bulk of the population without access to health care unless they traveled great distances to these urban facilities. Furthermore, much of the population lacked access to basic public health measures. Primary health care efforts aimed to move care closer to where people lived, to ensure their involvement in decisions about their own health care, and to address key aspects of the physical and social environment essential to health, such as water, sanitation, and education. After the Declaration of Alma Ata, many countries implemented reforms of their health systems based on primary health care. Most progress involved strengthening of primary care services; unexpectedly, however, much of this progress was seen in high-income countries, most of which constructed systems that made primary care available at low or no cost to their entire populations and that delivered the bulk of services in primary care settings. This endeavor also saw the reinforcement of family medicine as a specialty to provide primary care services. Even in the United States (an obvious exception to this trend), it became clear that the populations of states with more primary care physicians and services were healthier than those with fewer such resources. Progress was also made in many low- and middle-income countries. However, the target of "health for all by the year 2000" was missed by a large margin.
The reasons were complex but partly entailed a general failure to implement all aspects of the primary health care approach—particularly work across sectors to address the social and economic factors that affect health, and provision of sufficient human and other resources to make possible the access to primary care attained in high-income countries. Furthermore, despite the consensus in Alma Ata in 1978, the global health community rapidly became fractured in its commitment to the far-reaching measures called for by the declaration. Economic recession tempered enthusiasm for primary health care, and momentum shifted to programs concentrating on a few priority measures for child survival, such as immunization, oral rehydration, breastfeeding, and growth monitoring. Success with these initiatives supported the continued movement of health development efforts away from the comprehensive approach of primary health care and toward programs that targeted specific public health priorities. This approach was reinforced by the need to address the HIV/AIDS epidemic. By the 1990s, primary health care had fallen out of favor in many global-health policy circles, and low- and middle-income countries were being encouraged to reduce public sector spending on health and to focus on cost-effectiveness analysis to provide a package of health care measures thought to offer the greatest health benefits. Low- and middle-income countries, defined by a per capita gross national income of <$12,476 (U.S.) per year, account for >80% of the world's population. Average life expectancy in these countries lags far behind that in high-income countries: the average life expectancy at birth is 74 years in high-income countries but only 68 years in middle-income countries and 58 years in low-income countries. This discrepancy has received growing attention over the past 40 years.
Initially, the situation in poor countries was characterized primarily in terms of high fertility and high infant, child, and maternal mortality rates, with most deaths and illnesses attributable to infectious or tropical diseases among remote, largely rural populations. With growing adult (and especially elderly) populations and changing lifestyles linked to global forces of urbanization, a new set of health challenges characterized by chronic diseases, environmental overcrowding, and road traffic injuries has emerged rapidly (Fig. 13e-1). The majority of tobacco-related deaths globally now occur in low- and middle-income countries, and the risk of a child's dying from a road traffic injury in Africa is more than twice that in Europe. Hence, low- and middle-income countries in the twenty-first century face a full spectrum of health challenges—infectious, chronic, and injury-related—at much higher incidences and prevalences than are documented in high-income countries and with many fewer resources to address these challenges.

FIGURE 13e-1 Projections of disease burden to 2030 for high-, middle-, and low-income countries (left, center, and right, respectively), by cause: intentional injuries; other unintentional injuries; road traffic accidents; other noncommunicable diseases; cancers; cardiovascular disease; maternal, perinatal, and nutritional conditions; other infectious diseases; and HIV/AIDS, TB, and malaria. TB, tuberculosis. (Source: World Health Organization: The Global Burden of Disease 2004 Update, 2008.)

Addressing these challenges, however, does not mean simply waiting for economic growth. Analysis of the association between wealth and health across countries reveals that, for any given level of wealth, there is a substantial variation in life expectancy at birth that has persisted despite overall global progress in life expectancy during the past 30 years (Fig. 13e-2). Health status in low- and middle-income countries varies enormously. Nations such as Cuba and Costa Rica have life expectancies and childhood mortality rates similar to or even better than those in high-income countries; in contrast, countries in sub-Saharan Africa and the former Soviet bloc have experienced significant reverses in these health markers in the past 20 years. As Angus Deaton stated in the World Institute for Development Economics Research annual lecture on September 29, 2006, "People in poor countries are sick not primarily because they are poor but because of other social organizational failures, including health delivery, which are not automatically ameliorated by higher income." This analysis concurs with classic studies of the array of societal factors explaining good health in poor settings such as Cuba and Kerala State in India. Analyses conducted over the past three decades indeed show that rapid health improvement is possible in very different contexts. That some countries continue to lag far behind can be understood through a comparison of regional differences in progress in terms of life expectancy over this period (Fig. 13e-3). While most regions have made impressive progress, sub-Saharan Africa and the former Soviet states have seen stagnation and even reversals.

FIGURE 13e-2 Gross domestic product (GDP) per capita (constant 2000 international $) and life expectancy at birth in 169 countries, 1975 and 2005. Only outlying countries are named. (Source: World Health Organization: Primary Health Care: Now More Than Ever. World Health Report 2008.)

FIGURE 13e-3 Regional trends in life expectancy. CEE and CIS, Central and Eastern Europe and the Commonwealth of Independent States; OECD, Organization for Economic Co-operation and Development. (Source: World Health Organization: Closing the Gap in a Generation: Health Equity Through Action on the Social Determinants of Health. Commission on Social Determinants of Health Final Report, 2008.)

As average levels of health vary across regions and countries, so too do they vary within countries (Fig. 13e-4). Indeed, disparities within countries are often greater than those between high-income and low-income countries. For example, if low- and middle-income countries could reduce their overall childhood mortality rate to that of the richest one-fifth of their populations, global childhood mortality could be decreased by 40%. Disparities in health are mostly a result of social and economic factors such as daily living conditions, access to resources, and ability to participate in life-affecting decisions. In most countries, the health care sector actually tends to exacerbate health inequalities (the "inverse-care law"); because of neglect and discrimination, poor and marginalized communities are much less likely to benefit from public health services than those that are better off. Reforming health systems toward people-centered primary care provides an opportunity to reverse these negative trends. Health services have failed to make their contribution to reducing these pervasive social inequalities, which would require ensuring universal access to existing, scientifically validated, low-cost interventions such as insecticide-treated bed nets for malaria, taxes on cigarettes, short-course chemotherapy for tuberculosis, antibiotic treatment for pneumonia, dietary modification and secondary prevention measures for high blood pressure and high cholesterol levels, and water treatment and oral rehydration therapy for diarrhea. Despite decades of "essential packages" and "basic" health campaigns, the effective implementation of what is already known to work appears (deceptively) to be difficult.
Recent analyses have begun to focus on "the how" (as opposed to "the what") of health care delivery, exploring why health progress is slow despite the abundant availability of proven interventions for health conditions in low- and middle-income countries. Three general categories of reasons are being identified: (1) shortfalls in performance of health systems; (2) stratifying social conditions; and (3) skews in science. Specific health problems often require the development of specific health interventions (e.g., tuberculosis requires short-course chemotherapy). However, the delivery of different interventions is often facilitated by a common set of resources or functions: money or financing, trained health workers, and facilities with reliable supplies fit for multiple purposes. Unfortunately, health systems in most low- and middle-income countries are largely dysfunctional at present. In the large majority of low- and middle-income countries, the level of public financing for health is woefully insufficient: whereas high-income countries spend, on average, 7% of the gross domestic product on health, middle-income countries spend <4% and low-income countries <3%. External financing for health through various donor channels has grown significantly over time. While these funds for health are significant (~$20 billion [U.S.] in 2008 for low- and middle-income countries), they represent <2% of total health expenditures in low- and middle-income countries and hence are neither a sufficient nor a long-term solution to chronic underfinancing. In Africa, 70% of health expenditures come from domestic sources. The predominant form of health care financing—charging patients at the point of service—is the least efficient and the most inequitable, tipping millions of households into poverty annually. Health workers, who represent another critical resource, are often inadequately trained and supported in their work.
Recent estimates indicate a shortage of >4 million health workers, constituting a crisis that is greatly exacerbated by the migration of health workers from low- and middle-income countries to high-income countries. Sub-Saharan Africa carries 24% of the global disease burden but has only 3% of the health workforce (Fig. 13e-5). The International Organization for Migration estimated in 2006 that there were more Ethiopian physicians practicing in Chicago than in Ethiopia itself. Critical diagnostics and drugs often do not reach patients in need because of supply-chain failures. Moreover, facilities fail to provide safe care: new evidence suggests much higher rates of adverse events among hospitalized patients in low- and middle-income countries than in high-income countries. Weak government planning, regulatory, monitoring, and evaluation capacities are associated with rampant, unregulated commercialization of health services and chaotic fragmentation of these services as donors "push" their respective priority programs. With such fragile foundations, it is not surprising that low-cost, affordable, validated interventions are not reaching those who need them. Health care delivery systems do not exist in a vacuum but rather are embedded in a complex of social and economic forces that often stratify opportunities for health unfairly.

FIGURE 13e-4 A. Mortality of children under 5 years old, by place of residence, in five countries. (Source: Data from the World Health Organization.) B. Full basic immunization coverage (%), by income group. (Source: Primary Health Care: Now More Than Ever. World Health Report 2008.)

FIGURE 13e-5 Global burden of disease and health workforce. (Source: World Health Organization: Working Together for Health, 2006.)
Most worrisome are the pervasive forces of social inequality that serve to marginalize populations with disproportionately large health needs (e.g., the urban poor and illiterate mothers). Why should a poor slum dweller with no income be expected to come up with the money for a bus fare needed to travel to a clinic to learn the results of a sputum test for tuberculosis? How can a mother living in a remote rural village and caring for an infant with febrile convulsions find the means to get her child to appropriate care? Shaky or nonexistent social security systems, dangerous work environments, isolated communities with little or no infrastructure, and systematic discrimination against minorities are among the myriad forces with which efforts for more equitable health care delivery must contend. While science has yielded enormous breakthroughs in health in high-income countries, with some spillover to low- and middle-income countries, many important health problems continue to affect primarily low- and middle-income countries, for which research and development investments are deplorably inadequate. The past decade has seen growing efforts to right this imbalance with research and development investment in new drugs, vaccines, and diagnostics that effectively cater to the specific health needs of populations in low- and middle-income countries. For example, the Medicines for Malaria Venture has revitalized a previously "dry" pipeline for new malaria drugs. This is but one of many such efforts, but much more needs to be done. As discussed above, the primary constraint on better health in low- and middle-income countries is related less to the availability of health technologies and more to their effective delivery. Underlying these systems and social challenges to greater equity in health is a major bias regarding what constitutes legitimate "science" to improve health equity.
The lion's share of health research financing is channeled toward the development of new technologies—drugs, vaccines, and diagnostics; in contrast, virtually no resources are directed toward research on how health care delivery systems can become more reliable and overcome adverse social conditions. The complexity of systems and social context is such that this issue of delivery requires an enormous investment in terms not only of money but also of scientific rigor, with the development of new research methods and measures and the attainment of greater legitimacy in the mainstream scientific establishment. These common challenges to low- and middle-income countries partly explain the resurgence of interest in the primary health care approach. In some countries (mostly middle-income), significant progress has been made in expanding coverage by health systems based on primary care and even in improving indicators of population health. More countries are embarking on the creation of primary care services despite the challenges that exist, particularly in low-income countries. Even when these challenges are acknowledged, there are many reasons for optimism that low- and middle-income countries can accelerate progress in building primary care. The new millennium has seen a resurgence of interest in primary health care as a means of addressing global health challenges. This interest has been driven by many of the same issues that led to the Declaration of Alma Ata: rapidly increasing disparities in health between and within countries, spiraling costs of health care at a time when many people lack quality care, dissatisfaction of communities with the care they are able to access, and failure to address changes in health threats, especially noncommunicable disease epidemics. These challenges require a comprehensive approach and strong health systems with effective primary care.
Global health development agencies have recognized that sustaining gains in public health priorities such as HIV/AIDS requires not only robust health systems but also the tackling of social and economic factors related to disease incidence and progression. Weak health systems have proved a major obstacle to delivering new technologies, such as antiretroviral therapy, to all who need them. Changing disease patterns have led to a demand for health systems that can treat people as individuals whether or not they present to a health facility with the public health "priority" (e.g., HIV/AIDS or tuberculosis) to which that facility is targeted. We discuss experiences in low- and middle-income countries in relation to primary care in greater detail below. First, we consider the features of primary health care and primary care as currently understood. At the 2009 World Health Assembly (an annual meeting of all countries to discuss the work of the World Health Organization [WHO]), a resolution was passed reaffirming the principles of the Declaration of Alma Ata and the need for national health systems to be based on primary health care. This resolution did not suggest that nothing had changed in the intervening 30 years since the declaration, nor did it dispute that its prescription needed reframing in light of changing public health needs. The 2008 WHO World Health Report describes how a primary health care approach is necessary "now more than ever" to address global health priorities, especially in terms of disparities and new health challenges. As discussed below, this report highlights four broad areas in which reform is required (Fig. 13e-6). One of these areas—the need to organize health care so that it places the needs of people first—essentially relates to the necessity for strong primary care in health systems and what this requirement entails. The other three areas also relate to primary care.
All four areas require action to move health systems in a direction that will reduce disparities and increase the satisfaction of those they serve. The World Health Report’s recommendations present a vision of primary health care that is based on the principles of Alma Ata but that differs from many attempts to implement primary health care in the 1970s and 1980s. 

FIGURE 13e-6 The four reforms of primary health care renewal. (Source: World Health Organization: Primary Health Care: Now More Than Ever. World Health Report 2008.) 

Universal Coverage Reforms to Improve Health Equity 

Despite progress in many countries, most people in the world can receive health care services only if they can pay at the point of service. Disparities in health are caused not only by a lack of access to necessary health services but also by the impact of expenditure on health. More than 100 million people are driven into poverty each year by health care costs, with countless others deterred from accessing services at all. Moving toward prepayment financing systems for universal coverage, which ensure access to a comprehensive package of services according to need without precipitating economic ruin, is therefore emerging as a major priority in low- and middle-income countries. Increasing coverage of health services can be considered in terms of three axes: the proportion of the population covered, the range of services underwritten, and the percentage of costs paid (Fig. 13e-7). Moving toward universal coverage requires ensuring the availability of health care services to all, eliminating barriers to access, and organizing pooled financing mechanisms, such as taxation or insurance, to remove user fees at the point of service. 
It also requires measures beyond financing, including expansion of health services in poorly served areas, improvement in the quality of services provided to marginalized communities, and increased coverage of other social services that significantly affect health (e.g., education). 

FIGURE 13e-7 Three ways of moving toward universal coverage: breadth (who is insured?), depth (which benefits are covered?), and height (what proportion of the costs is covered by public expenditure on health?). 

Service Delivery Reforms to Make Health Systems People-Centered 

Health systems have often been organized around the needs of those who provide health care services, such as clinicians and policymakers. The result is a centralization of services or the provision of vertical programs that target single diseases. The principles of primary health care, including the development of primary care, reorient care around the needs of the people to whom services cater. This “people-centered” approach aims to provide health care that is both more effective and appropriate. The increase in noncommunicable diseases in low- and middle-income countries offers a further stimulus for urgent reform of service delivery to improve chronic disease care. As discussed above, large numbers of people currently fail to receive relatively low-cost interventions that have reduced the incidence of these diseases in high-income countries. Delivery of these interventions requires health systems that can address multiple problems and manage people over a long period within their own communities, yet many low- and middle-income countries are only now starting to adapt and build primary care services that can address noncommunicable diseases and communicable diseases requiring chronic care. 
Even some countries (e.g., Iran) that have had significant success in reducing communicable diseases and improving child survival have been slow to adapt their health systems to rapidly accelerating noncommunicable disease epidemics. People-centered care requires a safe, comprehensive, and integrated response to the needs of those presenting to health systems, with treatment at the first point of contact or referral to appropriate services. Because no discrete boundary separates people’s needs for health promotion, curative interventions, and rehabilitation services across different diseases, primary care services must address all presenting problems in a unified way. Meeting people’s needs also involves improved communication between patients and their clinicians, who must take the time to understand the impact of the patients’ social context on the problems they present with. This enhanced understanding is made possible by improvements in the continuity of care so that responsibility transcends the limited time people spend in health care facilities. Primary care plays a vital role in navigating people through the health system; when people are referred elsewhere for services, primary care providers must monitor the resulting consultations and perform follow-up. All too often, people do not receive the benefit of complex interventions undertaken in hospitals because they lose contact with the health care system once discharged. Comprehensiveness and continuity of care are best achieved by ensuring that people have an ongoing personal relationship with a care team. 

Public Policy Reforms to Promote and Protect the Health of Communities 

Public policies in sectors other than health care are essential to reduce disparities in health and to make progress toward global public health targets. 
The 2008 final report of the WHO Commission on Social Determinants of Health provides an exhaustive review of the intersectoral policies required to address health inequities at the local, national, and global levels. Advances against major challenges such as HIV/AIDS, tuberculosis, emerging pandemics, cardiovascular disease, cancers, and injuries require effective collaboration with sectors such as transport, housing, labor, agriculture, urban planning, trade, and energy. While tobacco control provides a striking example of what is possible if different sectors work together toward health goals, the lack of implementation of many evidence-based tobacco control measures in most countries just as clearly illustrates the difficulties encountered in such intersectoral work and the unrealized potential of public policies to improve health. At the local level, primary care services can help enact health-promoting public policies in other sectors. 

Leadership Reforms to Make Health Authorities More Responsive 

The Declaration of Alma Ata emphasized the importance of participation by people in their own health care. In fact, participation is important at all levels of decision-making. Contemporary health challenges require new models of leadership that acknowledge the role of government in reducing disparities in health but that also recognize the many types of organizations that provide health care services. Governments need to guide and negotiate among these different groups, including nongovernmental organizations (NGOs) and the private sector, and to provide strong regulation where necessary. This difficult task requires a massive reinvestment in leadership and governance capacity, especially if action by different sectors is to be effectively implemented. Moreover, disadvantaged groups and other actors are increasingly expecting that their voices and health needs will be included in the decision-making process. 
The complex landscape for leadership at the national level is mirrored in many ways at the international or global level. The transnational character of health and the increasing interdependence of countries with respect to outbreak diseases, climate change, security, migration, and agriculture place a premium on more effective global health governance. Aspects of the primary health care approach described above, with an emphasis on primary care services, have been implemented to varying degrees in many low- and middle-income countries over the past half-century. As discussed above, some of these experiences inspired and informed the Declaration of Alma Ata, which itself led many more countries to attempt to implement primary health care. This section describes the experiences of a selection of low- and middle-income countries in improving primary care services that have enhanced the health of their populations. Before Alma Ata, few countries had attempted to develop primary health care on a national level. Rather, most focused on expanding primary care services to specific communities (often rural villages), making use of community volunteers to compensate for the absence of facility-based care. In contrast, in the post–World War II period, China invested in primary care on a national scale, and life expectancy doubled within roughly 20 years. The Chinese expansion of primary care services included a massive investment in infrastructure for public health (e.g., water and sanitation systems) linked to innovative use of community health workers. These “barefoot doctors” lived in and expanded care to rural villages. They received a basic level of training that enabled them to provide immunizations, maternal care, and basic medical interventions, including the use of antibiotics. Through the work of the barefoot doctors, China brought low-cost universal basic health care coverage to its entire population, most of which had previously had no access to these services. 
In 1982, the Rockefeller Foundation convened a conference to review the experiences of China along with those of Costa Rica, Sri Lanka, and the state of Kerala in India. In all of these locations, good health care at low cost appeared to have been achieved. Despite lower levels of economic development and health spending, all of these jurisdictions, along with Cuba, had health indicators approaching—or in some cases exceeding—those of developed countries. Analysis of these experiences revealed a common emphasis on primary care services, with expansion of care to the entire population free of charge or at low cost, combined with community participation in decision-making about health services and coordinated work in different sectors (especially education) toward health goals. During the three decades since the Rockefeller meeting, some of these countries have built on this progress, while others have experienced setbacks. Recent experiences in developing primary care services show that the same combination of features is necessary for success. For example, Brazil—a large country with a dispersed population—has made major strides in increasing the availability of health care in the past quarter century. In this millennium, the Brazilian Family Health Program has expanded progressively across the country, with almost all areas now covered. This program provides communities with free access to primary care teams made up of primary care physicians, community health workers, nurses, dentists, obstetricians, and pediatricians. These teams are responsible for the provision of primary care to all people in a specified geographic area—not only those who access health clinics. Moreover, individual community health workers are responsible for a named list of people within the area covered by the primary care team. Problems with access to health care persist in Brazil, especially in isolated areas and urban slums. 
However, solid evidence indicates that the Family Health Program has already contributed to impressive gains in population health, particularly in terms of childhood mortality and health inequities. In fact, this program has already had an especially marked impact on childhood mortality reduction in less developed areas (Fig. 13e-8). 

FIGURE 13e-8 Improvements in childhood mortality following the Family Health Program in Brazil. HDI, Human Development Index; PSF, Programa Saúde da Família (Family Health Program). (Source: Ministry of Health, Brazil.) 

Chile has also built on its existing primary care services in the past decade, aiming to improve the quality of care and the extent of coverage in remote areas, above all for disadvantaged populations. This effort has been made in concert with measures aimed at reducing social inequalities and fostering development, including social welfare benefits for families and disadvantaged groups and increased access to early-childhood educational facilities. As in Brazil, these steps have improved maternal and child health and have reduced health inequities. In addition to directly enhancing primary care services, Brazil and Chile have instituted measures to increase both the accountability of health providers and the participation of communities in decision-making. In Brazil, national and regional health assemblies with high levels of public participation are integral parts of the health policy–making process. Chile has instituted a patient’s charter that explicitly specifies the rights of patients in terms of the range of services to which they are entitled. Other countries that have made recent progress with primary health care include Bangladesh, one of the poorest countries in the world. 
Since achieving its independence from Pakistan in 1971, Bangladesh has seen a dramatic increase in life expectancy, and childhood mortality rates are now lower than those in neighboring nations such as India and Pakistan. The expansion of access to primary health care services has played a major role in these achievements. This progress has been spearheaded by a vibrant NGO community that has focused its attention on improving the lives and livelihoods of poor women and their families through innovative and integrated microcredit, education, and primary care programs. The above examples, along with others from the past 30 years in countries such as Thailand, Malaysia, Portugal, and Oman, illustrate how the implementation of a primary health care approach, with a greater emphasis on primary care, has led to better access to health care services—a trend that has not been seen in many other low- and middle-income countries. This trend, in turn, has contributed to improvements in population health and reductions in health inequities. However, as these nations have progressed, other countries have shown how previous gains in primary care can easily be eroded. In sub-Saharan Africa, undermining of primary care services has contributed to catastrophic reversals in health outcomes catalyzed by the HIV/AIDS epidemic. Countries such as Botswana and Zimbabwe implemented primary health care strategies in the 1980s, increasing access to care and making impressive gains in child health. Both countries have since been severely affected by HIV/AIDS, with pronounced decreases in life expectancy. 

FIGURE 13e-9 Changes in source of health expenditure in China over the past 40 years (percentage of total health expenditure). (Source: World Health Organization: Primary Health Care: Now More Than Ever. World Health Report 2008.) 
However, Zimbabwe has also seen political turmoil, a decline of health and other social services, and the flight of health personnel, whereas Botswana has maintained primary care services to a greater extent and has managed to organize widespread access to antiretroviral therapy for people living with HIV/AIDS. Zimbabwe’s health situation has therefore become more desperate than that in Botswana. China provides a particularly striking example of how changes in health policy relevant to the organization of health systems (Fig. 13e-9) can have rapid, far-reaching consequences for population health. Even as the 1982 Rockefeller conference was celebrating China’s achievements in primary care, its health system was unraveling. The decision to open up the economy in the early 1980s led to rapid privatization of the health sector and the breakdown of universal health coverage. As a result, by the end of the 1980s, most people, especially the poorer segments of the population, were paying directly out of pocket for health care, and almost no Chinese had insurance—a dramatic transformation. The “barefoot doctor” schemes collapsed, and the population either turned to care paid for at hospitals or simply became unable to access care. This undermining of access to primary care services in the Chinese system and the resulting increase in impoverishment due to illness contributed to the stagnation of progress in health in China at the same time that incomes in that country increased at an unprecedented rate. Reversals in primary care have meant that China now increasingly faces health care issues similar to those faced by India. In both countries, rapid economic growth has been linked to lifestyle changes and noncommunicable disease epidemics. The health care systems of the two nations share two negative features that are common when primary care is weak: a disproportionate focus on specialty services provided in hospitals and unregulated commercialization of health services. 
China and India have both seen expansion of private hospital services that cater to middle-class and urban populations who can afford care; at the same time, hundreds of millions of people in rural areas now struggle to access the most basic services. Even in the former groups, a lack of primary care services has been associated with late presentation with illness and with insufficient investment in primary prevention approaches. This neglect of prevention poses a risk of large-scale epidemics of cardiovascular disease, which could endanger continued economic growth. In addition, the health systems of both countries now depend for the majority of their funding on out-of-pocket payments by people when they use services. Thus substantial proportions of the population must sacrifice other essential goods as a result of health expenditure and may even be driven into poverty by this cost. The commercial nature of health services with inadequate or no regulation has also led to the proliferation of charlatan providers, inappropriate care, and pressure for people to pay for expensive and sometimes unnecessary care. Commercial providers have limited incentives to use interventions (including public health measures) that cannot be charged for or that cost less than what the person who is paying can afford. Faced with these problems, China and India have implemented measures to strengthen primary health care. China has increased government funding of health care, has taken steps toward restoring health insurance, and has enacted a target of universal access to primary care services. India has similarly mobilized funding to greatly expand primary care services in rural areas and is now replicating this process in urban settings. Both countries are increasingly using public resources from their growing economies to fund primary care services. 
These encouraging trends are illustrative of new opportunities to implement a primary health care approach and strengthen primary care services in low- and middle-income countries. Brazil, India, China, and Chile are being joined by many other low- and middle-income countries, including Indonesia, Mexico, the Philippines, Turkey, Rwanda, Ethiopia, South Africa, and Ghana, in ambitious initiatives mobilizing new resources to move toward universal coverage of health services at affordable cost. Global public health targets will not be met unless health systems are significantly strengthened. More money is currently being spent on health than ever before. In 2005, global health spending totaled $5.1 trillion (U.S.)—double the amount spent a decade earlier. Although most expenditure occurs in high-income countries, spending in many emerging middle-income countries has rapidly accelerated, as has the allocation of monies for this purpose by both governments in, and donors to, low-income countries. These twin trends—greater emphasis on building health systems based on primary care and allotment of more money for health care—provide opportunities to address many of the challenges discussed above in low- and middle-income countries. Accelerating progress requires a better understanding of how global health initiatives can more effectively facilitate the development of primary care in low-income countries. A review by the WHO Maximizing Positive Synergies Collaborative Group looked at programs funded by the Global Fund to Fight AIDS, Tuberculosis and Malaria; the Global Alliance for Vaccines and Immunisation (GAVI); the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR); and the World Bank (on HIV/AIDS). This group found that global health initiatives had improved access to and quality of the targeted health services and had led to better information systems and more adequate financing. 
The review also identified the need for better alignment of global health initiatives with other national health priorities and systematic exploitation of potential synergies. If global health initiatives implement programs that work in tandem with other components of national health systems without undermining staffing and procurement of supplies, they have the potential to contribute substantially to the capacity of health systems to provide comprehensive primary care services. Even in the aftermath of the global financial crisis, global health initiatives continue to draw significant funding. In 2009, for example, U.S. President Barack Obama announced increasing development assistance from the United States for global health, earmarking $63 billion over the period 2009–2014 for a Global Health Initiative. New funding is also promised through a range of other initiatives focusing particularly on maternal and child health in low-income countries. The general trend is to coordinate this funding in order to reduce fragmentation of national health systems and to concentrate more on strengthening these systems. Comprehensive primary care in low-income countries must inevitably deal with the rapid emergence of chronic diseases and the growing prominence of injury-related health problems; thus, international health development assistance must become more responsive to these needs. Beyond the new streams of funding for health services, other opportunities exist. Increased social participation in health systems can help build primary care services. In many countries, political pressure from community advocates for more holistic and accountable care as well as entrepreneurial initiatives to scale up community-based services through NGOs have accelerated progress in primary care without major increases in funding. 
Participation of the population in the provision of health care services and in relevant decision-making often drives services to cater to people’s needs as a whole rather than to narrow public health priorities. Participation and innovation can help address critical issues related to the health workforce in low- and middle-income countries by establishing effective people-centered primary care services. Many primary care services do not need to be delivered by a physician or a nurse. Multidisciplinary teams can include paid community workers who have access to a physician if necessary but who can provide a range of health services on their own. In Ethiopia, more than 30,000 community health workers have been trained and deployed to improve access to primary care services, and there is increasing evidence that this measure is contributing to better health outcomes. In India, more than 600,000 community health advocates have been recruited as part of expanded rural primary care services. In Niger, the deployment of community health workers to deliver essential child health interventions (as a component of integrated community case management) has had impressive results in reducing childhood mortality and decreasing disparities. After the Declaration of Alma Ata, experiences with community health workers were mixed, with particular problems relating to levels of training and lack of payment. Current endeavors are not immune from these concerns. However, with access to physician support and the deployment of teams, some of these concerns may be addressed. Growing evidence from many countries indicates that shifting appropriate tasks to primary care workers who have had shorter, less expensive training than physicians will be essential to address the human resources crisis. 
Finally, recent improvements in information and communication technologies, particularly mobile phone and Internet systems, have created the potential for systematic implementation of e-health, telemedicine, and improved health data initiatives in low- and middle-income countries. These developments raise the tantalizing possibility that health systems in these countries, which have long lagged behind those in high-income countries but are less encumbered by legacy systems that have proved hard to modernize in many settings, could leapfrog their wealthier counterparts in exploiting these technologies. The challenges posed by poor or absent infrastructure and investment in many low- and middle-income countries should not be underestimated and will need to be addressed to make this possibility a reality. Nevertheless, the rapid rollout of mobile networks, and their use for health and other social services in many low-income countries where access to fixed telephone lines was previously very limited, offers great promise in building primary care services in low- and middle-income countries. As concern continues to mount about glaring inequities in global health, there is a growing commitment to redress these egregious shortfalls, as exemplified by global mobilization around the United Nations’ Millennium Development Goals and the early discussions on what targets should build on these goals in the post-2015 era. This commitment begins first and foremost with a clear vision of the fundamental importance of health in all countries, regardless of income. The values of health and health equity are shared across all borders, and primary health care provides a framework for their effective translation across all contexts. 
The translation of these fundamental values has its roots in four types of reforms that reflect the distinct but interlinked challenges of (re)orienting a society’s resources on the basis of its citizens’ health needs: (1) organizing health care services around the needs of people and communities; (2) harnessing services and sectors beyond health care to promote and protect health more effectively; (3) establishing sustainable and equitable financing mechanisms for universal coverage; and (4) investing in effective leadership of the whole of society. This common primary health care agenda highlights the striking similarity, despite enormous differences in context, in the nature and direction of the reforms that national health systems must undertake to promote greater equity in health. This shared agenda is complemented by the growing reality of global health interconnectedness due, for example, to shared microbial threats, bridging of ethnolinguistic diversity, flows in migrant health workers, and mobilization of global funds to support the neediest populations. Embracing solidarity in global health while strengthening health systems through a primary health care approach is fundamental to sustained progress in global health. 

Complementary, Alternative, and Integrative Health Practices 
Josephine P. Briggs 

The search for health includes many beliefs and practices that are outside conventional medicine. 
Physicians are important sources for information and guidance about health matters, but our patients also rely on a wide range of other sources including family and friends, cultural traditions, alternative practitioners, and increasingly the Internet, popular media, and advertising. 

TABLE 14e-1 Brief Definitions of Common Complementary and Alternative Health Practices 

Acupuncture and acupressure: A family of procedures involving stimulation of defined anatomic points, a component of the major Asian medical traditions; the most common application involves the insertion and manipulation of thin metallic needles. 
Alexander technique: A movement therapy that uses guidance and education to improve posture, movement, and efficient use of muscles for improvement of overall body functioning. 
Guided imagery: The use of relaxation techniques followed by the visualization of images, usually calm and peaceful in nature, to invoke specific images to alter neurologic function or physiologic states. 
Hypnosis: The induction of an altered state of consciousness characterized by increased responsiveness to suggestion. 
Massage: Manual therapies that manipulate muscle and connective tissues to promote muscle relaxation, healing, and sense of well-being. 
Meditation: A group of practices, largely based in Eastern spiritual traditions, intended to focus or control attention and obtain greater awareness of the present moment, or mindfulness. 
Reflexology: Manual stimulation of points on hands or feet that are believed to affect organ function. 
Rolfing/structural integration: A manual therapy that attempts to realign the body by deep tissue manipulation of fascia. 
Spinal manipulation: A range of manual techniques, employed by chiropractors and osteopaths, for adjustments of the spine to affect neuromuscular function and other health outcomes. 
Tai chi: A mind-body practice originating in China that involves slow, gentle movements and sometimes is described as “moving meditation.” 
Therapeutic touch: Secular version of the laying on of hands, described as “healing meditation.” 
Yoga: An exercise practice, originally East Indian, that combines breathing exercises, physical postures, and meditation. 
Ayurvedic medicine: The major East Indian traditional medicine system; treatment includes meditation, diet, exercise, herbs, and elimination regimens (using emetics and diarrheals). 
Curanderismo: A spiritual healing tradition common in Latin American communities that uses ritual cleansing, herbs, and incantations. 
Native American medicine: Diverse traditional systems that incorporate chanting, shaman healing ceremonies, herbs, laying on of hands, and smudging (ritual cleansing with smoke from sacred plants). 
Tibetan medicine: A medical system that uses diagnosis by pulse and urine examination; therapies include herbs, diet, and massage. 
Traditional Chinese medicine: A medical system that uses acupuncture, herbal mixtures, massage, exercise, and diet. 
Unani medicine: An East Indian medical system, derived from Persian medicine, practiced primarily in the Muslim community; also called “hikmat.” 
Anthroposophic medicine: A spiritually based system of medicine that incorporates herbs, homeopathy, diet, and a movement therapy called eurythmy. 
Chiropractic: Care that involves the adjustment of the spine and joints to alleviate pain and improve general health; primarily used to treat back problems, musculoskeletal complaints, and headaches. 
Homeopathy: A medical system with origins in Germany that is based on a core belief in the theory of “like cures like”—compounds that produce certain syndromes, if administered in very diluted solutions, will be curative. 
Naturopathy: A clinical discipline that emphasizes a holistic approach to the patient, herbal medications, diet, and exercise; practitioners have degrees as doctors of naturopathy. 
Osteopathy: A clinical discipline, now incorporated into mainstream medicine, that historically emphasized spinal manipulative techniques to relieve pain, restore function, and promote overall health. 
It is essential for physicians to understand what patients are doing to seek health, as this understanding is important to harness potential benefits and to help patients avoid harm. The phrase complementary and alternative medicine is used to describe a group of diverse medical and health care systems, practices, and products that have historic origins outside mainstream medicine. Most of these practices are used together with conventional therapies and therefore have been called complementary to distinguish them from alternative practices, those used as a substitute for standard care. Use of dietary supplements; mind-body practices such as acupuncture, massage, meditation, and hypnosis; and care from a traditional healer all fall under this umbrella. Brief definitions for some of the common complementary and alternative health practices are provided in Table 14e-1. Although some complementary health practices are implemented by a complementary health care provider such as a chiropractor, acupuncturist, or naturopathic practitioner, or by a physician, many of these practices are undertaken as “self-care.” Most are paid for out of pocket. In the last decade or so, the terms integrative care and integrative medicine have entered the dialogue. A 2007 national survey conducted by the Centers for Disease Control and Prevention’s National Center for Health Statistics found that 42% of hospices had integrated complementary health practices into the care they provide. Integration of select complementary approaches is also common in Veterans Administration and Department of Defense facilities, particularly as part of management of pain and post-traumatic stress disorder. The term integrative medicine is usually used to refer to a style of practice that places strong emphasis on a holistic approach to patient care while focusing on reduced use of technology. 
Chapter 14e Complementary, Alternative, and Integrative Health Practices

Physicians advocating this approach generally include selected complementary health practices in the care they offer patients, and many have established practice settings that include complementary health practitioners. Although this approach appears to be attractive to many patients, the heavy use of dietary supplements and the weaknesses in the evidence base for a number of the interventions offered in integrative practices continue to attract substantial concern and controversy. Until a decade or so ago, "complementary and alternative medicine" could be defined as practices that are neither taught in medical schools nor reimbursed, but this definition is no longer workable, since medical students increasingly seek and receive some instruction about complementary health practices, and some practices are reimbursed by third-party payers. Another definition, practices that lack an evidence base, is also not useful, since there is a growing body of research on some of these modalities, and some aspects of standard care do not have a strong evidence base. By its nature, the demarcation between mainstream medicine and complementary health practices is porous, varying from culture to culture and over time. Traditional Chinese medicine and the Indian practice of Ayurvedic medicine were once the dominant health teachings in those cultures. Certain health practices that arose as challenges to the mainstream have been integrated gradually into conventional care. Examples include the teachings of Fernand Lamaze that led to the widespread use of relaxation techniques during childbirth, the promotion of lactation counseling by the La Leche League, and the teachings of Cicely Saunders and Elisabeth Kübler-Ross that established the hospice movement. The late nineteenth century saw the development of a number of healing philosophies by care providers who were critical of the medicine of the time.
Of these, naturopathy and homeopathy, which arose in Germany, and chiropractic and osteopathy, which developed in the United States, have continued to endure. Osteopathic medicine is currently thoroughly integrated into conventional medicine, although the American Medical Association (AMA) labeled it a cult as late as 1960. The other three traditions have remained resolutely separate from mainstream medicine, although chiropractic care is available in some conventional care settings. The first large survey of use of these practices was performed by David Eisenberg and associates in 1993. It surprised the medical community by showing that more than 30% of Americans use complementary or alternative health approaches. Many studies since that time have extended those conclusions. Subsequently, the National Health Interview Survey (NHIS), a large, national survey conducted by the National Center for Health Statistics, a component of the Centers for Disease Control and Prevention, has addressed the use of complementary health practices and largely confirmed those results. The NHIS is a household survey of many kinds of health practices in the civilian population; it uses methods that create a nationally representative sample and has a sample size large enough to permit valid estimates about some subgroups. In 2002, 2007, and 2012, the survey included a set of questions that addressed complementary and alternative health approaches. Information was obtained from 31,000 adults in 2002 and 23,300 adults and 9400 children in 2007. Only preliminary data are available from the 2012 survey. In all three surveys, approximately 40% of adults report using some form of complementary therapy or health practice. In the 2007 study, 38% of adults and 12% of children had used one or more modalities. These surveys yield the estimate that nonvitamin, nonmineral dietary supplements are used by approximately 18% of the population. 
The most prevalent practices are relaxation techniques and meditation, chiropractic care, and therapeutic massage. Americans are willing to pay for these services; the estimated out-of-pocket expenditure for complementary health practices in 2007 was $34 billion, representing 1.5% of total health expenditures and 11% of out-of-pocket costs. The appeal of unproven complementary health approaches continues to perplex many physicians. Many factors contribute to these choices. Some patients seek out complementary health practitioners because they offer optimism or greater personal attention. For others, alternative approaches reflect a "self-help" approach to health and wellness or satisfy a search for "natural" or less invasive alternatives, since dietary supplements and other natural products are believed, correctly or not, to be inherently healthier and safer than standard pharmaceuticals. In NHIS surveys, the most common health conditions cited by patients for use of complementary health practices involve management of symptoms often poorly controlled by conventional care, particularly back pain and other painful musculoskeletal complaints, anxiety, and insomnia.

PRACTITIONER-BASED DISCIPLINES

Licensure and Accreditation At present, six fields of complementary health practice—osteopathic manipulation, chiropractic, acupuncture and traditional Chinese medicine, therapeutic massage, naturopathy, and homeopathy—are subject to some form of educational accreditation and state licensure. Accreditation of educational programs is the responsibility of professional organizations or commissions under federal oversight by the Department of Education. Licensure, in contrast, is strictly a state matter, generally determined by state legislatures. Legal recognition establishes public access to therapies even when there is no scientific consensus about their clinical value.
Osteopathic Manipulative Therapy Founded in 1892 by the physician Andrew Taylor Still, osteopathic medicine was originally based on the belief that manipulation of soft tissue and bone can correct a wide range of diseases of the musculoskeletal and other organ systems. Over the ensuing century, the osteopathic profession has welcomed increasing integration with conventional medicine. Today, the postgraduate training, practice, credentialing, and licensure of osteopathic physicians are virtually indistinguishable from those of allopathic physicians. Osteopathic medical schools, however, include training in manual therapies, particularly spinal manipulation. Approximately 70% of family practice osteopathic physicians perform manipulative therapies on some of their patients.

Chiropractic The practice of chiropractic care, founded by Daniel David Palmer in 1895, is the most widespread practitioner-based complementary health practice in the United States. Chiropractic practice emphasizes manual therapies for treatment of musculoskeletal complaints, although the scope of practice varies widely, and in some rural areas, chiropractors may serve a primary care role, due in part to the lack of other providers. According to the NHIS, approximately 8% of Americans receive chiropractic manipulation in a given year. Since the mid-1970s, chiropractors have been licensed in all 50 states and reimbursed by Medicare. Chiropractic educational standards mandate 2 years of undergraduate training, 4 years of training at an accredited school of chiropractic, and, in most states, successful completion of a standardized board examination. Postgraduate training is not required. The U.S. Department of Labor estimates that there are 52,000 licensed chiropractors (2010 figure). There is substantial geographic variation, with greater numbers of practitioners and greater use in the Midwest, particularly in rural areas, and lower use in the Southeast.
Historically, the relationship between the medical and chiropractic professions has been strained. Extending through the 1970s, the AMA set forth standards prohibiting physicians from consulting or entering into professional relationships with chiropractors, but in 1987, after a decade of complex litigation, the U.S. District Court found the AMA in violation of antitrust laws. An uneasy truce has followed, with continued physician skepticism, but also evidence for robust patient demand and satisfaction. The role of both osteopathic and chiropractic spinal manipulative therapies (SMTs) in back pain management has been the subject of a number of carefully performed trials and many systematic reviews. Conclusions are not consistent, but the most recent guidelines from the American College of Physicians and the American Pain Society conclude that SMT is associated with small to moderate benefit for low-back pain of less than 4 weeks in duration (evidence level B/C) and moderate benefit (evidence level B) for subacute or chronic low-back pain. The evidence of benefit for neck pain is not as extensive, and continued concern that cervical manipulation may occasionally precipitate vascular injury clouds a contentious debate.

Naturopathy Naturopathy is a discipline that emerged in central Europe in the nineteenth century as part of the Natural Cure movement and was introduced to the United States in the early twentieth century by Benjamin Lust. Fifteen states currently license naturopathic physicians, with considerable variation in the scope of practice. The naturopathic profession is actively seeking licensure in other states. There are estimated to be approximately 3000 licensed naturopathic physicians in the United States. There is also a robust naturopathy presence in Canada. Conventional and unconventional diagnostic tests and medications are prescribed, with an emphasis on relatively low doses of drugs, herbal medicines, healthy diet, and exercise.
While there is some support for the success of naturopathic practitioners in motivating healthy behaviors, concern exists about the heavy promotion of dietary supplements, most with little rigorous supporting evidence.

Homeopathy Homeopathy was widespread in the United States in the late nineteenth and early twentieth centuries and continues to be a common alternative practice in many European countries, but estimates from the NHIS suggest that less than 1.5% of Americans visit a homeopathic practitioner in any given year. In the United States, licensure as a homeopathic physician is possible in only three states (Arizona, Connecticut, and Nevada), where it is restricted to licensed physicians. The number of practitioners is uncertain, however, because some states include homeopathy within the scope of practice of other fields, including chiropractic and naturopathy, and some practitioners may self-identify as homeopathic practitioners. As discussed below, the regulatory framework for homeopathic remedies differs from that for dietary supplements. Homeopathic remedies are widely available and commonly recommended by naturopathic physicians, chiropractors, and other licensed and unlicensed practitioners.

Therapeutic Massage The field of therapeutic massage is growing rapidly, as use by the public is increasing. According to U.S. Department of Labor statistics, there are approximately 155,000 licensed massage therapists employed in the United States, and by 2020, this number is projected to grow by 20%. Forty-three states and the District of Columbia currently have laws regulating massage therapy; however, there is little consistency, and in some states, regulation is by town ordinance. States that do provide licensure for massage therapists typically require a minimum of 500 hours of training at an accredited institution and require that therapists meet specific continuing education requirements and carry malpractice insurance.
Massage training programs generally are approved by a state board, but some may also be accredited by an independent agency, such as the Commission on Massage Therapy Accreditation (COMTA). The development of regulatory standards for therapeutic massage has not yet caught up with the evolution of the field or the high demand. Many techniques used are also employed by physical therapists.

Acupuncture and Traditional Chinese Medicine A venerable component of traditional Chinese medicine, with a history of use that extends back at least 2000 years, acupuncture became better known in the United States in 1971, when New York Times reporter James Reston wrote about how doctors in China used needles to ease his pain after surgery. More than 3 million adults in the United States use acupuncture, according to NHIS data. In a number of European countries, acupuncture is performed primarily by physicians. In the United States, the training and licensure processes for physicians and nonphysicians differ. Currently, acupuncture is licensed in 42 states and the District of Columbia, with licensure standards and scope of practice varying from state to state. Licensure for nonphysicians generally requires 3 years of accredited training and the successful completion of a standardized examination. The main accrediting organization is the Accreditation Commission for Acupuncture and Oriental Medicine. Acupuncture is included in doctor of medicine (MD) and doctor of osteopathic medicine (DO) licensure in 31 states, with 11 states requiring additional training for physicians performing acupuncture.

Mind-body practices are a large and diverse group of techniques that are administered or taught to others by a trained practitioner or teacher. Examples include acupuncture, massage therapy, meditation, relaxation techniques, spinal manipulation, and yoga. These approaches are being used more frequently in mainstream health care facilities for both patients and health care providers.
Mind-body practices such as meditation and yoga are not licensed in any state, and training in those practices is not subject to national accreditation. Americans often turn to complementary approaches for help in managing health conditions associated with physical and psychological pain—especially back pain, headache, musculoskeletal complaints, and functional pain syndromes. Chronic pain management is often refractory to conventional medical approaches, and standard pharmacologic approaches have substantial drawbacks. Health care guidelines of the American Pain Society and other professional organizations recognize the value of certain complementary approaches as adjuncts to pharmacologic management. The evidence base for the effectiveness of these modalities is still relatively incomplete, but a few rigorous examples where there is promise of usefulness and safety include acupuncture for osteoarthritis pain; tai chi for fibromyalgia pain; and massage, yoga, and spinal manipulation for chronic back pain. In addition, new research is shedding light on the effects of meditation and acupuncture on central mechanisms of pain processing and perception and regulation of emotion and attention. Although many unanswered questions remain about these effects, findings are pointing to scientifically plausible mechanisms by which these modalities might yield benefit.

DIETARY SUPPLEMENTS

Regulation The Dietary Supplement Health and Education Act (DSHEA), passed in 1994, gives authority to the U.S. Food and Drug Administration (FDA) to regulate dietary supplements, but with expectations that differ in many respects from the regulation of drugs or food additives. Purveyors of dietary supplements cannot claim that they prevent or treat any disease. They can, however, claim that they maintain "normal structure and function" of body systems.
For example, a product cannot claim to treat arthritis, but it can claim to maintain "normal joint health." Homeopathic products predate FDA drug regulations and are sold with no requirement that they be proved effective. Although homeopathic products are widely believed to be safe because they are highly dilute, one product, a nasal spray called Zicam, was withdrawn from the market when it was found to produce anosmia, probably because of a significant zinc content. Homeopathic products, and indeed other complementary health products and practices, also convey the very significant risk that individuals will use them instead of effective conventional modalities. Regulation of advertising and marketing claims is the purview of the Federal Trade Commission (FTC). The FTC does take legal action against promoters or websites that advertise or sell dietary supplements with false or deceptive statements.

Inherent Toxicity Although the public may believe that "natural" equates with "safe," it is abundantly clear that natural products can be toxic. Misidentification of medicinal mushrooms has led to liver failure. Contamination of tryptophan supplements caused the eosinophilia-myalgia syndrome. Herbal products containing particular species of Aristolochia were associated with genitourinary malignancies and interstitial nephritis. In 2013, dietary supplements containing 1,3-dimethylamylamine (DMAA), often touted as a "natural" stimulant, led to cardiovascular problems, including heart attacks. Among the most controversial dietary supplements is Ephedra sinica, or ma huang, a product used in traditional Chinese medicine for short-term treatment of asthma and bronchial congestion. The scientific basis for these indications was revealed when ephedra was shown to contain the ephedrine alkaloids, especially ephedrine and pseudoephedrine. With the promulgation of the DSHEA regulations, supplements containing ephedra and herbs rich in caffeine sold widely in the U.S.
marketplace because of their claims to promote weight loss and enhance athletic performance. Reports of severe and fatal adverse events associated with use of ephedra-containing products led to an evidence-based review of the data surrounding them, and in 2004, the FDA banned their sale in the United States. Another major current concern with dietary supplements is adulteration with pharmacologically active compounds. Multi-ingredient products marketed for weight loss, body building, "sexual health," and athletic performance are of particular concern. Recent FDA recalls have involved contamination with steroids, diuretics, stimulants, and phosphodiesterase type 5 inhibitors.

Herb-Drug Interactions A number of herbal products have a potential impact on the metabolism of drugs. This effect was illustrated most compellingly with the demonstration in 2000 that consumption of St. John's wort interferes with the bioavailability of the HIV protease inhibitor indinavir. Later studies showed its similar interference with the metabolism of topoisomerase inhibitors such as irinotecan, with cyclosporine, and with many other drugs. The breadth of interference stems from the ability of hyperforin in St. John's wort to activate the pregnane X receptor, a promiscuous nuclear regulatory factor that promotes the expression of many hepatic oxidative, conjugative, and efflux enzymes involved in drug and food metabolism. Because of the large number of compounds that alter drug metabolism and the large number of agents some patients are taking, identification of all potential interactions can be a daunting task. Several useful Web resources are available as information sources (Table 14e-2).
Clearly, attention to this problem is particularly important with drugs with a narrow therapeutic index, such as anticoagulants, antiseizure medications, antibiotics, immunosuppressants, and cancer chemotherapeutic agents. Physicians regularly face difficult challenges in providing patients with advice and education about complementary practices. Of particular concern to all physicians are practices of uncertain safety and practices that raise inappropriate hopes. Cancer therapies, antiaging regimens, weight-loss programs, sexual function, and athletic performance are frequently targeted for excessive claims and irresponsible marketing. A number of Internet resources provide critical tools for patient education (Table 14e-3). Because many complementary health products and practices are used as self-care and because many patients research these approaches extensively on the Internet, directing patients to responsible websites can often be very helpful.

Table 14e-2. Web Resources on Natural Product–Drug Interactions
- http://www.medscape.com/druginfo/druginterchecker?cid=med : Maintained by WebMD; includes a free drug interaction checker tool that provides information on interactions between two or more drugs, herbals, and/or dietary supplements.
- http://naturaldatabase.therapeuticresearch.com : An interactive natural product–drug interaction checker tool that identifies interactions between drugs and natural products, including herbals and dietary supplements. Available by subscription; a PDA version is available.
- http://www.naturalstandard.com/tools/ : An interactive tool for checking drug and herb/supplement interactions. Available by subscription; a PDA version is available.
Abbreviation: PDA, personal digital assistant.

Table 14e-3. Web Resources for Patient Education
- The Cochrane Collaboration Complementary Medicine Reviews (http://www.cochrane.org/cochrane-reviews): Rigorous systematic reviews of mainstream and complementary health interventions using standardized methods; includes more than 300 reviews of complementary health practices. Complete reviews require institutional or individual subscription, but summaries are available to the public.
- MedlinePlus All Herbs and Supplements, A–Z List (http://www.nlm.nih.gov/medlineplus/druginfo/herb_All.html) and NLM FAQ: Dietary Supplements, Complementary or Alternative Medicines (http://www.nlm.nih.gov/medlineplus/dietarysupplements.html): National Library of Medicine (NLM) Web pages that provide an A–Z database of science-based information on herbal and dietary supplements; basic facts about complementary health practices; and federal government sources of information about using natural products, dietary supplements, medicinal plants, and other complementary health modalities.
- National Institutes of Health National Center for Complementary and Alternative Medicine (NCCAM) (http://www.nccam.nih.gov): Information for consumers and health care providers on many aspects of complementary health products and practices. Downloadable information sheets include short summaries of complementary health approaches, uses and risks of herbal therapies, and advice on wise use of dietary supplements. Resources for health care providers: http://www.nccam.nih.gov/health/providers; NCCAM Clinical Digest e-Newsletter: http://www.nccam.nih.gov/health/providers/digest; continuing medical education lectures: http://www.nccam.nih.gov/training/videolectures

The scientific evidence regarding complementary therapies is fragmentary and incomplete. Nonetheless, in some areas, particularly pain management, it is increasingly possible to perform the kind of rigorous systematic reviews of complementary health approaches that are the cornerstone of evidence-based medicine.
A particularly valuable resource in this respect is the Cochrane Collaboration, which has performed more than 300 systematic reviews of complementary health practices. Practitioners will find this a valuable source for answering patient questions. Practice guidelines, particularly for pain management, are also available from several professional organizations. Links to these resources are provided in Table 14e-3. The use of complementary and alternative health practices reflects an active interest in improved health. An array of unproven modalities will always be used by our patients. While some of these choices need to be actively discouraged, many are in fact innocuous and can be accommodated. Some may be genuinely helpful, particularly in the management of troublesome symptoms. The dialogue with patients about complementary health practices is an opportunity to understand patients' beliefs and expectations and to use those insights to help guide health-seeking practices in a constructive way. The late Dr. Stephen Straus contributed this chapter in prior editions, and some material from his chapter has been retained here.

Chapter 15e The Economics of Medical Care
Joseph P. Newhouse

The purpose of this chapter is to explain to physicians how economists think about physicians' decision-making with regard to the treatment of patients. Economists' mode of thinking has shaped health care policy and institutions and thus the environment in which physicians practice, not only in the United States but in many other countries as well. It may prove useful for physicians to understand some aspects of the economists' way of thinking, even if it sometimes seems foreign or uncongenial. Physicians see themselves as professionals and as healers, assisting their patients with their health care needs.
When economists are patients, they probably see physicians the same way, but when they view doctors through the lens of economics as a discipline, they see physicians—and their patients as well—as economic agents. In other words, economists are interested in the degree to which physicians and patients respond to various incentives in deciding how to deploy the resources over which they exercise choice. Examples of issues that would concern an economist include how much time physicians devote to seeing a patient; which tests they order; what drugs, if any, they prescribe; whether they recommend a procedure; whether they refer a patient; and whether they admit a patient to the hospital. In addition, patients consider the cost when they make a decision about whether to seek care. To say that economists view physicians and patients as economic agents is not to say that economists consider financial incentives the predominant factor in the decisions that either physicians or patients make about treatment; it is to say only that these incentives have some influence on these decisions. In fact, the role played by financial incentives in medical decision-making may often be dwarfed by the roles played by scientific knowledge, by professional norms and ethics, and by the influence of peers. However, economic policy greatly influences financial incentives, and economists tend to focus on this domain. Their interest stems from fundamental economic questions: What goods and services are produced and consumed? In particular, how much medical care is available, and how much of other goods and services? How is medical care produced? For example, what mix of specific services is used in treating a particular episode of illness? Who receives particular treatments? 
Physicians in all societies live and function in economic markets, although those markets differ greatly both from the simple competitive markets depicted in introductory economics textbooks and from country to country, depending on national institutions. Many of the differences between actual medical markets and textbook competitive markets cause what economists term market failure, a condition in which some individuals can be made better off without making anyone else worse off. This chapter explains two features of health care financing that cause market failure: selection and moral hazard. A common response to market failure in medical care is what economists refer to as administered prices, which this chapter also describes. Unfortunately, administered prices exact a cost, leading to what economists call regulatory or government failure. All societies seek a balance between market failure and regulatory failure, a topic addressed in this chapter's conclusion. In the idealized competitive market found in economic textbooks, buyers and sellers have the same knowledge about the goods or services they are buying and selling. When one party knows more—or when goods of different quality are being sold at the same price, which is analytically similar—markets can break down in the following sense: There may be a price at which an equally well informed buyer and seller could make a transaction that would make them both better off, but that transaction does not occur because one party knows more than the other. Hence, both the potential buyer and the potential seller are worse off. The used car market is a classic example of differential information. Owners of used cars (potential sellers) know more about the quality of their cars than do potential buyers. At any specific market price for a certain make and model of car, the only used cars offered will be those whose sellers value them at less than that price.
Assuming that quality varies among used cars, those that are offered for sale will differentially be of low quality (“lemons”) relative to the given price. Owners of higher-quality cars may simply hold on to them. If, however, a potential buyer knew that the car was of higher quality, the buyer might be willing to pay enough so that the owner of the higher-quality car would sell. It is for this reason that sellers may offer warranties and guarantees, although these are uncommon (but not unknown) in medical care. The same thing happens when goods of different quality are sold at the supermarket at the same price. Shoppers are happy to take any box of a particular brand of breakfast cereal or any bottle of a particular soft drink on the shelf because the quality of the contents of any box or bottle is the same; however, that is not the case in the produce section, where shoppers will inspect the fruit they pick up to ensure that the apple is not bruised or the banana overly ripe. At the end of the day, it is the bruised apples and overly ripe bananas that are left in the store. In effect, the seller has not used all the available information in pricing the produce, and buyers exploit that information differential. Selection affects markets for individual—and, to some degree, small-group—health insurance in a fashion similar to that seen in the used car market and at the produce stand, but in this case it is the buyer of insurance who has more knowledge than the seller. Individuals who use above-average amounts of care—for example, those with a chronic disease or a strong proclivity to seek care for a symptom—will value health insurance more than will those who are healthy or who for various reasons shun medical care even if they are symptomatic. However, the insurer does not necessarily know the risk of those it insures, and so it gears insurance premiums to an average risk, which in some instances is conditional on certain observable characteristics, such as age. 
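The unraveling logic of the used car example can be made concrete with a toy calculation. This is only an illustrative sketch: the uniform 0–100 quality range, the assumption that owners value a car at its quality q, and the buyer valuation of 1.5q are invented numbers, not figures from the chapter.

```python
# Toy "market for lemons" sketch (all numbers are illustrative assumptions).
# Owners value a car at its quality q, drawn uniformly from [0, 100].
# Buyers would pay 1.5 * q for a car of KNOWN quality, but can only
# observe the average quality of the cars actually offered at price p.
def lemons_price(p0, rounds=20):
    p = p0
    for _ in range(rounds):
        avg_quality_offered = p / 2    # only owners with q <= p sell, so E[q | offered] = p/2
        p = 1.5 * avg_quality_offered  # buyers bid 1.5 * expected quality = 0.75 * p
    return p

final_price = lemons_price(100.0)  # willingness to pay shrinks every round
```

Each round, the going price attracts only below-average cars, which lowers what buyers will pay, which drives out still more sellers; after a few rounds the price has collapsed toward zero, mirroring the breakdown described above.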
Just as shoppers do not want the bruised apples and used car buyers do not want lemons, many healthy people will not want to buy insurance voluntarily if its price mainly reflects its use by those who are sick. (Healthy but very risk-averse individuals still may be willing to pay premiums well above their expected use.) In an extreme case, healthy people drop out of the insurance pool, premiums rise because the average person left in the pool is sicker, that rise causes still more people to drop out of the pool, premiums rise further, and so forth, until few people remain to buy insurance. For this reason, no developed country relies primarily on voluntary individual insurance to finance health care, although many countries use it in the supplemental insurance market, and selection is, in fact, often a feature of that market. Instead, governments and/or employers provide or heavily subsidize the purchase of either mandated or voluntary health insurance (e.g., in Canada or Germany, the Medicare and Medicaid programs in the United States and the purchase of insurance in exchanges by lower-income persons) or provide health services directly (e.g., the United Kingdom and the U.S. Veterans Health Administration). In addition, governments or third parties administering individual insurance markets with competing insurers may “risk-adjust” payments to insurers; that is, transfer monies from insurers who enroll better risks (as measured by observable features, such as diagnoses that are not used to rate premiums) to insurers who enroll worse risks. This feature is found in the American Medicare Advantage program and American insurance exchanges as well as in Germany and the Netherlands. The idea is to reduce insurers’ incentives to structure their products in order to appeal to good risks, especially when insurers are making choices about networks and formularies. 
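The premium spiral described above can be illustrated with a small simulation. All of the numbers below (pool composition, expected costs, and the fixed "risk premium" each person will pay above his or her expected cost) are invented for illustration and are not drawn from the text.

```python
# Toy adverse-selection "death spiral" (all numbers are illustrative assumptions).
# Each person stays enrolled only while the premium is no more than his or her
# expected annual cost plus a fixed risk premium; the insurer then reprices
# to the average cost of whoever remains in the pool.
def death_spiral(costs, risk_premium=100.0, rounds=30):
    enrolled = list(costs)
    premium = sum(enrolled) / len(enrolled)      # initial premium: pool-average cost
    for _ in range(rounds):
        enrolled = [c for c in enrolled if c + risk_premium >= premium]
        if not enrolled:                         # everyone has dropped out
            return premium, 0
        premium = sum(enrolled) / len(enrolled)  # reprice to the (now sicker) pool
    return premium, len(enrolled)

# Hypothetical pool: 80 healthy, 15 moderate-cost, 5 high-cost enrollees
pool = [200.0] * 80 + [1000.0] * 15 + [20000.0] * 5
premium, remaining = death_spiral(pool)
```

With these numbers the healthy and moderate-cost members exit after the first repricing, leaving only the five sickest enrollees paying a premium equal to their own expected cost: the endpoint of the spiral, at which insurance no longer pools risk.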
Moreover, countries that rely on employment-based health insurance, such as the United States and Germany, either mandate taxes to finance that insurance or provide large tax subsidies for its purchase; otherwise, many healthy employees would prefer that the employer give them the money the employer uses to subsidize the insurance as cash wages. Because an employer that offers health insurance will pay lower cash wages than an otherwise equivalent employer that does not, larger American employers that, before the Affordable Care Act was implemented in 2014, were not required to offer insurance may not, in fact, have offered it if they had many low-wage employees; the reason is that, if they had offered insurance, the cash wage they could afford to pay would have been below the minimum wage. (For the same reason, these employers typically do not offer a pension benefit.) Many low-wage employers, however, are small businesses that might not be viable if they had to subsidize health insurance. As a result, the Affordable Care Act exempted firms with fewer than 50 employees from any penalties if their employees received a public subsidy and purchased insurance in the exchange. Some self-employed individuals or those who work at small firms may belong to a trade association or a professional society through which they can purchase insurance, but because that purchase is voluntary, it is subject to selection.

How does this situation affect the practice of medicine? Prior to the Affordable Care Act, individual and small-group insurance policies typically had preexisting condition clauses to protect the insurer against selection—that is, to protect the insurer against a person’s purchasing insurance (or more complete insurance) after that person had been diagnosed with a disease that is expensive to treat.
Even though there is now a penalty for remaining uninsured, some individuals still choose to do so, and others purchase insurance with substantial amounts of cost sharing that they may not be able to pay if they become sick. Caring for such patients may give the physician a choice between making do with less than clinically optimal treatment and proceeding in a clinically optimal way but leaving the patient with a large bill and possible bankruptcy—and potentially leaving the physician with bill collection issues or unpaid bills.

Selection can arise in a different guise when physicians are reimbursed a fixed amount per patient (i.e., capitation) rather than receiving fee-for-service payments. Depending on the adequacy of any adjustments in the capitated amount for the resources that a specific patient will require (“risk adjustment”), physicians who receive a fixed amount have a financial incentive to avoid caring for sicker patients. Similarly, physicians who receive a capitated amount for their own services but are not financially responsible for hospital care or the services of other physicians may make an excessive number of referrals, just as physicians reimbursed in a fee-for-service arrangement may make too few.

The term moral hazard comes from the actuarial literature; it originally referred to the weaker incentives of an insured individual to prevent the loss against which he or she is insured. A classic example is failure of homeowners in areas prone to brush fires to cut the brush around their houses or possibly install fire-resistant shingles on their roofs because of their expectation that insurance will compensate them if their houses burn down. In some lines of insurance, however, moral hazard is not a substantive issue. Persons who buy life insurance on their own lives are not likely to commit suicide so that their heirs can receive the proceeds.
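The capitation incentive described earlier in this section is, at bottom, simple arithmetic. A minimal sketch with invented dollar figures shows why an unadjusted capitated payment makes a predictably sick patient a guaranteed loss, and how a risk-adjusted schedule shrinks the incentive to avoid such patients:

```python
# Illustrative (made-up) numbers: expected annual cost of care for two
# patient types under capitation, with and without risk adjustment.
healthy_cost, sick_cost = 2000, 12000
flat_payment = 4000                      # one payment for everyone

def margin(payment, cost):
    """Physician's expected gain or loss on one capitated patient."""
    return payment - cost

# Without risk adjustment, the sick patient is a predictable loss:
print(margin(flat_payment, healthy_cost))   # +2000 per healthy patient
print(margin(flat_payment, sick_cost))      # -8000 per sick patient

# A risk-adjusted schedule pays more for observably sicker patients,
# shrinking the incentive to select against them:
adjusted = {"healthy": 2200, "sick": 11500}
print(margin(adjusted["healthy"], healthy_cost))  # +200
print(margin(adjusted["sick"], sick_cost))        # -500
```

The quality of the risk adjustment matters: the residual margins depend entirely on how well the observable adjusters predict each patient's actual cost.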
Moreover, despite the brush fire example, homeowner’s insurance probably has little moral hazard associated with it because individuals often cannot replace some valued personal items when a house burns down. In short, if moral hazard is negligible, insured persons take appropriate precautions against the potential loss. In the context of health insurance, this classic form of moral hazard refers to potentially reduced incentives to prevent illness, but that is probably not a major issue. Sickness and disease generally imply some pain and suffering, not to mention possibly shortened life expectancy. Because there is no insurance for pain and suffering, individuals have strong incentives to try to remain healthy regardless of how much health insurance they have. Put another way, having better health insurance probably does not alter those incentives much. Instead of weakened incentives to prevent illness, moral hazard in the health insurance context typically refers to the incentives for better-insured individuals to use more medical services. For instance, a patient with back pain or shoulder pain might seek an MRI if it costs him or her little or nothing, even if the physician feels the clinical value of the MRI is negligible. Conversely, the physician may be more cautious in ordering a test that seems likely to produce little information if there are severe financial consequences for the patient. Some of the strongest evidence on this point comes from the randomized RAND Health Insurance Experiment conducted in the late 1970s and early 1980s. Families whose members were under 65 years of age were randomized to insurance plans in which the amount they had to pay when using services (“cost sharing”) varied from nothing (fully insured care) to a large deductible (catastrophic insurance). All the plans capped families’ annual out-of-pocket payments, with a reduced cap for low-income families. 
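The experimental plan designs just described can be sketched as a single function: the patient pays a coinsurance share of charges until an annual out-of-pocket cap binds. The parameter values below are hypothetical and are not the actual RAND plan terms:

```python
def out_of_pocket(total_charges, coinsurance_rate, annual_cap):
    """Patient's share under a simple coinsurance plan with a yearly
    cap on out-of-pocket spending (parameters are hypothetical)."""
    return min(total_charges * coinsurance_rate, annual_cap)

# Free-care plan: no cost sharing at all.
print(out_of_pocket(8000, 0.00, 1000))   # 0.0
# 25% coinsurance: the cap binds (25% of 8000 would be 2000).
print(out_of_pocket(8000, 0.25, 1000))   # 1000
# 25% coinsurance on a smaller bill: the cap does not bind.
print(out_of_pocket(2000, 0.25, 1000))   # 500.0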
Families with complete insurance used ~40% more services in a year than did families with catastrophic insurance, a figure that did not vary much across the six geographically dispersed sites in which the experiment was run. Although these data come from the era before managed care in the United States, subsequent observational studies in this country and elsewhere have largely confirmed the experiment’s findings with respect to the relationship between variations in care use and variations in patient payment at the point of service. The difference among the plans was almost entirely related to the likelihood that a patient would seek care. Once care was sought, there appeared to be little difference in how physicians treated their patients in different plans. One might assume that the additional care provided to fully insured patients would have resulted in improved outcomes, but by and large it did not. In fact, there was little or no difference in average health outcomes among the different health plans, with the important exception that hypertension, especially in patients with low incomes, was better controlled when care was free. A possible explanation for the paucity of beneficial effects attributable to the additional medical services used by fully insured patients lies in the observations that (1) the additional care targeted both problems for which care can be efficacious and those for which it is not and (2) the population in the experiment, which consisted of non-elderly community-dwelling individuals, was mostly healthy. Perhaps the additional two visits and the greater number of hospitalizations when care was free were as likely to lead to poor outcomes as to good outcomes in that population. Certainly, the subsequent literature on quality of care and medical error rates has implied that a good deal of inappropriate care was—and is—provided to patients. 
For example, more than half of the antibiotics prescribed to the experiment’s participants were prescribed for viral conditions. Moreover, about one-quarter of patients who were hospitalized (in all plans) were admitted for procedures that could have been performed equally well outside the hospital, in line with the substantial decrease in hospital use over the last three decades. In short, the additional inappropriate care provided when care was free was not necessarily innocuous; if a mainly healthy person saw a physician, he or she could have been made worse off. The literature on inappropriate care is mostly American in origin, but the finding probably holds elsewhere as well. Finally, patients’ health habits did not change in response to insurance status. This finding is consistent with the intuition that moral hazard does not much affect incentives to prevent illness. Recently, another randomized experiment was conducted in Oregon among low-income, childless adults who were uninsured. Many people who gained insurance coverage in 2014 when the United States implemented the Affordable Care Act are from this group. Some of the uninsured childless adults won a lottery that made them eligible for Medicaid; those who did not win became the comparison group. After ~2 years, the results suggested that the use of services by persons on Medicaid had increased by ~25–35%. Medicaid served its purpose of providing protection against large medical bills; there was an 81% reduction in the proportion of families who spent >30% of their income on medical care, and there were large reductions in both medical debt and borrowing to pay for medical care. Turning to health outcomes, there was a 30% reduction in depression among the uninsured who received Medicaid relative to the comparison group as well as an increase in the numbers of diagnosed diabetics and of diabetics taking medication. 
Although there were no statistically significant changes in measures of blood pressure, lipids, or glycosylated hemoglobin, confidence intervals were sufficiently large that clinically meaningful effects could not be ruled out. In sum, insurance is certainly desirable to protect families against the financial risk of large medical expenses and in some instances to address underuse of valuable medical services (e.g., by a patient with cardiovascular disease who fails to take medications for financial reasons). Thus, the remedy for moral hazard is not to abolish insurance but instead to strike the right balance between financial protection and incentives to seek care. Moreover, it is probably useful to vary the amounts that patients pay out of pocket, depending on the specific service and the patient’s clinical condition. Health outcomes after myocardial infarction, for example, were better among patients who were randomized to have no copayments for statins, beta blockers, angiotensin-converting enzyme inhibitors, and angiotensin receptor blockers than among those who had to pay for these drugs. Because insurers, whether public or private, cannot pay any price that a physician sets, prices in medical markets with widespread insurance are either set administratively or negotiated. In the simple textbook model of a competitive market, prices approximate the cost of production, but this is not necessarily the case when prices are administered. In the traditional American Medicare program, for example, the government sets a take-it-or-leave-it price. Because of the market share represented by the program, the great majority of physicians choose to take the government’s price rather than leave the program. In some countries (e.g., Canada and Germany), medical societies negotiate fees for all physicians in the nation or in a subnational area. In the United States, commercial insurers negotiate fees with individual physicians or groups of physicians. 
The principal problem with administered prices arises because someone must set them and that person has an imperfect knowledge of cost. If the price that is set departs markedly from incremental cost, distortions inevitably result. Because the price-setter typically has little information about incremental cost, the set price could be, and often is, far from the actual cost. If the regulator sets the price below cost, the service may not be available or, if it is, will have to be cross-subsidized from a profitable service. If the price is set above cost, there may well be excess entry and too many services being offered on too small a scale. A beneficent regulator in theory could approximate a competitive price by trial and error if technology did not change, but that condition clearly does not hold in medical care. Not only do new goods and services appear continually, but physicians often become more skilled at delivering a service that is already available or at developing new tools to deliver that service at a different (and frequently lower) cost. For example, cataract surgery, which took upward of 8 h when first introduced, can now be completed in <30 min. The distortions between price and cost when prices are administered have consequences for the way medical care is produced. There may well be too much capacity in “profitable” areas of medicine, such as cardiac services and sports medicine, and too little in less profitable areas, such as primary care. A fee above incremental cost for a procedure encourages more procedures. Conversely, payment methods that attempt to cover many services with one fixed payment, such as capitation and a per-admission payment, pay nothing for doing more and therefore may result in too few services and in choices by providers to reduce the number of unprofitable patients under their care, much as a hospital may shutter an emergency room if it becomes a magnet for the uninsured. 
These phenomena also reflect the asymmetry of information between patients and physicians and, in the case of fee-for-service payment, the incentive for insured patients to go along with a recommendation for additional services (“I am pretty sure I know what the problem is, but let’s just carry out this additional test to be sure”). There is good evidence that physicians respond to prices that are set. For example, if there is a general reduction in fees that, other things being equal, would lower practice income, some physicians order more services, whereas the opposite pertains if all fees increase. This behavior is sufficiently well established empirically that the U.S. Medicare program’s actuaries account for it in their estimates of what various changes in fees will cost or save. The outcome is different if the fee for one procedure or service changes and that procedure accounts for a modest proportion of income. In that case, another service can be substituted. For example, if the fee for a mastectomy increases relative to that for breast-conserving surgery, there will be a higher proportion of mastectomies. A particularly striking example of this type of behavior occurred when Medicare sharply reduced the fees it paid oncologists for chemotherapy in 2005. The proportion of lung cancer patients who received chemotherapy rose by 10%. Margins for some chemotherapeutic agents, however, were cut more than those for other agents, and thereafter oncologists made less use of the agents whose margins had fallen more.

Negotiated prices may get closer to cost than administered prices that are set, but they are not a panacea. First, negotiated prices may well exceed cost when there is no effective competition among similar physicians in a particular market.
Because medical care markets are typically local, there may only be one group or a few groups in any particular specialty in a smaller market, in which case physicians will have considerable market power to obtain more favorable reimbursement. Further increasing physicians’ market power is the fact that many, and probably most, patients are reluctant to change physicians because their current physician knows their medical history, because they are uncertain whether a new physician would be an improvement, and because insurance may shield them from most of the cost differences among physicians. Finally, in the United States, commercial insurers often negotiate fees as a multiple of the Medicare fee schedule; thus, any distortion in administratively determined relative fees is carried over into negotiated fees. For example, in the Medicare fee schedule, procedures generally are more profitable than cognitive services known as “evaluation and management,” and this difference probably plays a role in the numerical insufficiency of primary care physicians in the United States.

One branch of economics—positive economics—seeks to explain actual phenomena without making a judgment about the desirability of those phenomena. Another branch—normative economics—seeks to prescribe what should happen and, in particular, what public policy should be developed to ensure that it happens. The main result of the application of normative economics is that, under certain very special assumptions, competitive markets result in a system in which no one can be made better off without another person’s being made worse off. These assumptions do not hold in medical care, in part because of selection and moral hazard; economists term the result a market failure.
By contrast, as the discussion of administered prices in this chapter indicates, even a beneficent regulator will introduce distortions from lack of sufficient information; moreover, there is no guarantee that a regulator will be beneficent, as periodic corruption scandals show. Economists term this phenomenon regulatory or government failure. Economists see decisions about the proper form and amount of public intervention and regulation in medical care as a matter of finding the right balance between various types of market failures and various types of regulatory failures—a balance that different societies may choose to strike differently.

Chapter 16e Racial and Ethnic Disparities in Health Care
Joseph R. Betancourt, Alexander R. Green

Over the course of its history, the United States has experienced dramatic improvements in overall health and life expectancy, largely as a result of initiatives in public health, health promotion, disease prevention, and chronic care management. Our ability to prevent, detect, and treat diseases in their early stages has allowed us to target and reduce rates of morbidity and mortality. Despite interventions that have improved the overall health of the majority of Americans, racial and ethnic minorities (blacks, Hispanics/Latinos, Native Americans/Alaskan Natives, Asian/Pacific Islanders) have benefited less from these advances than whites and have suffered poorer health outcomes from many major diseases, including cardiovascular disease, cancer, and diabetes. Research has revealed that minorities may receive less care and lower-quality care than whites, even when confounders such as stage of presentation, comorbidities, and health insurance are controlled.
These differences in quality are called racial and ethnic disparities in health care. Addressing these disparities has taken on greater importance with the significant transformation of the U.S. health care system and the implementation of health care reform and value-based purchasing. The shift toward creating financial incentives and disincentives to achieve quality goals makes focusing on those who receive lower-quality care more important than ever before. This chapter will provide an overview of racial and ethnic disparities in health and health care, identify root causes, and provide key recommendations to address these disparities at both the clinical and health system levels.

Minority Americans have poorer health outcomes than whites from preventable and treatable conditions such as cardiovascular disease, diabetes, asthma, cancer, and HIV/AIDS (Fig. 16e-1). Multiple factors contribute to these racial and ethnic disparities in health. First and foremost, social determinants—such as lower socioeconomic status (SES; e.g., lower income, less wealth, and lower educational attainment), inadequate and unsafe housing, and racism—are strongly linked to poor health outcomes. These factors disproportionately impact minority populations. In fact, SES has consistently been found to be one of the strongest predictors of health outcomes. While the mechanisms are complex (i.e., does poverty cause poor health, or does poor health cause poverty?), it is clear that low-SES populations experience disparities in health and that low SES is a major factor in racial/ethnic disparities.

Racial/ethnic disparities are documented globally, although their assessment has centered more on the comparison of individuals by SES in other countries than in the United States. Similar to the U.S. pattern, low-SES residents of other nations tend to have poorer health outcomes. It is noteworthy that results are mixed when the health status of nations is compared by SES.
High-SES nations such as the United States do not necessarily have health outcomes that correlate with their high expenditures for health care. For example, as of 2011, the United States ranks 34th in the world—just behind Cuba—on basic public health measures such as infant mortality. This ranking may be due in part to the correlation between wealth distribution and SES rather than just absolute SES. This area of active research is outside the scope of this chapter.

Racism has more recently been shown to predict poorer health outcomes. The physiologic impact of the stress imposed by racism (and poverty), including increased cortisol levels, can lead to chronic adverse effects on health. Lack of access to care also takes a significant toll. Uninsured individuals are less likely to have a regular source of care and are more likely to delay seeking care and to go without needed care; this limited access results in avoidable hospitalizations, emergency hospital care, and adverse health outcomes.

In addition to racial and ethnic disparities in health, there are racial and ethnic disparities in the quality of care for persons with access to the health care system. For instance, disparities have been found in the treatment of pneumonia (Fig. 16e-2) and congestive heart failure, with blacks receiving less optimal care than whites when hospitalized for these conditions. Moreover, blacks with end-stage renal disease are referred less often to the transplant list than are their white counterparts (Fig. 16e-3).
Disparities have been found, for example, in the use of cardiac diagnostic and therapeutic procedures (with blacks being referred less often than whites for cardiac catheterization and bypass grafting), prescription of analgesia for pain control (with blacks and Latinos receiving less pain medication than whites for long-bone fractures and cancer), and surgical treatment of lung cancer (with blacks receiving less curative surgery than whites for non-small-cell lung cancer). Again, many of these disparities have occurred even when variations in factors such as insurance status, income, age, comorbid conditions, and symptom expression are taken into account. However, one additional factor—disparities in the quality of care provided at the sites where minorities tend to receive care—has been shown to be an important contributor to overall disparities.

FIGURE 16e-1 Age-adjusted death rates for selected causes by race and Hispanic origin, 2005. (From the U.S. Census Bureau, 2009.)

FIGURE 16e-2 Recommended hospital care received by Medicare patients with pneumonia, by race/ethnicity, 2006. The reference population consisted of Medicare beneficiaries with pneumonia who were hospitalized. The composite was calculated by averaging the percentage of the population that received each of the five incorporated components of care. (Adapted from Agency for Healthcare Research and Quality: The 2008 National Health Care Disparities Report.)

Little progress has been made in addressing racial/ethnic disparities in cardiovascular procedures and other advanced surgical procedures, whereas some progress has been made in eliminating disparities in primary-care process measures. Data from the National Registry of Myocardial Infarction found evidence of continued disparities in guideline-based admission, procedural, and discharge therapy use from 1994 to 2006.
Black patients were less likely than white patients to receive percutaneous coronary intervention/coronary artery bypass grafting (PCI/CABG), a disparity that has improved little since 1994. Further, compared with whites, black patients were less likely to receive lipid-lowering medications at discharge, with a gap that has widened since 1998 (Fig. 16e-4). A 2009 study showed that blacks had worse post–myocardial infarction outcomes than whites but that the difference could be explained by site of care and patient factors (such as socioeconomic status and comorbid conditions).

FIGURE 16e-3 Referral for evaluation at a transplantation center or placement on a waiting list/receipt of a renal transplant within 18 months after the start of dialysis among patients who wanted a transplant, according to race and sex. The reference population consisted of 239 black women, 280 white women, 271 black men, and 271 white men. Racial differences were statistically significant among both the women and the men (p < .0001 for each comparison). (From JZ Ayanian et al: N Engl J Med 341:1661, 1999.)

The Centers for Disease Control and Prevention (CDC) analyzed national and state rates of total knee replacement (TKR) for Medicare enrollees for the period 2000–2006, with patients stratified by sex, age, and black or white race. TKR rates overall in the United States increased 58%, with similar increases among whites (61%) and blacks (56%). However, the TKR rate for blacks was 37% lower than the rate for whites in 2000 and 39% lower in 2006; i.e., the disparity not only did not improve but even worsened slightly (Fig. 16e-5). More recent data (up to 2010) show no apparent change in these figures.
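The “37% lower” and “39% lower” comparisons are relative gaps between group rates. The arithmetic can be made explicit; the TKR rates below are hypothetical values chosen only to reproduce the reported relative gaps, not the actual CDC figures:

```python
def relative_gap(rate_minority, rate_reference):
    """Fractional shortfall of one group's rate relative to another's."""
    return 1 - rate_minority / rate_reference

# Hypothetical TKR rates per 100,000 enrollees, invented solely to
# illustrate the arithmetic behind "37% lower" and "39% lower":
print(round(relative_gap(315, 500), 2))   # 0.37
print(round(relative_gap(488, 800), 2))   # 0.39
```

Note that both groups' absolute rates can rise substantially while the relative gap stays flat or widens, which is exactly the pattern the CDC analysis reported.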
Data for enrollees in Medicare managed-care plans provide evidence of a narrowing of racial disparities between 1997 and 2003 in several “report card” preventive care measures, such as mammography and glucose and cholesterol testing. However, racial disparities in more complex measures, such as glucose control in diabetic patients and cholesterol levels in patients after a heart attack, actually worsened during that interval. The 2012 National Healthcare Disparities Report, released by the Agency for Healthcare Research and Quality, found little improvement in disparities for core measures of quality over the previous decade. In fact, for blacks, Asians, Native Americans/Alaskan Natives, Hispanics, and poor people, the vast majority of core quality measures (87–92%) stayed the same, and a small proportion (2–8%) got worse, including measures of effectiveness, patient safety, and timeliness of care. Although a small number of these measures improved, disparities were not eliminated in any measured area. This annual report is particularly important, given that most studies of disparities have not been repeated with the same methodology used to document possible trends.

Some studies have tracked disparities using specific disease and treatment registries. For example, by 2008, the use of acute and discharge medications for myocardial infarction had largely been equalized among racial and ethnic groups; however, African-American and Hispanic patients still experienced longer delays before reperfusion, with door-to-balloon times of <90 min for 83% of white patients as opposed to 75% and 76% of black and Hispanic patients, respectively.

The Institute of Medicine (IOM) report Unequal Treatment, released in March 2002, remains the preeminent study of racial and ethnic disparities in health care in the United States.
The IOM was charged with assessing the extent of racial/ethnic differences in health care that are not otherwise attributable to known factors, such as access to care. To provide recommendations regarding interventions aimed at eliminating health care disparities, the IOM studied health system, provider, and patient factors. The study found the following:

• Racial and ethnic disparities in health care exist and, because they are associated with worse health outcomes, are unacceptable.
• Racial and ethnic disparities in health care occur in the context of (1) broader historic and contemporary social and economic inequality and (2) evidence of persistent racial and ethnic discrimination in many sectors of American life.
• Many sources—including health systems, health care providers, patients, and utilization managers—may contribute to racial and ethnic disparities in health care.
• Bias, stereotyping, prejudice, and clinical uncertainty on the part of health care providers may contribute to racial and ethnic disparities in health care.
• A small number of studies suggest that minority patients may be more likely to refuse treatments, yet these refusal rates are generally small and do not fully explain health care disparities.

Unequal Treatment went on to identify a set of root causes that included the following:

• Health system factors: These include issues related to the complexity of the health care system, the difficulty that minority patients may have in navigating this complex system, and the lack of availability of interpreter services to assist patients with limited English proficiency. In addition, health care systems are generally ill prepared to identify and address disparities.
• Provider-level factors: These include issues related to the health care provider, including stereotyping, the impact of race/ethnicity on clinical decision-making, and clinical uncertainty due to poor communication.
FIGURE 16e-4 Racial differences in guideline-based treatments for acute myocardial infarction (AMI). Ratios below 1 indicate that whites were more likely to receive the treatment. PCI/CABG: 1994–1996, 0.53 (0.51, 0.54); 1997–1999, 0.52 (0.50, 0.53); 2000–2002, 0.55 (0.53, 0.56); 2003–2006, 0.54 (0.52, 0.56). Lipid-lowering medication at discharge: 1998–1999, 0.92 (0.89, 0.96); 2000–2002, 0.84 (0.82, 0.86); 2003–2006, 0.76 (0.75, 0.81). The reference population consisted of 2,515,106 patients with AMI admitted to U.S. hospitals between July 1990 and December 2006. CABG, coronary artery bypass grafting; PCI, percutaneous coronary intervention. (From ED Peterson et al: Am Heart J 156:1045, 2008.)

• Patient-level factors: These include patients’ mistrust of the health care system leading to refusal of services, poor adherence to treatment, and delay in seeking care.

A more detailed analysis of these root causes is presented below.

Health System Factors • HEALTH SYSTEM COMPLEXITY

Even among persons who are insured and educated and who have a high degree of health literacy, navigating the U.S. health care system can be complicated and confusing. Some individuals may be at higher risk for receiving substandard care because of their difficulty navigating the system’s complexities. These individuals may include those from cultures unfamiliar with the Western model of health care delivery, those with limited English proficiency, those with low health literacy, and those who are mistrustful of the health care system. These individuals may have difficulty knowing how and where to go for a referral to a specialist; how to prepare for a procedure such as a colonoscopy; or how to follow up on an abnormal test result such as a mammogram. Since people of color in the United States tend to be overrepresented among the groups listed above, the inherent complexity of navigating the health care system has been seen as a root cause for racial/ethnic disparities in health care.
OTHER HEALTH SYSTEM FACTORS

Racial/ethnic disparities are due not only to differences in care provided within hospitals but also to where and from whom minorities receive their care; i.e., certain specific providers, geographic regions, or hospitals are lower-performing on certain aspects of quality. For example, one study showed that 25% of hospitals cared for 90% of black Medicare patients in the United States and that these hospitals tended to have lower performance scores on certain quality measures than other hospitals. That said, health systems generally are not well prepared to measure, report, and intervene to reduce disparities in care. Few hospitals or health plans stratify their quality data by race/ethnicity or language to measure disparities, and even fewer use data of this type to develop disparity-targeted interventions. Similarly, despite regulations concerning the need for professional interpreters, research demonstrates that many health care organizations and providers fail to routinely provide this service for patients with limited English proficiency. Despite the link between limited English proficiency and health-care quality and safety, few providers or institutions monitor performance for patients in these areas.

FIGURE 16e-5 Racial trends in age-adjusted total knee replacement in Medicare enrollees from 2000 to 2006. The reference population consisted of Medicare part A enrollees ≥65 years of age who were not members of a managed-care plan. (From the Centers for Disease Control and Prevention, 2009.)

Provider-Level Factors • PROVIDER–PATIENT COMMUNICATION

Significant evidence highlights the impact of sociocultural factors, race, ethnicity, and limited English proficiency on health and clinical care. Health care professionals frequently care for diverse populations with varied perspectives, values, beliefs, and behaviors regarding health and wellbeing.
The differences include variations in the recognition of symptoms, thresholds for seeking care, comprehension of management strategies, expectations of care (including preferences for or against diagnostic and therapeutic procedures), and adherence to preventive measures and medications. In addition, sociocultural differences between patient and provider influence communication and clinical decision-making and are especially pertinent: evidence clearly links provider–patient communication to improved patient satisfaction, regimen adherence, and better health outcomes (Fig. 16e-6). Thus, when sociocultural differences between patient and provider are not appreciated, explored, understood, or communicated effectively during the medical encounter, patient dissatisfaction, poor adherence, poorer health outcomes, and racial/ethnic disparities in care may result. A survey of 6722 Americans ≥18 years of age is particularly relevant to this important link between provider–patient communication and health outcomes. Whites, blacks, Hispanics/Latinos, and Asian Americans who had made a medical visit in the past 2 years were asked whether they had trouble understanding their doctors; whether they felt the doctors did not listen; and whether they had medical questions they were afraid to ask. The survey found that 19% of all patients experienced one or more of these problems, yet whites experienced them 16% of the time as opposed to 23% of the time for blacks, 33% for Hispanics/Latinos, and 27% for Asian Americans (Fig. 16e-7). How do we link communication to outcomes?
FIGURE 16e-6 The link between effective communication and patient satisfaction, adherence, and health outcomes. (From the Institute of Medicine: Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Washington, DC, National Academy Press, 2002.) 
In addition, in the setting of even a minimal language barrier, provider–patient communication without an interpreter is recognized as a major challenge to effective health care delivery. These communication barriers for patients with limited English proficiency lead to frequent misunderstanding of diagnosis, treatment, and follow-up plans; inappropriate use of medications; lack of informed consent for surgical procedures; high rates of serious adverse events; and a lower-quality health care experience than is provided to patients who speak fluent English. Physicians who have access to trained interpreters report a significantly higher quality of patient–physician communication than physicians who use other methods. Communication issues related to discordant language disproportionately affect minorities and likely contribute to racial/ethnic disparities in health care.
FIGURE 16e-7 Communication difficulties with physicians, by race/ethnicity. Problems include trouble understanding the doctor, feeling that the doctor did not listen, and having medical questions but not asking them. The reference population consisted of 6722 Americans ≥18 years of age who had made a medical visit in the previous 2 years and were asked whether they had had trouble understanding their doctors, whether they felt that the doctors had not listened, and whether they had had medical questions they were afraid to ask. (From the Commonwealth Fund Health Care Quality Survey, 2001.)
CLINICAL DECISION-MAKING Theory and research suggest that variations in clinical decision-making may contribute to racial and ethnic disparities in health care. Two factors are central to this process: clinical uncertainty and stereotyping. First, a doctor’s decision-making process is nested in clinical uncertainty. Doctors depend on inferences about severity based on what they understand about illness and the information obtained from the patient. A doctor caring for a patient whose symptoms he or she has difficulty understanding and whose “signals”—the set of clues and indications that physicians rely on to make clinical decisions—are hard to read may make a decision different from the one that would be made for another patient who presents with exactly the same clinical condition. Given that the expression of symptoms may differ among cultural and racial groups, doctors—the overwhelming majority of whom are white—may understand symptoms best when expressed by patients of their own racial/ethnic groups. The consequence is that white patients may be treated differently from minority patients. Differences in clinical decisions can arise from this mechanism even when the doctor has the same regard for each patient (i.e., is not prejudiced). Second, the literature on social cognitive theory highlights how natural tendencies to stereotype may influence clinical decision-making. Stereotyping can be defined as the way in which people use social categories (e.g., race, gender, age) in acquiring, processing, and recalling information about others. Faced with enormous information loads and the need to make many decisions, people often subconsciously simplify the decision-making process and lessen cognitive effort by using “categories” or “stereotypes” that bundle information into groups or types that can be processed more quickly. Although functional, stereotyping can be systematically biased, as people are automatically classified into social categories based on dimensions such as race, gender, and age. Many people may not be aware of their attitudes, may not consciously endorse specific stereotypes, and paradoxically may consider themselves egalitarian and not prejudiced. Stereotypes may be strongly influenced by the messages presented consciously and unconsciously in society. 
For instance, if the media and our social/professional contacts tend to present images of minorities as being less educated, more violent, and nonadherent to health care recommendations, these impressions may generate stereotypes that unnaturally and unjustly impact clinical decision-making. As signs of racism, classism, gender bias, and ageism are experienced (consciously or unconsciously) in our society, stereotypes may be created that impact the way doctors manage patients from these groups. On the basis of training or practice location, doctors may develop certain perceptions about race/ethnicity, culture, and class that may evolve into stereotypes. For example, many medical students and residents are often trained—and minorities cared for—in academic health centers or public hospitals located in socioeconomically disadvantaged areas. As a result, doctors may begin to equate certain races and ethnicities with specific health beliefs and behaviors (e.g., “these patients” engage in risky behaviors, “those patients” tend to be noncompliant) that are more associated with the social environment (e.g., poverty) than with a patient’s racial/ethnic background or cultural traditions. This “conditioning” phenomenon may also be operative if doctors are faced with certain racial/ethnic patient groups who frequently do not choose aggressive forms of diagnostic or therapeutic intervention. The result over time may be that doctors begin to believe that “these patients” do not like invasive procedures; thus they may not offer these procedures as options. A wide range of studies have documented the potential for provider biases to contribute to racial/ethnic disparities in health care. For example, one study measured physicians’ unconscious (or implicit) biases and showed that these were related to differences in decisions to provide thrombolysis for a hypothetical black or white patient with a myocardial infarction. 
It is important to differentiate stereotyping from prejudice and discrimination. Prejudice is a conscious prejudgment of individuals that may lead to disparate treatment, and discrimination is conscious and intentional disparate treatment. All individuals stereotype subconsciously, yet, if left unquestioned, these subconscious assumptions may lead to lower-quality care for certain groups because of differences in clinical decision-making or differences in communication and patient-centeredness. For example, one study tested physicians’ unconscious racial/ethnic biases and showed that patients perceived more biased physicians as being less patient-centered in their communication. What is particularly salient is that stereotypes tend to be activated most in environments where the individual is stressed, multitasking, and under time pressure—the hallmarks of the clinical encounter. Patient-Level Factors Lack of trust has become a major concern for many health care institutions today. For example, an IOM report, To Err Is Human: Building a Safer Health System, documented alarming rates of medical errors that made patients feel vulnerable and less trustful of the U.S. health care system. The increased media and academic attention to problems related to quality of care (and of disparities themselves) has clearly diminished trust in doctors and nurses. Trust is a crucial element in the therapeutic alliance between patient and health care provider. It facilitates open communication and is directly correlated with adherence to the physician’s recommendations and the patient’s satisfaction. In other words, patients who mistrust their health care providers are less satisfied with the care they receive, and mistrust of the health care system greatly affects patients’ use of services. Mistrust can also result in inconsistent care, “doctor-shopping,” self-medication, and an increased demand by patients for referrals and diagnostic tests. 
On the basis of historic factors such as discrimination, segregation, and medical experimentation, blacks may be especially mistrustful of providers. The exploitation of blacks by the U.S. Public Health Service during the Tuskegee syphilis study from 1932 to 1972 left a legacy of mistrust that persists even today among this population. Other populations, including Native Americans/Alaskan Natives, Hispanics/Latinos, and Asian Americans, also harbor significant mistrust of the health care system. A national Kaiser Family Foundation survey of 3884 individuals found that 36% of Hispanics and 35% of blacks (compared with 15% of whites) felt they had been treated unfairly in the health care system in the past because of their race/ethnicity. Perhaps even more alarming, 65% of blacks and 58% of Hispanics (compared with 22% of whites) were afraid of being treated unfairly in the future on that basis (Fig. 16e-8). This mistrust may contribute to wariness in accepting or following recommendations, undergoing invasive procedures, or participating in clinical research, and these choices, in turn, may lead to misunderstanding and the perpetuation of stereotypes among health professionals.
FIGURE 16e-8 Patient perspectives regarding unfair treatment (Tx) based on race/ethnicity. The reference population consisted of 3884 individuals surveyed about how fairly they had been treated in the health care system in the past and how fairly they felt they would be treated in the future on the basis of their race/ethnicity. (From Race, Ethnicity & Medical Care: A Survey of Public Perceptions and Experiences. Kaiser Family Foundation, 2005.)
The publication Unequal Treatment provides a series of recommendations to address racial and ethnic disparities in health care, focusing on a broad set of stakeholders. These recommendations include health system interventions, provider interventions, patient interventions, and general recommendations, which are described in more detail below. Health System Interventions • COLLECTION AND REPORTING OF DATA ON HEALTH CARE ACCESS AND USE, BY PATIENTS’ RACE/ETHNICITY Unequal Treatment found that the appropriate systems to track and monitor racial and ethnic disparities in health care are lacking and that less is known about the disparities affecting minority groups other than African Americans (Hispanics, Asian Americans, Pacific Islanders, Native Americans, and Alaskan Natives). For instance, only in the mid-1980s did the Medicare database begin to collect data on patient groups outside the standard categories of “white,” “black,” and “other.” Federal, private, and state-supported data-collection efforts are scattered and unsystematic, and many health care systems and hospitals still do not collect data on the race, ethnicity, or primary language of enrollees or patients. A survey by Regenstein and Sickler in 2006 found that 78% of 501 U.S. hospitals collected information on race, 50% collected data on ethnicity, and 50% collected data on primary language. However, the information was not collected by standard categories or collection methods and thus was of questionable accuracy. Surveys by America’s Health Insurance Plans in 2003 and 2006 showed that the proportion of enrollees in plans that collected race/ethnicity data of some type increased from 54% to 67%; however, the total percentage of plan enrollees whose race/ethnicity and language are recorded is still much lower than these figures. ENCOURAGEMENT OF THE USE OF EVIDENCE-BASED GUIDELINES AND QUALITY IMPROVEMENT Unequal Treatment highlights the subjectivity of clinical decision-making as a potential cause of racial and ethnic disparities in health care by describing how clinicians—despite the existence of well-delineated practice guidelines—may offer (consciously or unconsciously) different diagnostic and therapeutic options to different patients on the basis of their race or ethnicity. 
Therefore, the widespread adoption and implementation of evidence-based guidelines is a key recommendation for eliminating disparities. For instance, evidence-based guidelines are now available for the management of diabetes, HIV/AIDS, cardiovascular diseases, cancer screening and management, and asthma—all areas where significant disparities exist. As part of ongoing quality-improvement efforts, particular attention should be paid to the implementation of evidence-based guidelines for all patients, regardless of their race and ethnicity. SUPPORT FOR THE USE OF LANGUAGE INTERPRETATION SERVICES IN THE CLINICAL SETTING As described previously, a lack of efficient and effective interpreter services in a health care system can lead to patient dissatisfaction, to poor comprehension and adherence, and thus to ineffective/lower-quality care for patients with limited English proficiency. Unequal Treatment’s recommendation to support the use of interpretation services has clear implications for delivery of quality health care by improving doctors’ ability to communicate effectively with these patients. INCREASES IN THE PROPORTION OF UNDERREPRESENTED MINORITIES IN THE HEALTH CARE WORKFORCE Data for 2004 from the Association of American Medical Colleges indicate that, of the 72.4% of U.S. physicians whose race and ethnicity are known, Hispanics make up 2.8%, blacks 3.3%, and Native Americans and Alaskan Natives 0.3%. Furthermore, U.S. national data show that minorities (excluding Asians) compose just 7.5% of medical school faculty. In addition, minority faculty in 2007 were more likely to be at or below the rank of assistant professor, while whites composed the highest proportion of full professors. Despite representing ∼26% of the U.S. population (a number projected to almost double by 2050), minority students are still underrepresented in medical schools. In 2007, matriculants to U.S. 
medical schools were 7.2% Latino, 6.4% African American, 0.2% Native Hawaiian or Other Pacific Islander, and 0.3% Native American or Alaskan Native. These percentages have decreased or remained the same since 2007. It will be difficult to develop a diverse health-care workforce that can meet the needs of an increasingly diverse population without dramatic changes in the racial and ethnic composition of medical student bodies. Provider Interventions • INTEGRATION OF CROSS-CULTURAL EDUCATION INTO THE TRAINING OF ALL HEALTH CARE PROFESSIONALS The goal of cross-cultural education is to improve providers’ ability to understand, communicate with, and care for patients from diverse backgrounds. Such education focuses on enhancing awareness of sociocultural influences on health beliefs and behaviors and on building skills to facilitate understanding and management of these factors in the medical encounter. Cross-cultural education includes curricula on health care disparities, use of interpreters, and effective communication and negotiation across cultures. These curricula can be incorporated into health-professions training in medical schools, residency programs, and nursing schools and can be offered as a component of continuing education. Despite the importance of this area of education and the attention it has attracted from medical education accreditation bodies, a national survey of senior resident physicians by Weissman and colleagues found that up to 28% felt unprepared to deal with cross-cultural issues, including caring for patients who have religious beliefs that may affect treatment, patients who use complementary medicine, patients who have health beliefs at odds with Western medicine, patients who mistrust the health care system, and new immigrants. In a study at one medical school, 70% of fourth-year students felt inadequately prepared to care for patients with limited English proficiency. 
Efforts to incorporate cross-cultural education into medical education will contribute to improving communication and to providing a better quality of care for all patients. INCORPORATION OF TEACHING ON THE IMPACT OF RACE, ETHNICITY, AND CULTURE ON CLINICAL DECISION-MAKING Unequal Treatment and more recent studies found that stereotyping by health care providers can lead to disparate treatment based on a patient’s race or ethnicity. The Liaison Committee on Medical Education, which accredits medical schools, issued a directive that medical education should include instruction on how a patient’s race, ethnicity, and culture might unconsciously impact communication and clinical decision-making. Patient Interventions Difficulty navigating the health care system and obtaining access to care can be a hindrance to all populations, particularly to minorities. Similarly, lack of empowerment or involvement in the medical encounter by minorities can be a barrier to care. Patients need to be educated on how to navigate the health care system and how best to access care. Interventions should be used to increase patients’ participation in treatment decisions. general Recommendations • INCREASE AWARENESS OF RACIAL/ETHNIC DISPARITIES IN HEALTH CARE Efforts to raise awareness of racial/ethnic health care disparities have done little for the general public but have been fairly successful among physicians, according to a Kaiser Family Foundation report. In 2006, nearly 6 in 10 people surveyed believed that blacks received the same quality of care as whites, and 5 in 10 believed that Latinos received the same quality of care as whites. These estimates are similar to findings in a 1999 survey. Despite this lack of awareness, most people believed that all Americans deserve quality care, regardless of their background. In contrast, the level of awareness among physicians has risen sharply. 
In 2002, the majority (69%) of physicians said that the health care system “rarely or never” treated people unfairly on the basis of their racial/ethnic background. In 2005, less than one-quarter (24%) of physicians disagreed with the statement that “minority patients generally receive lower-quality care than white patients.” Increasing awareness of racial and ethnic health disparities among health care professionals and the public is an important first step in addressing these disparities. The ultimate goals are to generate discourse and to mobilize action to address disparities at multiple levels, including health policy makers, health systems, and the community. CONDUCT FURTHER RESEARCH TO IDENTIFY SOURCES OF DISPARITIES AND PROMISING INTERVENTIONS While the literature that formed the basis for the findings reported and recommendations made in Unequal Treatment provided significant evidence for racial and ethnic disparities, additional research is needed in several areas. First, most of the literature on disparities focuses on black-versus-white differences; much less is known about the experiences of other minority groups. Improving the ability to collect racial and ethnic patient data should facilitate this process. However, in instances where the necessary systems are not yet in place, racial and ethnic patient data may be collected prospectively in the setting of clinical or health services research to more fully elucidate disparities for other populations. Second, much of the literature on disparities to date has focused on defining areas in which these disparities exist, but less has been done to identify the multiple factors that contribute to the disparities or to test interventions to address these factors. There is clearly a need for research that identifies promising practices and solutions to disparities. Individual health care providers can do several things in the clinical encounter to address racial and ethnic disparities in health care. 
Be Aware that Disparities Exist Increasing awareness of racial and ethnic disparities among health care professionals is an important first step in addressing disparities in health care. Only with greater awareness can care providers be attuned to their behavior in clinical practice and thus monitor that behavior and ensure that all patients receive the highest quality of care, regardless of race, ethnicity, or culture. Practice Culturally Competent Care Previous efforts have been made to teach clinicians about the attitudes, values, beliefs, and behaviors of certain cultural groups—the key practice “dos and don’ts” in caring for “the Hispanic patient” or the “Asian patient,” for example. In certain situations, learning about a particular local community or cultural group, with a goal of following the principles of community-oriented primary care, can be helpful; when broadly and uncritically applied, however, this approach can actually lead to stereotyping and oversimplification of culture, without respect for its complexity. Cultural competence has thus evolved from merely learning information and making assumptions about patients on the basis of their backgrounds to focusing on the development of skills that follow the principles of patient-centered care. Patient-centeredness encompasses the qualities of compassion, empathy, and responsiveness to the needs, values, and expressed preferences of the individual patient. Cultural competence aims to take things a step further by expanding the repertoire of knowledge and skills classically defined as “patient-centered” to include those that are especially useful in cross-cultural interactions (and that, in fact, are vital in all clinical encounters). 
This repertoire includes effectively using interpreter services, eliciting the patient’s understanding of his or her condition, assessing decision-making preferences and the role of family, determining the patient’s views about biomedicine versus complementary and alternative medicine, recognizing sexual and gender issues, and building trust. For example, while it is important to understand all patients’ beliefs about health, it may be particularly crucial to understand the health beliefs of patients who come from a different culture or have a different health care experience. With the individual patient as teacher, the physician can adjust his or her practice style to meet the patient’s specific needs. Avoid Stereotyping Several strategies can allow health care providers to counteract, both systemically and individually, the normal tendency to stereotype. For example, when racially/ethnically/culturally/socially diverse teams in which each member is given equal power are assembled and are tasked to achieve a common goal, a sense of camaraderie develops and prevents the development of stereotypes based on race/ ethnicity, gender, culture, or class. Thus, health care providers should aim to gain experiences working with and learning from a diverse set of colleagues. In addition, simply being aware of the operation of social cognitive factors allows providers to actively check up on or monitor their behavior. Physicians can constantly reevaluate to ensure that they are offering the same things, in the same ways, to all patients. Understanding one’s own susceptibility to stereotyping—and how disparities may result—is essential in providing equitable, high-quality care to all patients. Work to Build Trust Patients’ mistrust of the health care system and of health care providers impacts multiple facets of the medical encounter, with effects ranging from decreased patient satisfaction to delayed care. 
Although the historic legacy of discrimination can never be erased, several steps can be taken to build trust with patients and to address disparities. First, providers must be aware that mistrust exists and is more prevalent among minority populations, given the history of discrimination in the United States and other countries. Second, providers must reassure patients that they come first, that everything possible will be done to ensure that they always get the best care available, and that their caregivers will serve as their advocates. Third, interpersonal skills and communication techniques that demonstrate honesty, openness, compassion, and respect on the part of the health care provider are essential tools in dismantling mistrust. Finally, patients indicate that trust is built when there is shared, participatory decision-making and the provider makes a concerted effort to understand the patient’s background. When the doctor–patient relationship is reframed as one of solidarity, the patient’s sense of vulnerability can be transformed into one of trust. The successful elimination of disparities requires trust-building interventions and strengthening of this relationship. The issue of racial and ethnic disparities in health care has gained national prominence, both with the release of the IOM report Unequal Treatment and with more recent articles that have confirmed their persistence and explored their root causes. Furthermore, another influential IOM report, Crossing the Quality Chasm, has highlighted the importance of equity—i.e., no variations in quality of care due to personal characteristics, including race and ethnicity—as a central principle of quality. Current efforts in health care reform and transformation, including a greater focus on value (high-quality care and cost control), will sharpen the nation’s focus on the care of populations who experience low-quality, costly care. 
Addressing disparities will become a major focus, and there will be many obvious opportunities for interventions to eliminate them. Greater attention to addressing the root causes of disparities will improve the care provided to all patients, not just those who belong to racial and ethnic minorities. The authors thank Marina Cervantes for her contributions to this chapter.
Chapter 17e Ethical Issues in Clinical Medicine
Bernard Lo, Christine Grady
Twenty-first-century physicians face novel ethical dilemmas that can be perplexing and emotionally draining. For example, electronic medical records, handheld personal devices, and provision of care by interdisciplinary teams all hold the promise of more coordinated and comprehensive care but also raise new concerns about confidentiality, appropriate boundaries of the doctor–patient relationship, and responsibility. Chapter 1 puts the practice of medicine into a professional and historical context. The current chapter presents approaches and principles that physicians can use to address the ethical issues they encounter in their work. Physicians make ethical judgments about clinical situations every day. Traditional professional codes and ethical principles provide instructive guidance for physicians but need to be interpreted and applied to each situation. Physicians need to be prepared for lifelong learning about ethical issues and dilemmas as well as about new scientific and clinical developments. When struggling with difficult ethical issues, physicians may need to reevaluate their basic convictions, tolerate uncertainty, and maintain their integrity while respecting the opinions of others. Discussing perplexing ethical issues with other members of the health care team, ethics consultation services, or the hospital ethics committee can clarify issues and reveal strategies for resolution, including improving communication and dealing with strong or conflicting emotions. 
Several approaches may be useful for resolving ethical issues. Among these approaches are those based on ethical principles, virtue ethics, professional oaths, and personal values. These various sources of guidance encompass precepts that may conflict in a particular case, leaving the physician in a quandary. In a diverse society, different individuals may turn to different sources of moral guidance. In addition, general moral precepts often need to be interpreted and applied in the context of a particular clinical situation. When facing an ethical challenge, physicians should articulate their concerns and reasoning, discuss and listen to the views of others involved in the case, and call on available resources as needed. Through these efforts, physicians can gain deeper insight into the ethical issues they face and often can reach mutually acceptable resolutions to complex problems. Ethical principles can serve as general guidelines to help physicians determine the right thing to do. Respecting Patients Physicians should always treat patients with respect, which entails understanding patients’ goals, communicating effectively, obtaining informed and voluntary consent for interventions, respecting informed refusals, and protecting confidentiality. Different clinical goals and approaches are often feasible, and interventions can cause both benefit and harm. Individuals place different values on health and medical care and weigh the benefits and risks of medical interventions differently. Generally, the values and informed choices of patients should be respected. OBTAINING INFORMED CONSENT To help patients make informed decisions, physicians should discuss with them the nature of the proposed care; the alternatives; and the risks, benefits, and likely consequences of each option. Informed consent involves more than obtaining signatures on consent forms. 
Physicians should promote shared decision-making by educating patients, answering their questions, making recommendations, and helping them deliberate. Patients can be overwhelmed by medical jargon, needlessly complicated explanations, or the provision of too much information at once. Patients can make informed decisions only if they receive honest and understandable information. Competent, informed patients may refuse recommended interventions and choose among reasonable alternatives. If patients cannot give consent in an emergency and if delay of treatment while surrogates are contacted will place their lives or health in peril, treatment can be given without informed consent. People are presumed to want such emergency care unless they have previously indicated otherwise. Respect for patients does not entitle the patients to insist on any care they want. Physicians are not obligated to provide interventions that have no physiologic rationale, that have already failed, or that are contrary to evidence-based practice recommendations, good clinical judgment, or public policies. National policies and laws also dictate certain decisions—e.g., allocating cadaveric organs for transplantation and, in most states, prohibiting physician-assisted suicide. Physicians should disclose and discuss relevant and accurate information about diagnosis, prognosis, and treatment options. To help patients cope with bad news, doctors can adjust the pace of disclosure, offer empathy and hope, provide emotional support, and call on other resources such as spiritual care or social work. Physicians may be tempted to withhold a serious diagnosis, misrepresent it by using ambiguous terms, or limit discussions of prognosis or risks for fear that certain information will make patients anxious or depressed. Providing honest information about clinical situations preserves patients’ autonomy and trust and promotes sound communication with patients and colleagues. 
However, patients may choose not to receive such information, asking surrogates to make decisions on their behalf, as is common with serious diagnoses in some traditional cultures. AVOIDING DECEPTION Health care providers sometimes consider using lies or deception to obtain benefits for patients. Lying refers to statements known to be false and intended to mislead the listener. Deception includes statements and actions intended to mislead the listener, whether or not they are literally true. For example, a physician might sign a disability form for a patient who does not meet disability criteria. Although motivated by a desire to help the patient, such deception is ethically problematic because it undermines physicians’ credibility and trustworthiness. MAINTAINING CONFIDENTIALITY Maintaining confidentiality is essential to respecting patients’ autonomy and privacy; it also encourages patients to seek treatment and to discuss problems candidly, and it prevents discrimination. However, confidentiality may be overridden to prevent serious harm to third parties or to the patient. Exceptions to confidentiality are justified if the risk is serious and probable, if there are no less restrictive measures by which to avert the risk, if the adverse effects of overriding confidentiality are minimized, and if these adverse effects are deemed acceptable by society. For example, the law requires physicians to report cases of tuberculosis, sexually transmitted infection, elder or child abuse, and domestic violence. CARING FOR PATIENTS WHO LACK DECISION-MAKING CAPACITY Some patients are not able to make informed decisions because of unconsciousness, dementia, delirium, or other medical conditions. Although only the courts have the legal authority to determine that a patient is incompetent to make medical decisions, in practice, physicians determine when patients lack the capacity to make health care decisions and arrange for surrogates to make decisions for them, without involving the courts. 
Patients with decision-making capacity can express a choice and appreciate the medical situation; the nature of the proposed care; the alternatives; and the risks, benefits, and consequences of each alternative. Their choices should be consistent with their values and not the result of delusions or hallucinations. Psychiatrists may help assess decision-making capacity in difficult cases. When impairments are fluctuating or reversible, decisions should be postponed if possible until the patient recovers decision-making capacity. If a patient lacks decision-making capacity, physicians should ask: Who is the appropriate surrogate, and what would the patient want done? Patients may designate someone to serve as their health care proxy or to assume durable power of attorney for health care; such choices should be respected. (See Chap. 10 for further details about advance care planning.) Unless a patient without decision-making capacity has previously designated a health care proxy, physicians usually ask family members to serve as surrogates. Many patients want family members as surrogates, and family members generally have the patient’s best interests at heart. Statutes in most U.S. states delineate a prioritized list of relatives who may serve as surrogates if the patient has not designated a proxy. Surrogates’ decisions should be guided by the patient’s values, goals, and previously expressed preferences. However, it may be appropriate to override previous preferences in favor of the patient’s current best interests if an intervention is highly likely to provide a significant benefit, if previous statements do not fit the situation well, or if the patient expressed a desire for the surrogate to have leeway in making decisions. ACTING IN THE BEST INTERESTS OF PATIENTS Respect for patients is broader than respecting their autonomy to make informed choices about their medical care and promoting shared decision-making. 
Physicians should also be compassionate and dedicated and should act in the best interests of their patients. The principle of beneficence requires physicians to act for the patient’s benefit. Patients typically lack medical expertise and may be vulnerable because of their illness. They rely on physicians to provide sound recommendations and to promote their well-being. Physicians encourage such trust and have a fiduciary duty to act in the best interests of the patient, which should prevail over the physicians’ own self-interest or the interests of third parties, such as hospitals or insurers. Physicians’ fiduciary obligations contrast sharply with business relationships, which are characterized by “let the buyer beware,” not by reliance and trust. A related principle, “first do no harm,” forbids physicians to provide ineffective interventions or to act without due care. Although often cited, this precept alone provides only limited guidance because many beneficial interventions pose serious risks. Physicians should prevent unnecessary harm by recommending interventions that maximize benefit and minimize harm. MANAGING CONFLICTS BETWEEN RESPECTING PATIENTS AND ACTING IN THEIR BEST INTERESTS Conflicts can arise when patients’ refusal of interventions thwarts their own goals for care or causes them serious harm. For example, if a young woman with asthma refuses mechanical ventilation for reversible respiratory failure, simple acceptance of this decision by the physician, in the name of respecting autonomy, is morally constricted. Physicians should elicit patients’ expectations and concerns, correct their misunderstandings, and try to persuade them to accept beneficial therapies. If disagreements persist after such efforts, patients’ informed choices and views of their own best interests should prevail. 
While refusing recommended care does not render a patient incompetent, it may lead the physician to probe further to ensure that the patient has the capacity to make informed decisions. Acting Justly The principle of justice provides guidance to physicians about how to ethically treat patients and to make decisions about allocating important resources, including their own time. Justice in a general sense means fairness: people should receive what they deserve. In addition, it is important to act consistently in cases that are similar in ethically relevant ways. Otherwise, decisions may be arbitrary, biased, and unfair. Justice forbids discrimination in health care based on race, religion, gender, sexual orientation, or other personal characteristics (Chap. 16e). Justice also requires that limited health care resources be allocated fairly. Universal access to medically needed health care remains an unrealized moral aspiration in the United States and much of the rest of the world. Patients without health insurance often cannot afford health care and lack access to safety-net services. Even among insured patients, insurers may deny coverage for interventions recommended by the physician. In this situation, physicians should advocate for patients and try to help them obtain needed care. Doctors might consider—or patients might request—the use of deception to obtain such benefits. However, avoiding deception is a basic ethical guideline that sets limits on advocating for patients. Allocation of health care resources is unavoidable because these resources are limited. Ideally, decisions about allocation are made at the level of public policy, with physician input. For example, the United Network for Organ Sharing (www.unos.org) provides criteria for allocating scarce organs. Ad hoc resource allocation at the bedside is problematic because it may be inconsistent, unfair, and ineffective. Physicians have an important role, however, in avoiding unnecessary interventions. 
Evidence-based lists of tests and procedures that physicians and patients should question and discuss were developed through the recent initiative Choosing Wisely (http://www.choosingwisely.org/). At the bedside, physicians should act as patient advocates within constraints set by society, reasonable insurance coverage, and evidence-based practice. For example, if a patient’s insurer has a higher copayment for nonformulary drugs, it still may be reasonable for physicians to advocate for nonformulary products for good reasons (e.g., when the formulary drugs are ineffective or not tolerated). Virtue ethics focuses on physicians’ character and qualities, with the expectation that doctors will cultivate such virtues as compassion, dedication, altruism, humility, and integrity. Proponents argue that, if such characteristics become ingrained, they help guide physicians in novel situations. Moreover, merely following ethical precepts or principles without these virtues leads to uncaring doctor–patient relationships. Professional oaths and codes are useful guides for physicians. Most physicians take oaths at student white-coat ceremonies and at medical school graduation, and many are members of professional societies that have professional codes. Members of the profession pledge to the public and to their patients that they will be guided by the principles and values in these oaths or codes. Oaths and codes focus physicians on ethical ideals rather than on daily pragmatic concerns. However, professional oaths and codes—even the Hippocratic tradition—have been criticized for lack of patient or public input and the limited role given to patients in making decisions. Personal values, cultural traditions, and religious beliefs are important sources of personal morality that help physicians address ethical issues and cope with the moral distress they may experience in practice. While essential, personal morality is a limited ethical guide in clinical practice. 
Physicians have role-specific ethical obligations that go beyond their obligations as good people, including the duties to obtain informed consent and maintain confidentiality discussed earlier in this chapter. Furthermore, in a culturally and religiously diverse world, patients and colleagues have personal moral beliefs that commonly differ from their physicians’. Claims of Conscience Some physicians have conscientious objections to providing or referring patients for certain treatments, such as contraception. While physicians should not be asked to violate deeply held moral beliefs or religious convictions, patients need to receive medically appropriate, timely care. Institutions such as clinics and hospitals have a collective duty to provide care that patients need while making reasonable attempts to accommodate health care workers’ conscientious objections—for example, by arranging for another professional to provide the service in question. Patients seeking a relationship with a doctor or health care institution should be notified in advance of any conscientious objections to the provision of specific interventions. Since patients commonly must select providers for insurance purposes, switching providers when a specific service is needed would be burdensome. There are important limits on claims of conscience. Health care workers may not insist that patients receive unwanted medical interventions and may not refuse to treat patients because of their race, ethnicity, national origin, gender, or religion. Such discrimination is illegal and violates the physician’s duty to respect patients. Moral Distress Physicians and other health care providers may experience moral distress when they feel they know the ethically correct action to take in a particular situation but are constrained by institutional policies, limited resources, or a position subordinate to the ultimate decision-maker. 
Moral distress can lead to anger, anxiety, frustration, fatigue, and work dissatisfaction. Discussing complex clinical situations with colleagues and seeking assistance with difficult decisions help to alleviate moral distress, as does a healthy work environment characterized by open communication and mutual respect. Recent changes in the organization and delivery of health care have led to new ethical challenges for physicians. The Accreditation Council for Graduate Medical Education requires medical students and residents to observe work-hour limitations, which are intended to help prevent physician burnout, reduce mistakes, and create a better balance between work and private life. Beyond the continuing controversy over their effectiveness, work-hour limitations raise ethical concerns of their own. One concern is that physicians may develop a shift-worker mentality that undermines their dedication to the well-being of patients. Forced handoffs to colleagues may actually increase the risk of errors, and inflexibility can be detrimental. In some cases, trainees could provide an irreplaceable benefit to a patient or family by going beyond work-hour limits, especially if there is rapport with the patient or family that is not easily transferred to another provider. 
For example, a resident may want to discuss decisions about life-sustaining interventions or to comfort a family member over a patient’s death (Chap. 10). Thus strict adherence to work-hour limits is not always consistent with the ideal of acting for the good of the patient and with compassion. Exceptions to work-hour limits, however, should remain exceptions and should not be allowed to undercut work-hour policies. Physicians’ roles are changing as care is increasingly provided by multidisciplinary teams. The traditional hierarchy in which the physician is the “captain of the ship” may be inappropriate, particularly in areas such as prevention, disease management and its coordination, and patient education. Physicians should respect team members and acknowledge the expertise of those from other disciplinary backgrounds. Team-based care promises to provide more comprehensive and higher-quality care. However, regular communication and planning are critical to avoid diffusion of responsibility and to ensure that someone is accountable for the completion of patient-care tasks. The increasing use of evidence-based practice guidelines and benchmarking of performance raises the overall quality of care. However, practice guideline recommendations may be inappropriate for an individual patient, while another option may provide substantially greater benefits. In such situations, physicians’ duty to act in the patient’s best interests should take priority over benefits to society as a whole. Physicians need to understand practice guidelines, to recognize situations in which exceptions might be reasonable, and to be prepared to justify an exception. With the growing importance of and interest in global health, many physicians and trainees provide clinical care in other countries for short periods. Typically, physicians gain valuable experience while providing service to patients in need. 
Such arrangements, however, can raise ethical challenges—for example, because of differences in beliefs about health and illness, expectations regarding health care and the physician’s role, standards of clinical practice, and norms for disclosure of serious diagnoses. Additional dilemmas arise if visiting physicians take on responsibilities beyond their level of training or if donated drugs and equipment are not appropriate to local needs. Visiting physicians and trainees should exercise due diligence in obtaining needed information about the cultural and clinical practices in the host community, should work closely with local professionals and team members, and should be explicit about their skills, knowledge, and limits. In addition, these arrangements can pose risks. The visiting physician may face personal risk from infectious disease or motor vehicle accident. The host institution incurs administrative and supervisory costs. Advance preparation for these possibilities minimizes harm, distress, and misunderstanding. Increasingly, physicians use social and electronic media to share information with patients and other providers. Social networking may be especially useful in reaching young or otherwise hard-to-access patients. However, the use of social media, including blogs, social networks, and websites, raises ethical challenges and can have harmful consequences if not approached prudently. Injudicious use of social media can pose risks to patient confidentiality, expose patients to intimate details of physicians’ personal lives, cross professional boundaries, and jeopardize therapeutic relationships. 
Posts may be considered unprofessional and lead to adverse consequences for a provider’s reputation, safety, or even employment, especially if they express frustration or anger over work incidents, disparage patients or colleagues, use offensive or discriminatory language, reveal highly personal information, or picture a physician intoxicated, using illegal drugs, or in sexually suggestive poses. Physicians should remember that, in the absence of highly restrictive privacy settings, postings on the Internet in general and on social networking sites in particular are usually permanent and may be accessible to the public, their employers, and their patients. Physicians should separate professional from personal websites, social networking accounts, and blogs and should follow guidelines developed by institutions and professional societies on using social media to communicate with patients. Acting in patients’ best interests may conflict with the physician’s self-interest or the interests of third parties such as insurers or hospitals. From an ethical viewpoint, patients’ interests should be paramount. Even the appearance of a conflict of interest may undermine trust in the profession. Health care providers may be offered financial incentives to improve the quality or efficiency of care. Such pay-for-performance incentives, however, could lead physicians to avoid sicker patients with more complicated cases or to focus on benchmarked outcomes even when such a focus is not in the best interests of an individual patient. In contrast, fee-for-service payments offer physicians incentives to order more interventions than may be necessary or to refer patients to laboratory or imaging facilities in which they have a financial stake. Regardless of financial incentives, physicians should recommend available care that is in the patient’s best interests—no more and no less. Financial relationships between physicians and industry are increasingly scrutinized. 
Gifts from drug and device companies may create an inappropriate risk of undue influence, induce subconscious feelings of reciprocity, impair public trust, and increase the cost of health care. Many academic medical centers have banned drug-company gifts of pens, notepads, and meals to physicians. Under the new Physician Payment Sunshine Act, companies must disclose publicly the names of physicians to whom they have made payments or transferred material goods and the amount of those payments or transfers. The challenge will be to distinguish payments for scientific consulting and research contracts—which are consistent with professional and academic missions and should be encouraged—from those for promotional speaking and consulting whose goal is to increase sales of company products. Some health care workers, fearing fatal occupational infections, have refused to care for certain patients, such as those with HIV infection or severe acute respiratory syndrome (SARS). Such fears about personal safety need to be acknowledged. Health care institutions should reduce occupational risk by providing proper training, protective equipment, and supervision. To fulfill their mission of helping patients, physicians should provide appropriate care within their clinical expertise, despite some personal risk. Errors are inevitable in clinical medicine, and some errors cause serious adverse events that harm patients. Most errors are caused by lapses of attention or flaws in the system of delivering health care; only a few result from blameworthy individual behavior (Chaps. 3 and 12e). Physicians and students may fear that disclosing errors will damage their careers. However, patients appreciate being told when an error occurs, receiving an apology, and being informed about efforts to prevent similar errors in the future. 
Physicians and health care institutions show respect for patients by disclosing errors, offering appropriate compensation for harm done, and using errors as opportunities to improve the quality of care. Overall, patient safety is more likely to be improved through a quality improvement approach to errors rather than a punitive one except in cases of gross incompetence, physician impairment, boundary violations, or repeated violations of standard procedures. Physicians’ interest in learning, which fosters the long-term goal of benefiting future patients, may conflict with the short-term goal of providing optimal care to current patients. When trainees learn to carry out procedures on patients, they lack the proficiency of experienced physicians, and patients may experience inconvenience, discomfort, longer procedures, or even increased risk. Although patients’ consent for trainee participation in their care is always important, it is particularly important for intimate examinations, such as pelvic, rectal, breast, and testicular examinations, and for invasive procedures. To ensure patients’ cooperation, some care providers introduce students as physicians or do not tell patients that trainees will be performing procedures. Such misrepresentation undermines trust, may lead to more elaborate deception, and makes it difficult for patients to make informed choices about their care. Patients should be told who is providing care and how trainees are supervised. Most patients, when informed, allow trainees to play an active role in their care. Physicians may hesitate to intervene when colleagues impaired by alcohol abuse, drug abuse, or psychiatric or medical illness place patients at risk. However, society relies on physicians to regulate themselves. If colleagues of an impaired physician do not take steps to protect patients, no one else may be in a position to do so. 
Clinical research is essential to translate scientific discoveries into beneficial tests and therapies for patients. However, clinical research raises ethical concerns because participants face inconvenience and risks in research that is designed not specifically to benefit them but rather to advance scientific knowledge. Ethical guidelines for researchers require them to obtain informed and voluntary consent from participants and approval from an institutional review board, which determines that risks to participants are acceptable and have been minimized and recommends appropriate additional protections when research includes vulnerable participants. Physicians may be involved as clinical research investigators or may be in a position to refer or recommend clinical trial participation to their patients. Physicians should be critical consumers of clinical research results and keep up with advances that change standards of practice. Courses and guidance on the ethics of clinical research are widely available. 
Pain: Pathophysiology and Management 
James P. Rathmell, Howard L. Fields 
FIGURE 18-1 Components of a typical cutaneous nerve. There are two distinct functional categories of axons: primary afferents with cell bodies in the dorsal root ganglion, and sympathetic postganglionic fibers with cell bodies in the sympathetic ganglion. Primary afferents include those with large-diameter myelinated (Aβ), small-diameter myelinated (Aδ), and unmyelinated (C) axons. All sympathetic postganglionic fibers are unmyelinated. 
The province of medicine is to preserve and restore health and to relieve suffering. Understanding pain is essential to both of these goals. Because pain is universally understood as a signal of disease, it is the most common symptom that brings a patient to a physician’s attention. The function of the pain sensory system is to protect the body and maintain homeostasis. 
It does this by detecting, localizing, and identifying potential or actual tissue-damaging processes. Because different diseases produce characteristic patterns of tissue damage, the quality, time course, and location of a patient’s pain lend important diagnostic clues. It is the physician’s responsibility to provide rapid and effective pain relief. Pain is an unpleasant sensation localized to a part of the body. It is often described in terms of a penetrating or tissue-destructive process (e.g., stabbing, burning, twisting, tearing, squeezing) and/or of a bodily or emotional reaction (e.g., terrifying, nauseating, sickening). Furthermore, any pain of moderate or higher intensity is accompanied by anxiety and the urge to escape or terminate the feeling. These properties illustrate the duality of pain: it is both sensation and emotion. When it is acute, pain is characteristically associated with behavioral arousal and a stress response consisting of increased blood pressure, heart rate, pupil diameter, and plasma cortisol levels. In addition, local muscle contraction (e.g., limb flexion, abdominal wall rigidity) is often present. PERIPHERAL MECHANISMS The Primary Afferent Nociceptor A peripheral nerve consists of the axons of three different types of neurons: primary sensory afferents, motor neurons, and sympathetic postganglionic neurons (Fig. 18-1). The cell bodies of primary sensory afferents are located in the dorsal root ganglia within the vertebral foramina. The primary afferent axon has two branches: one projects centrally into the spinal cord and the other projects peripherally to innervate tissues. Primary afferents are classified by their diameter, degree of myelination, and conduction velocity. The largest diameter afferent fibers, A-beta (Aβ), respond maximally to light touch and/or moving stimuli; they are present primarily in nerves that innervate the skin. In normal individuals, the activity of these fibers does not produce pain. 
There are two other classes of primary afferent nerve fibers: the small diameter myelinated A-delta (Aδ) and the unmyelinated (C) axons (Fig. 18-1). These fibers are present in nerves to the skin and to deep somatic and visceral structures. Some tissues, such as the cornea, are innervated only by Aδ and C fiber afferents. Most Aδ and C fiber afferents respond maximally only to intense (painful) stimuli and produce the subjective experience of pain when they are electrically stimulated; this defines them as primary afferent nociceptors (pain receptors). The ability to detect painful stimuli is completely abolished when conduction in Aδ and C fiber axons is blocked. Individual primary afferent nociceptors can respond to several different types of noxious stimuli. For example, most nociceptors respond to heat; intense cold; intense mechanical distortion, such as a pinch; changes in pH, particularly an acidic environment; and application of chemical irritants including adenosine triphosphate (ATP), serotonin, bradykinin, and histamine. Sensitization When intense, repeated, or prolonged stimuli are applied to damaged or inflamed tissues, the threshold for activating primary afferent nociceptors is lowered, and the frequency of firing is higher for all stimulus intensities. Inflammatory mediators such as bradykinin, nerve-growth factor, some prostaglandins, and leukotrienes contribute to this process, which is called sensitization. Sensitization occurs at the level of the peripheral nerve terminal (peripheral sensitization) as well as at the level of the dorsal horn of the spinal cord (central sensitization). Peripheral sensitization occurs in damaged or inflamed tissues, when inflammatory mediators activate intracellular signal transduction in nociceptors, prompting an increase in the production, transport, and membrane insertion of chemically gated and voltage-gated ion channels. 
These changes increase the excitability of nociceptor terminals and lower their threshold for activation by mechanical, thermal, and chemical stimuli. Central sensitization occurs when activity, generated by nociceptors during inflammation, enhances the excitability of nerve cells in the dorsal horn of the spinal cord. Following injury and resultant sensitization, normally innocuous stimuli can produce pain (termed allodynia). Sensitization is a clinically important process that contributes to tenderness, soreness, and hyperalgesia (increased pain intensity in response to the same noxious stimulus; e.g., moderate pressure causes severe pain). A striking example of sensitization is sunburned skin, in which severe pain can be produced by a gentle slap on the back or a warm shower. Sensitization is of particular importance for pain and tenderness in deep tissues. Viscera are normally relatively insensitive to noxious mechanical and thermal stimuli, although hollow viscera do generate significant discomfort when distended. In contrast, when affected by a disease process with an inflammatory component, deep structures such as joints or hollow viscera characteristically become exquisitely sensitive to mechanical stimulation. A large proportion of Aδ and C fiber afferents innervating viscera are completely insensitive in normal noninjured, noninflamed tissue. That is, they cannot be activated by known mechanical or thermal stimuli and are not spontaneously active. However, in the presence of inflammatory mediators, these afferents become sensitive to mechanical stimuli. Such afferents have been termed silent nociceptors, and their characteristic properties may explain how, under pathologic conditions, the relatively insensitive deep structures can become the source of severe and debilitating pain and tenderness. Low pH, prostaglandins, leukotrienes, and other inflammatory mediators such as bradykinin play a significant role in sensitization. 
Nociceptor-Induced Inflammation Primary afferent nociceptors also have a neuroeffector function. Most nociceptors contain polypeptide mediators that are released from their peripheral terminals when they are activated (Fig. 18-2). An example is substance P, an 11-amino-acid peptide. Substance P is released from primary afferent nociceptors and has multiple biologic activities. It is a potent vasodilator, degranulates mast cells, is a chemoattractant for leukocytes, and increases the production and release of inflammatory mediators. Interestingly, depletion of substance P from joints reduces the severity of experimental arthritis. Primary afferent nociceptors are not simply passive messengers of threats of tissue injury but also play an active role in tissue protection through these neuroeffector functions. CENTRAL MECHANISMS The Spinal Cord and Referred Pain The axons of primary afferent nociceptors enter the spinal cord via the dorsal root. They terminate in the dorsal horn of the spinal gray matter (Fig. 18-3). The terminals of primary afferent axons contact spinal neurons that transmit the pain signal to brain sites involved in pain perception. When primary afferents are activated by noxious stimuli, they release neurotransmitters from their terminals that excite the spinal cord neurons. The major neurotransmitter released is glutamate, which rapidly excites dorsal horn neurons. Primary afferent nociceptor terminals also release peptides, including substance P and calcitonin gene-related peptide, which produce a slower and longer-lasting excitation of the dorsal horn neurons. The axon of each primary afferent contacts many spinal neurons, and each spinal neuron receives convergent inputs from many primary afferents. The convergence of sensory inputs to a single spinal pain-transmission neuron is of great importance because it underlies the phenomenon of referred pain. 
All spinal neurons that receive input from the viscera and deep musculoskeletal structures also receive input from the skin. The convergence patterns are determined by the spinal segment of the dorsal root ganglion that supplies the afferent innervation of a structure. For example, the afferents that supply the central diaphragm are derived from the third and fourth cervical dorsal root ganglia. Primary afferents with cell bodies in these same ganglia supply the skin of the shoulder and lower neck. Thus, sensory inputs from both the shoulder skin and the central diaphragm converge on pain-transmission neurons in the third and fourth cervical spinal segments. Because of this convergence and the fact that the spinal neurons are most often activated by inputs from the skin, activity evoked in spinal neurons by input from deep structures is mislocalized by the patient to a place that roughly corresponds with the region of skin innervated by the same spinal segment. Thus, inflammation near the central diaphragm is often reported as shoulder discomfort. This spatial displacement of pain sensation from the site of the injury that produces it is known as referred pain.

PART 2 Cardinal Manifestations and Presentation of Diseases

Figure 18-2 Events leading to activation, sensitization, and spread of sensitization of primary afferent nociceptor terminals. A. Direct activation by intense pressure and consequent cell damage. Cell damage induces lower pH (H+) and leads to release of potassium (K+) and to synthesis of prostaglandins (PG) and bradykinin (BK). Prostaglandins increase the sensitivity of the terminal to bradykinin and other pain-producing substances. B. Secondary activation. Impulses generated in the stimulated terminal propagate not only to the spinal cord but also into other terminal branches where they induce the release of peptides, including substance P (SP). Substance P causes vasodilation and neurogenic edema with further accumulation of bradykinin (BK).
Substance P also causes the release of histamine (H) from mast cells and serotonin (5-HT) from platelets.

Figure 18-3 The convergence-projection hypothesis of referred pain. According to this hypothesis, visceral afferent nociceptors converge on the same pain-projection neurons as the afferents from the somatic structures in which the pain is perceived. The brain has no way of knowing the actual source of input and mistakenly “projects” the sensation to the somatic structure.

Figure 18-4 Pain transmission and modulatory pathways. A. Transmission system for nociceptive messages. Noxious stimuli activate the sensitive peripheral ending of the primary afferent nociceptor by the process of transduction. The message is then transmitted over the peripheral nerve to the spinal cord, where it synapses with cells of origin of the major ascending pain pathway, the spinothalamic tract. The message is relayed in the thalamus to the anterior cingulate (C), frontal insular (F), and somatosensory cortex (SS). B. Pain-modulation network. Inputs from frontal cortex and hypothalamus activate cells in the midbrain that control spinal pain-transmission cells via cells in the medulla.

Ascending Pathways for Pain
A majority of spinal neurons contacted by primary afferent nociceptors send their axons to the contralateral thalamus. These axons form the contralateral spinothalamic tract, which lies in the anterolateral white matter of the spinal cord, the lateral edge of the medulla, and the lateral pons and midbrain. The spinothalamic pathway is crucial for pain sensation in humans. Interruption of this pathway produces permanent deficits in pain and temperature discrimination. Spinothalamic tract axons ascend to several regions of the thalamus. There is tremendous divergence of the pain signal from these thalamic sites to several distinct areas of the cerebral cortex that subserve different aspects of the pain experience (Fig. 18-4). One of the thalamic projections is to the somatosensory cortex. This projection mediates the purely sensory aspects of pain, i.e., its location, intensity, and quality. Other thalamic neurons project to cortical regions that are linked to emotional responses, such as the cingulate gyrus and other areas of the frontal lobes, including the insular cortex. These pathways to the frontal cortex subserve the affective or unpleasant emotional dimension of pain. This affective dimension of pain produces suffering and exerts potent control of behavior. Because of this dimension, fear is a constant companion of pain. As a consequence, injury or surgical lesions to areas of the frontal cortex activated by painful stimuli can diminish the emotional impact of pain while largely preserving the individual’s ability to recognize noxious stimuli as painful.

The pain produced by injuries of similar magnitude is remarkably variable in different situations and in different individuals. For example, athletes have been known to sustain serious fractures with only minor pain, and Beecher’s classic World War II survey revealed that many soldiers in battle were unbothered by injuries that would have produced agonizing pain in civilian patients. Furthermore, even the suggestion that a treatment will relieve pain can have a significant analgesic effect (the placebo effect). On the other hand, many patients find even minor injuries (such as venipuncture) frightening and unbearable, and the expectation of pain can induce pain even without a noxious stimulus. The suggestion that pain will worsen following administration of an inert substance can increase its perceived intensity (the nocebo effect). The powerful effect of expectation and other psychological variables on the perceived intensity of pain is explained by brain circuits that modulate the activity of the pain-transmission pathways.
One of these circuits has links to the hypothalamus, midbrain, and medulla, and it selectively controls spinal pain-transmission neurons through a descending pathway (Fig. 18-4). Human brain–imaging studies have implicated this pain-modulating circuit in the pain-relieving effect of attention, suggestion, and opioid analgesic medications (Fig. 18-5). Furthermore, each of the component structures of the pathway contains opioid receptors and is sensitive to the direct application of opioid drugs. In animals, lesions of this descending modulatory system reduce the analgesic effect of systemically administered opioids such as morphine. Along with the opioid receptor, the component nuclei of this pain-modulating circuit contain endogenous opioid peptides such as the enkephalins and β-endorphin. The most reliable way to activate this endogenous opioid-mediated modulating system is by suggestion of pain relief or by intense emotion directed away from the pain-causing injury (e.g., during severe threat or an athletic competition). In fact, pain-relieving endogenous opioids are released following surgical procedures and in patients given a placebo for pain relief.

Pain-modulating circuits can enhance as well as suppress pain. Both pain-inhibiting and pain-facilitating neurons in the medulla project to and control spinal pain-transmission neurons. Because pain-transmission neurons can be activated by modulatory neurons, it is theoretically possible to generate a pain signal with no peripheral noxious stimulus. In fact, human functional imaging studies have demonstrated increased activity in this circuit during migraine headaches. A central circuit that facilitates pain could account for the finding that pain can be induced by suggestion or enhanced by expectation and provides a framework for understanding how psychological factors can contribute to chronic pain.
Lesions of the peripheral or central nociceptive pathways typically result in a loss or impairment of pain sensation. Paradoxically, damage to or dysfunction of these pathways can also produce pain. For example, damage to peripheral nerves, as occurs in diabetic neuropathy, or to primary afferents, as in herpes zoster infection, can result in pain that is referred to the body region innervated by the damaged nerves. Pain may also be produced by damage to the central nervous system (CNS), for example, in some patients following trauma or vascular injury to the spinal cord, brainstem, or thalamic areas that contain central nociceptive pathways. Such neuropathic pains are often severe and are typically resistant to standard treatments for pain. Neuropathic pain typically has an unusual burning, tingling, or electric shock–like quality and may be triggered by very light touch. These features are rare in other types of pain. On examination, a sensory deficit is characteristically present in the area of the patient’s pain. Hyperpathia, a greatly exaggerated pain sensation to innocuous or mild nociceptive stimuli, is also characteristic of neuropathic pain; patients often complain that the very lightest moving stimulus evokes exquisite pain (allodynia). In this regard, it is of clinical interest that a topical preparation of 5% lidocaine in patch form is effective for patients with postherpetic neuralgia who have prominent allodynia.

CHAPTER 18 Pain: Pathophysiology and Management

Figure 18-5 Functional magnetic resonance imaging (fMRI) demonstrates placebo-enhanced brain activity in anatomic regions correlating with the opioidergic descending pain control system. Top panel: Frontal fMRI image shows placebo-enhanced brain activity in the dorsal lateral prefrontal cortex (DLPFC). Bottom panel: Sagittal fMRI images show placebo-enhanced responses in the rostral anterior cingulate cortex (rACC), the rostral ventral medulla (RVM), the periaqueductal gray (PAG) area, and the hypothalamus. The placebo-enhanced activity in all areas was reduced by naloxone, demonstrating the link between the descending opioidergic system and the placebo analgesic response. (Adapted with permission from F Eippert et al: Neuron 63:533, 2009.)

A variety of mechanisms contribute to neuropathic pain. As with sensitized primary afferent nociceptors, damaged primary afferents, including nociceptors, become highly sensitive to mechanical stimulation and may generate impulses in the absence of stimulation. Increased sensitivity and spontaneous activity are due, in part, to an increased concentration of sodium channels in the damaged nerve fiber. Damaged primary afferents may also develop sensitivity to norepinephrine. Interestingly, spinal cord pain-transmission neurons cut off from their normal input may also become spontaneously active. Thus, both CNS and peripheral nervous system hyperactivity contribute to neuropathic pain.

Sympathetically Maintained Pain
Patients with peripheral nerve injury occasionally develop spontaneous pain in the region innervated by the nerve. This pain is often described as having a burning quality. The pain typically begins after a delay of hours to days or even weeks and is accompanied by swelling of the extremity, periarticular bone loss, and arthritic changes in the distal joints. The pain may be relieved by a local anesthetic block of the sympathetic innervation to the affected extremity. Damaged primary afferent nociceptors acquire adrenergic sensitivity and can be activated by stimulation of the sympathetic outflow. This constellation of spontaneous pain and signs of sympathetic dysfunction following injury has been termed complex regional pain syndrome (CRPS).
When this occurs after an identifiable nerve injury, it is termed CRPS type II (also known as posttraumatic neuralgia or, if severe, causalgia). When a similar clinical picture appears without obvious nerve injury, it is termed CRPS type I (also known as reflex sympathetic dystrophy). CRPS can be produced by a variety of injuries, including fractures of bone, soft tissue trauma, myocardial infarction, and stroke (Chap. 446). CRPS type I typically resolves with symptomatic treatment; however, when it persists, detailed examination often reveals evidence of peripheral nerve injury. Although the pathophysiology of CRPS is poorly understood, the pain and the signs of inflammation, when acute, can be rapidly relieved by blocking the sympathetic nervous system. This implies that sympathetic activity can activate undamaged nociceptors when inflammation is present. Signs of sympathetic hyperactivity should be sought in patients with posttraumatic pain and inflammation and no other obvious explanation.

The ideal treatment for any pain is to remove the cause; thus, while treatment can be initiated immediately, efforts to establish the underlying etiology should always proceed as treatment begins. Sometimes, treating the underlying condition does not immediately relieve pain. Furthermore, some conditions are so painful that rapid and effective analgesia is essential (e.g., the postoperative state, burns, trauma, cancer, or sickle cell crisis). Analgesic medications are a first line of treatment in these cases, and all practitioners should be familiar with their use.

ASPIRIN, ACETAMINOPHEN, AND NONSTEROIDAL ANTI-INFLAMMATORY AGENTS (NSAIDs)
These drugs are considered together because they are used for similar problems and may have a similar mechanism of action (Table 18-1). All these compounds inhibit cyclooxygenase (COX), and, except for acetaminophen, all have anti-inflammatory actions, especially at higher dosages.
They are particularly effective for mild to moderate headache and for pain of musculoskeletal origin. Because they are effective for these common types of pain and are available without prescription, COX inhibitors are by far the most commonly used analgesics. They are absorbed well from the gastrointestinal tract and, with occasional use, have only minimal side effects. With chronic use, gastric irritation is a common side effect of aspirin and NSAIDs and is the problem that most frequently limits the dose that can be given. Gastric irritation is most severe with aspirin, which may cause erosion and ulceration of the gastric mucosa leading to bleeding or perforation. Because aspirin irreversibly acetylates platelet cyclooxygenase and thereby interferes with coagulation of the blood, gastrointestinal bleeding is a particular risk. Older age and history of gastrointestinal disease increase the risks of aspirin and NSAIDs. In addition to the well-known gastrointestinal toxicity of NSAIDs, nephrotoxicity is a significant problem for patients using these drugs on a chronic basis. 
Patients at risk for renal insufficiency, particularly those with significant contraction of their intravascular volume as occurs with chronic diuretic use or acute hypovolemia, should be monitored closely. NSAIDs can also increase blood pressure in some individuals. Long-term treatment with NSAIDs requires regular blood pressure monitoring and treatment if necessary. Although toxic to the liver when taken in high doses, acetaminophen rarely produces gastric irritation and does not interfere with platelet function. The introduction of parenteral forms of NSAIDs, ketorolac and diclofenac, extends the usefulness of this class of compounds in the management of acute severe pain.

Table 18-1 (fragment) Opioid (narcotic) analgesics: usual doses and intervals

Generic Name                Parenteral Dose, mg   PO Dose, mg       Comments
Codeine                     30–60 q4h             30–60 q4h         Nausea common
Oxycodone                   —                     5–10 q4–6h        Usually available with acetaminophen or aspirin
Morphine                    5 q4h                 30 q4h
Morphine sustained release  —                     15–60 bid to tid  Oral slow-release preparation
Hydromorphone               1–2 q4h               2–4 q4h           Shorter acting than morphine sulfate
Levorphanol                 2 q6–8h               4 q6–8h           Longer acting than morphine sulfate; absorbed well PO
Methadone                   5–10 q6–8h            5–20 q6–8h        Delayed sedation due to long half-life; therapy should not be initiated with >40 mg/d, and dose escalation should be made no more frequently than every 3 days
Meperidine                  50–100 q3–4h          300 q4h           Poorly absorbed PO; normeperidine is a toxic metabolite; routine use of this agent is not recommended
Buprenorphine               0.3 q6–8h             —                 Also available as a 7-day transdermal patch

[Table 18-1, continued (adjuvant analgesics): only the column headings (generic name; 5-HT and NE uptake blockade; sedative and anticholinergic potency; orthostatic hypotension; cardiac arrhythmia; average dose and range in mg/d) and two rows are recoverable: Oxcarbazepine, 300 bid; Pregabalin, 150–600 mg/d.]

Footnotes: (a) Antidepressants, anticonvulsants, and antiarrhythmics have not been approved by the U.S. Food and Drug Administration (FDA) for the treatment of pain. (b) Gabapentin in doses up to 1800 mg/d is FDA approved for postherpetic neuralgia. Abbreviations: 5-HT, serotonin; NE, norepinephrine.
Both agents are sufficiently potent and rapid in onset to supplant opioids for many patients with acute severe headache and musculoskeletal pain. There are two major classes of COX: COX-1 is constitutively expressed, and COX-2 is induced in the inflammatory state. COX-2–selective drugs have similar analgesic potency and produce less gastric irritation than the nonselective COX inhibitors. The use of COX-2–selective drugs does not appear to lower the risk of nephrotoxicity compared to nonselective NSAIDs. On the other hand, COX-2–selective drugs offer a significant benefit in the management of acute postoperative pain because they do not affect blood coagulation. Nonselective COX inhibitors are usually contraindicated postoperatively because they impair platelet-mediated blood clotting and are thus associated with increased bleeding at the operative site. COX-2 inhibitors, including celecoxib (Celebrex), are associated with increased cardiovascular risk. It appears that this is a class effect of NSAIDs, excluding aspirin. These drugs are contraindicated in patients in the immediate period after coronary artery bypass surgery and should be used with caution in elderly patients and those with a history of or significant risk factors for cardiovascular disease.

Opioids are the most potent pain-relieving drugs currently available. Of all analgesics, they have the broadest range of efficacy and provide the most reliable and effective method for rapid pain relief. Although side effects are common, most are reversible: nausea, vomiting, pruritus, and constipation are the most frequent and bothersome side effects. Respiratory depression is uncommon at standard analgesic doses, but can be life-threatening. Opioid-related side effects can be reversed rapidly with the narcotic antagonist naloxone. Many physicians, nurses, and patients have a certain trepidation about using opioids that is based on an exaggerated fear of addiction.
In fact, there is a vanishingly small chance of patients becoming addicted to narcotics as a result of their appropriate medical use. The physician should not hesitate to use opioid analgesics in patients with acute severe pain. Table 18-1 lists the most commonly used opioid analgesics. Opioids produce analgesia by actions in the CNS. They activate pain-inhibitory neurons and directly inhibit pain-transmission neurons. Most of the commercially available opioid analgesics act at the same opioid receptor (μ-receptor), differing mainly in potency, speed of onset, duration of action, and optimal route of administration. Some side effects are due to accumulation of nonopioid metabolites that are unique to individual drugs. One striking example of this is normeperidine, a metabolite of meperidine. At higher doses of meperidine, typically greater than 1 g/d, accumulation of normeperidine can produce hyperexcitability and seizures that are not reversible with naloxone. Normeperidine accumulation is increased in patients with renal failure. The most rapid pain relief is obtained by intravenous administration of opioids; relief with oral administration is significantly slower. Because of the potential for respiratory depression, patients with any form of respiratory compromise must be kept under close observation following opioid administration; an oxygen-saturation monitor may be useful, but only in a setting where the monitor is under constant surveillance. Opioid-induced respiratory depression is typically accompanied by sedation and a reduction in respiratory rate. A fall in oxygen saturation represents a critical level of respiratory depression and the need for immediate intervention to prevent life-threatening hypoxemia. Ventilatory assistance should be maintained until the opioid-induced respiratory depression has resolved. 
The opioid antagonist naloxone should be readily available whenever opioids are used at high doses or in patients with compromised pulmonary function. Opioid effects are dose-related, and there is great variability among patients in the doses that relieve pain and produce side effects. Synergistic respiratory depression is common when opioids are administered with other CNS depressants, most commonly the benzodiazepines. Because of this, initiation of therapy requires titration to optimal dose and interval. The most important principle is to provide adequate pain relief. This requires determining whether the drug has adequately relieved the pain and frequent reassessment to determine the optimal interval for dosing. The most common error made by physicians in managing severe pain with opioids is to prescribe an inadequate dose. Because many patients are reluctant to complain, this practice leads to needless suffering. In the absence of sedation at the expected time of peak effect, a physician should not hesitate to repeat the initial dose to achieve satisfactory pain relief. An innovative approach to the problem of achieving adequate pain relief is the use of patient-controlled analgesia (PCA). PCA uses a microprocessor-controlled infusion device that can deliver a baseline continuous dose of an opioid drug as well as preprogrammed additional doses whenever the patient pushes a button. The patient can then titrate the dose to the optimal level. This approach is used most extensively for the management of postoperative pain, but there is no reason why it should not be used for any hospitalized patient with persistent severe pain. PCA is also used for short-term home care of patients with intractable pain, such as that caused by metastatic cancer.
It is important to understand that the PCA device delivers small, repeated doses to maintain pain relief; in patients with severe pain, the pain must first be brought under control with a loading dose before transitioning to the PCA device. The bolus dose of the drug (typically 1 mg of morphine, 0.2 mg of hydromorphone, or 10 μg of fentanyl) can then be delivered repeatedly as needed. To prevent overdosing, PCA devices are programmed with a lockout period after each demand dose is delivered (5–10 min) and a limit on the total dose delivered per hour. Although some have advocated the use of a simultaneous continuous or basal infusion of the PCA drug, this increases the risk of respiratory depression and has not been shown to increase the overall efficacy of the technique. The availability of new routes of administration has extended the usefulness of opioid analgesics. Most important is the availability of spinal administration. Opioids can be infused through a spinal catheter placed either intrathecally or epidurally. By applying opioids directly to the spinal or epidural space adjacent to the spinal cord, regional analgesia can be obtained using relatively low total doses. Indeed, the dose required to produce effective localized analgesia when using morphine intrathecally (0.1–0.3 mg) is a fraction of that required to produce similar analgesia when administered intravenously (5–10 mg). In this way, side effects such as sedation, nausea, and respiratory depression can be minimized. This approach has been used extensively during labor and delivery and for postoperative pain relief following surgical procedures. Continuous intrathecal delivery via implanted spinal drug-delivery systems is now commonly used, particularly for the treatment of cancer-related pain that would require sedating doses for adequate pain control if given systemically. 
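The demand-dose rules described above (a fixed bolus per button press, a lockout period after each delivered dose, and a cap on the total dose per hour) can be sketched as a small program. This is an illustrative toy model only, not clinical software and not a dosing reference; the class name `PCAPump` and the specific parameter values are assumptions chosen for demonstration from the ranges quoted in the text.

```python
from dataclasses import dataclass, field

@dataclass
class PCAPump:
    # Assumed illustrative parameters (not dosing recommendations):
    bolus_mg: float = 1.0          # demand dose delivered per successful press
    lockout_min: float = 6.0       # no dose delivered during this window
    hourly_limit_mg: float = 10.0  # maximum total drug in any trailing 60 min
    deliveries: list = field(default_factory=list)  # (time_min, dose_mg) pairs

    def press(self, t_min: float) -> bool:
        """Patient presses the demand button at time t_min (minutes).
        Returns True if a bolus is delivered, False if the press is blocked."""
        # Block any press that falls inside the lockout after the last dose.
        if self.deliveries and t_min - self.deliveries[-1][0] < self.lockout_min:
            return False
        # Block the press if it would exceed the trailing one-hour cap.
        last_hour = sum(dose for ts, dose in self.deliveries if t_min - ts < 60.0)
        if last_hour + self.bolus_mg > self.hourly_limit_mg:
            return False
        self.deliveries.append((t_min, self.bolus_mg))
        return True

pump = PCAPump()
results = [pump.press(t) for t in [0, 3, 7, 10, 14]]
# Presses at 3 and 10 min fall inside the 6-min lockout after the
# previous delivered dose, so they are ignored: [True, False, True, False, True]
```

The point of the sketch is that presses during the lockout window are simply ignored rather than queued, which is the property that lets dosing be left safely under the patient's control.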
Opioids can also be given intranasally (butorphanol), rectally, and transdermally (fentanyl and buprenorphine), or through the oral mucosa (fentanyl), thus avoiding the discomfort of frequent injections in patients who cannot be given oral medication. The fentanyl and buprenorphine transdermal patches have the advantage of providing fairly steady plasma levels, which maximizes patient comfort. Recent additions to the armamentarium for treating opioid-induced side effects are the peripherally acting opioid antagonists alvimopan (Entereg) and methylnaltrexone (Relistor). Alvimopan is available as an orally administered agent that is restricted to the intestinal lumen by limited absorption; methylnaltrexone is available in a subcutaneously administered form that has virtually no penetration into the CNS. Both agents act by binding to peripheral μ-receptors, thereby inhibiting or reversing the effects of opioids at these peripheral sites. The action of both agents is restricted to receptor sites outside of the CNS; thus, these drugs can reverse the adverse effects of opioid analgesics that are mediated through their peripheral receptors without reversing their analgesic effects. Alvimopan has proven effective in lowering the duration of persistent ileus following abdominal surgery in patients receiving opioid analgesics for postoperative pain control. Methylnaltrexone has proven effective for relief of opioid-induced constipation in patients taking opioid analgesics on a chronic basis.

Opioid and COX Inhibitor Combinations
When used in combination, opioids and COX inhibitors have additive effects. Because a lower dose of each can be used to achieve the same degree of pain relief and their side effects are nonadditive, such combinations are used to lower the severity of dose-related side effects. However, fixed-ratio combinations of an opioid with acetaminophen carry an important risk.
Dose escalation as a result of increased severity of pain or decreased opioid effect as a result of tolerance may lead to ingestion of levels of acetaminophen that are toxic to the liver. Although acetaminophen-related hepatotoxicity is uncommon, it remains a significant cause for liver failure. Thus, many practitioners have moved away from the use of opioid-acetaminophen combination analgesics to avoid the risk of excessive acetaminophen exposure as the dose of the analgesic is escalated. Managing patients with chronic pain is intellectually and emotionally challenging. The patient’s problem is often difficult or impossible to diagnose with certainty; such patients are demanding of the physician’s time and often appear emotionally distraught. The traditional medical approach of seeking an obscure organic pathology is usually unhelpful. On the other hand, psychological evaluation and behaviorally based treatment paradigms are frequently helpful, particularly in the setting of a multidisciplinary pain-management center. Unfortunately, this approach, while effective, remains largely underused in current medical practice. There are several factors that can cause, perpetuate, or exacerbate chronic pain. First, of course, the patient may simply have a disease that is characteristically painful for which there is presently no cure. Arthritis, cancer, chronic daily headaches, fibromyalgia, and diabetic neuropathy are examples of this. Second, there may be secondary perpetuating factors that are initiated by disease and persist after that disease has resolved. Examples include damaged sensory nerves, sympathetic efferent activity, and painful reflex muscle contraction (spasm). Finally, a variety of psychological conditions can exacerbate or even cause pain. There are certain areas to which special attention should be paid in a patient’s medical history. 
Because depression is the most common emotional disturbance in patients with chronic pain, patients should be questioned about their mood, appetite, sleep patterns, and daily activity. A simple standardized questionnaire, such as the Beck Depression Inventory, can be a useful screening device. It is important to remember that major depression is a common, treatable, and potentially fatal illness. Other clues that a significant emotional disturbance is contributing to a patient’s chronic pain complaint include pain that occurs in multiple, unrelated sites; a pattern of recurrent, but separate, pain problems beginning in childhood or adolescence; pain beginning at a time of emotional trauma, such as the loss of a parent or spouse; a history of physical or sexual abuse; and past or present substance abuse. On examination, special attention should be paid to whether the patient guards the painful area and whether certain movements or postures are avoided because of pain. Discovering a mechanical component to the pain can be useful both diagnostically and therapeutically. Painful areas should be examined for deep tenderness, noting whether this is localized to muscle, ligamentous structures, or joints. Chronic myofascial pain is very common, and, in these patients, deep palpation may reveal highly localized trigger points that are firm bands or knots in muscle. Relief of the pain following injection of local anesthetic into these trigger points supports the diagnosis. A neuropathic component to the pain is indicated by evidence of nerve damage, such as sensory impairment, exquisitely sensitive skin (allodynia), weakness, and muscle atrophy, or loss of deep tendon reflexes. Evidence suggesting sympathetic nervous system involvement includes the presence of diffuse swelling, changes in skin color and temperature, and hypersensitive skin and joint tenderness compared with the normal side. 
Relief of the pain with a sympathetic block supports the diagnosis, but once the condition becomes chronic, the response to sympathetic blockade is of variable magnitude and duration; the role for repeated sympathetic blocks in the overall management of CRPS is not established. A guiding principle in evaluating patients with chronic pain is to assess both emotional and organic factors before initiating therapy. Addressing these issues together, rather than waiting to address emotional issues after organic causes of pain have been ruled out, improves compliance in part because it assures patients that a psychological evaluation does not mean that the physician is questioning the validity of their complaint. Even when an organic cause for a patient’s pain can be found, it is still wise to look for other factors. For example, a cancer patient with painful bony metastases may have additional pain due to nerve damage and may also be depressed. Optimal therapy requires that each of these factors be looked for and treated. Once the evaluation process has been completed and the likely causative and exacerbating factors identified, an explicit treatment plan should be developed. An important part of this process is to identify specific and realistic functional goals for therapy, such as getting a good night’s sleep, being able to go shopping, or returning to work. A multidisciplinary approach that uses medications, counseling, physical therapy, nerve blocks, and even surgery may be required to improve the patient’s quality of life. There are also some newer, relatively invasive procedures that can be helpful for some patients with intractable pain. These include image-guided interventions such as epidural injection of glucocorticoids for acute radicular pain and radiofrequency treatment of the facet joints for chronic facet-related back and neck pain. 
For patients with severe and persistent pain that is unresponsive to more conservative treatment, placement of electrodes within the spinal canal overlying the dorsal columns of the spinal cord (spinal cord stimulation) or implantation of intrathecal drug-delivery systems has shown significant benefit. The criteria for predicting which patients will respond to these procedures continue to evolve. They are generally reserved for patients who have not responded to conventional pharmacologic approaches. Referral to a multidisciplinary pain clinic for a full evaluation should precede any invasive procedure. Such referrals are clearly not necessary for all chronic pain patients. For some, pharmacologic management alone can provide adequate relief. The tricyclic antidepressants (TCAs), particularly nortriptyline and desipramine (Table 18-1), are useful for the management of chronic pain. Although developed for the treatment of depression, the TCAs have a spectrum of dose-related biologic activities that include analgesia in a variety of chronic clinical conditions. Although the mechanism is unknown, the analgesic effect of TCAs has a more rapid onset and occurs at a lower dose than is typically required for the treatment of depression. Furthermore, patients with chronic pain who are not depressed obtain pain relief with antidepressants. There is evidence that TCAs potentiate opioid analgesia, so they may be useful adjuncts for the treatment of severe persistent pain such as occurs with malignant tumors. Table 18-2 lists some of the painful conditions that respond to TCAs. TCAs are of particular value in the management of neuropathic pain such as occurs in diabetic neuropathy and postherpetic neuralgia, for which there are few other therapeutic options. The TCAs that have been shown to relieve pain have significant side effects (Table 18-1; Chap. 466). 
Some of these side effects, such as orthostatic hypotension, drowsiness, cardiac conduction delay, memory impairment, constipation, and urinary retention, are particularly problematic in elderly patients, and several are additive to the side effects of opioid analgesics. The selective serotonin reuptake inhibitors such as fluoxetine (Prozac) have fewer and less serious side effects than TCAs, but they are much less effective for relieving pain. It is of interest that venlafaxine (Effexor) and duloxetine (Cymbalta), which are nontricyclic antidepressants that block both serotonin and norepinephrine reuptake, appear to retain most of the pain-relieving effect of TCAs with a side effect profile more like that of the selective serotonin reuptake inhibitors. These drugs may be particularly useful in patients who cannot tolerate the side effects of TCAs.

Anticonvulsant and antiarrhythmic drugs are useful primarily for patients with neuropathic pain. Phenytoin (Dilantin) and carbamazepine (Tegretol) were first shown to relieve the pain of trigeminal neuralgia. This pain has a characteristic brief, shooting, electric shock–like quality. In fact, anticonvulsants seem to be particularly helpful for pains that have such a lancinating quality. Newer anticonvulsants, gabapentin (Neurontin) and pregabalin (Lyrica), are effective for a broad range of neuropathic pains. Furthermore, because of their favorable side effect profile, these newer anticonvulsants are often used as first-line agents.

The long-term use of opioids is accepted for patients with pain due to malignant disease. Although opioid use for chronic pain of nonmalignant origin is controversial, it is clear that, for many patients, opioids are the only option that produces meaningful pain relief.
This is understandable because opioids are the most potent and have the broadest range of efficacy of any analgesic medications. Although addiction is rare in patients who first use opioids for pain relief, some degree of tolerance and physical dependence is likely with long-term use. Furthermore, animal studies suggest that long-term opioid therapy may worsen pain in some individuals. Therefore, before embarking on opioid therapy, other options should be explored, and the limitations and risks of opioids should be explained to the patient. It is also important to point out that some opioid analgesic medications have mixed agonist-antagonist properties (e.g., butorphanol and buprenorphine). From a practical standpoint, this means that they may worsen pain by inducing an abstinence syndrome in patients who are physically dependent on other opioid analgesics. With long-term outpatient use of orally administered opioids, it is desirable to use long-acting compounds such as levorphanol, methadone, sustained-release morphine, or transdermal fentanyl (Table 18-1). The pharmacokinetic profiles of these drug preparations enable the maintenance of sustained analgesic blood levels, potentially minimizing side effects such as sedation that are associated with high peak plasma levels, and reducing the likelihood of rebound pain associated with a rapid fall in plasma opioid concentration. Although long-acting opioid preparations may provide superior pain relief in patients with a continuous pattern of ongoing pain, others suffer from intermittent severe episodic pain and experience superior pain control and fewer side effects with the periodic use of short-acting opioid analgesics. Constipation is a virtually universal side effect of opioid use and should be treated expectantly. 
As noted above in the discussion of acute pain treatment, a recent advance for patients is the development of peripherally acting opioid antagonists that can reverse the constipation associated with opioid use without interfering with analgesia. Soon after the introduction of a controlled-release oxycodone formulation (OxyContin) in the late 1990s, a dramatic rise in emergency department visits and deaths associated with oxycodone ingestion appeared, focusing public attention on misuse of prescription pain medications. The magnitude of prescription opioid abuse has grown over the last decade, leading the Centers for Disease Control and Prevention to classify prescription opioid analgesic abuse as an epidemic. This appears to be due in large part to individuals using a prescription drug nonmedically, most often an opioid analgesic.

PART 2 Cardinal Manifestations and Presentation of Diseases

Table 18-3 Guidelines for Selecting and Monitoring Patients Receiving Chronic Opioid Therapy (COT) for the Treatment of Chronic, Noncancer Pain

• Conduct a history, physical examination, and appropriate testing, including an assessment of risk of substance abuse, misuse, or addiction.
• Consider a trial of COT if pain is moderate or severe, pain is having an adverse impact on function or quality of life, and potential therapeutic benefits outweigh potential harms.
• A benefit-to-harm evaluation, including a history, physical examination, and appropriate diagnostic testing, should be performed and documented before and on an ongoing basis during COT.

Informed Consent and Use of Management Plans
• Informed consent should be obtained. A continuing discussion with the patient regarding COT should include goals, expectations, potential risks, and alternatives to COT.
• Consider using a written COT management plan to document patient and clinician responsibilities and expectations and assist in patient education.
• Initial treatment with opioids should be considered as a therapeutic trial to determine whether COT is appropriate.
• Opioid selection, initial dosing, and titration should be individualized according to the patient's health status, previous exposure to opioids, attainment of therapeutic goals, and predicted or observed harms.
• Reassess patients on COT periodically and as warranted by changing circumstances. Monitoring should include documentation of pain intensity and level of functioning, assessments of progress toward achieving therapeutic goals, presence of adverse events, and adherence to prescribed therapies.
• In patients on COT who are at high risk or who have engaged in aberrant drug-related behaviors, clinicians should periodically obtain urine drug screens or other information to confirm adherence to the COT plan of care.
• In patients on COT not at high risk and not known to have engaged in aberrant drug-related behaviors, clinicians should consider periodically obtaining urine drug screens or other information to confirm adherence to the COT plan of care.

Source: Adapted with permission from R Chou et al: J Pain 10:113, 2009.

Drug-induced deaths have rapidly risen and are now the second leading cause of accidental death in Americans, just behind motor vehicle fatalities. In 2011, the Office of National Drug Control Policy established a multifaceted approach to address prescription drug abuse, including Prescription Drug Monitoring Programs that allow practitioners to determine if patients are receiving prescriptions from multiple providers and use of law enforcement to eliminate improper prescribing practices. This increased scrutiny leaves many practitioners hesitant to prescribe opioid analgesics, other than for brief periods to control pain associated with illness or injury. For now, the choice to begin chronic opioid therapy for a given patient is left to the individual practitioner.
Pragmatic guidelines for properly selecting and monitoring patients receiving chronic opioid therapy are shown in Table 18-3.

It is important to individualize treatment for patients with neuropathic pain. Several general principles should guide therapy: the first is to move quickly to provide relief, and the second is to minimize drug side effects. For example, in patients with postherpetic neuralgia and significant cutaneous hypersensitivity, topical lidocaine (Lidoderm patches) can provide immediate relief without side effects. Anticonvulsants (gabapentin or pregabalin; see above) or antidepressants (nortriptyline, desipramine, duloxetine, or venlafaxine) can be used as first-line drugs for patients with neuropathic pain. Systemically administered antiarrhythmic drugs such as lidocaine and mexiletine are less likely to be effective; although intravenous infusion of lidocaine can provide analgesia for patients with different types of neuropathic pain, the relief is usually transient, typically lasting just hours after the cessation of the infusion. The oral lidocaine congener mexiletine is poorly tolerated, producing frequent gastrointestinal adverse effects. There is no consensus on which class of drug should be used as a first-line treatment for any chronically painful condition. However, because relatively high doses of anticonvulsants are required for pain relief, sedation is very common. Sedation is also a problem with TCAs but is much less of a problem with serotonin/norepinephrine reuptake inhibitors (SNRIs; e.g., venlafaxine and duloxetine). Thus, in the elderly or in patients whose daily activities require high-level mental activity, these drugs should be considered the first line. In contrast, opioid medications should be used as a second- or third-line drug class.
Although highly effective for many painful conditions, opioids are sedating, and their effect tends to lessen over time, leading to dose escalation and, occasionally, a worsening of pain due to physical dependence. Drugs of different classes can be used in combination to optimize pain control. It is worth emphasizing that many patients, especially those with chronic pain, seek medical attention primarily because they are suffering and because only physicians can provide the medications required for pain relief. A primary responsibility of all physicians is to minimize the physical and emotional discomfort of their patients. Familiarity with pain mechanisms and analgesic medications is an important step toward accomplishing this aim.

Chapter 19 Chest Discomfort
David A. Morrow

Chest discomfort is among the most common reasons for which patients present for medical attention at either an emergency department (ED) or an outpatient clinic. The evaluation of nontraumatic chest discomfort is inherently challenging owing to the broad variety of possible causes, a minority of which are life-threatening conditions that should not be missed. It is helpful to frame the initial diagnostic assessment and triage of patients with acute chest discomfort around three categories: (1) myocardial ischemia; (2) other cardiopulmonary causes (pericardial disease, aortic emergencies, and pulmonary conditions); and (3) non-cardiopulmonary causes. Although rapid identification of high-risk conditions is a priority of the initial assessment, strategies that incorporate routine liberal use of testing carry the potential for adverse effects of unnecessary investigations. Chest discomfort is the third most common reason for visits to the ED in the United States, resulting in 6 to 7 million emergency visits each year. More than 60% of patients with this presentation are hospitalized for further testing, and the rest undergo additional investigation in the ED.
Fewer than 25% of evaluated patients are eventually diagnosed with acute coronary syndrome (ACS), with rates of 5–15% in most series of unselected populations. In the remainder, the most common diagnoses are gastrointestinal causes (Fig. 19-1), and fewer than 10% are other life-threatening cardiopulmonary conditions. In a large proportion of patients with transient acute chest discomfort, ACS or another acute cardiopulmonary cause is excluded but the cause is not determined. Therefore, the resources and time devoted to the evaluation of chest discomfort in the absence of a severe cause are substantial. Nevertheless, a disconcerting 2–6% of patients with chest discomfort of presumed non-ischemic etiology who are discharged from the ED are later deemed to have had a missed myocardial infarction (MI). Patients with a missed diagnosis of MI have a 30-day risk of death that is double that of their counterparts who are hospitalized. The natural histories of ACS, acute pericardial diseases, pulmonary embolism, and aortic emergencies are discussed in Chaps. 288, 294 and 295, 300, and 301, respectively. In a study of more than 350,000 patients with unspecified presumed non-cardiopulmonary chest discomfort, the mortality rate 1 year after discharge was <2% and did not differ significantly from age-adjusted mortality in the general population. The estimated rate of major cardiovascular events through 30 days in patients with acute chest pain who had been stratified as low risk was 2.5% in a large population-based study that excluded patients with ST-segment elevation or definite noncardiac chest pain. The major etiologies of chest discomfort are discussed in this section and summarized in Table 19-1. Additional elements of the history, physical examination, and diagnostic testing that aid in distinguishing these causes are discussed in a later section (see “Approach to the Patient”). 
Myocardial ischemia causing chest discomfort, termed angina pectoris, is a primary clinical concern in patients presenting with chest symptoms. Myocardial ischemia is precipitated by an imbalance between myocardial oxygen requirements and myocardial oxygen supply, resulting in insufficient delivery of oxygen to meet the heart's metabolic demands. Myocardial oxygen consumption may be elevated by increases in heart rate, ventricular wall stress, and myocardial contractility, whereas myocardial oxygen supply is determined by coronary blood flow and coronary arterial oxygen content. When myocardial ischemia is sufficiently severe and prolonged in duration (as little as 20 min), irreversible cellular injury occurs, resulting in MI.

Figure 19-1 Distribution of final discharge diagnoses in patients with nontraumatic acute chest pain: gastrointestinal 42%; ischemic heart disease 31%; chest wall syndrome 28%; pericarditis 4%; pleuritis 2%; pulmonary embolism 2%; lung cancer 1.5%; aortic aneurysm 1%; aortic stenosis 1%; herpes zoster 1%. (Figure prepared from data in P Fruergaard et al: Eur Heart J 17:1028, 1996.)

Table 19-1 (excerpt) Major etiologies of chest discomfort (duration; quality; location; associated features)

Gastrointestinal
• Esophageal reflux: 10–60 min; burning; substernal, epigastric; worsened by postprandial recumbency, relieved by antacids.
• Esophageal spasm: 2–30 min; pressure, tightness, burning; retrosternal; can closely mimic angina.
• Peptic ulcer: prolonged, 60–90 min after meals; burning; epigastric, substernal; relieved with food or antacids.
• Gallbladder disease: prolonged; aching or colicky; epigastric, right upper quadrant, sometimes to the back; may follow meal.

Neuromuscular
• Costochondritis: variable; aching; sternal; sometimes swollen, tender, warm over joint; may be reproduced by localized pressure on examination.
• Trauma or strain: usually constant; aching; localized to area of palpation; reproduced by movement or strain.
• Herpes zoster: usually prolonged; sharp or burning; dermatomal distribution; vesicular rash in area of discomfort.

Psychological
• Emotional and psychiatric conditions: variable, may be fleeting or prolonged; variable, often manifests as tightness and dyspnea with feeling of …; variable, may be retrosternal; situational factors may precipitate symptoms; history of panic attacks.

Ischemic heart disease is most commonly caused by atheromatous plaque that obstructs one or more of the epicardial coronary arteries. Stable ischemic heart disease (Chap. 293) usually results from the gradual atherosclerotic narrowing of the coronary arteries. Stable angina is characterized by ischemic episodes that are typically precipitated by a superimposed increase in oxygen demand during physical exertion and relieved upon resting. Ischemic heart disease becomes unstable most commonly when rupture or erosion of one or more atherosclerotic lesions triggers coronary thrombosis (Chap. 291e). Unstable ischemic heart disease is classified clinically by the presence or absence of detectable myocardial injury and the presence or absence of ST-segment elevation on the patient's electrocardiogram (ECG). When acute coronary atherothrombosis occurs, the intracoronary thrombus may be partially obstructive, generally leading to myocardial ischemia in the absence of ST-segment elevation. Marked by ischemic symptoms at rest, with minimal activity, or in an accelerating pattern, unstable ischemic heart disease is classified as unstable angina when there is no detectable myocardial injury and as non–ST elevation MI (NSTEMI) when there is evidence of myocardial necrosis (Chap. 294). When the coronary thrombus is acutely and completely occlusive, transmural myocardial ischemia usually ensues, with ST-segment elevation on the ECG and myocardial necrosis leading to a diagnosis of ST elevation MI (STEMI, see Chap. 295).
Clinicians should be aware that unstable ischemic symptoms may also occur predominantly because of increased myocardial oxygen demand (e.g., during intense psychological stress or fever) or because of decreased oxygen delivery due to anemia, hypoxia, or hypotension. However, the term acute coronary syndrome, which encompasses unstable angina, NSTEMI, and STEMI, is in general reserved for ischemia precipitated by acute coronary atherothrombosis. In order to guide therapeutic strategies, a standardized system for classification of MI has been expanded to discriminate MI resulting from acute coronary thrombosis (type 1) from MI occurring secondary to other imbalances of myocardial oxygen supply and demand (type 2; see Chap. 294). Other contributors to stable and unstable ischemic heart disease, such as endothelial dysfunction, microvascular disease, and vasospasm, may exist alone or in combination with coronary atherosclerosis and may be the dominant cause of myocardial ischemia in some patients. Moreover, non-atherosclerotic processes, including congenital abnormalities of the coronary vessels, myocardial bridging, coronary arteritis, and radiation-induced coronary disease, can lead to coronary obstruction. In addition, conditions associated with extreme myocardial oxygen demand and impaired endocardial blood flow, such as aortic valve disease (Chap. 301), hypertrophic cardiomyopathy, or idiopathic dilated cardiomyopathy (Chap. 287), can precipitate myocardial ischemia in patients with or without underlying obstructive atherosclerosis. Characteristics of Ischemic Chest Discomfort The clinical characteristics of angina pectoris, often referred to simply as “angina,” are highly similar whether the ischemic discomfort is a manifestation of stable ischemic heart disease, unstable angina, or MI; the exceptions are differences in the pattern and duration of symptoms associated with these syndromes (Table 19-1). 
Heberden initially described angina as a sense of “strangling and anxiety.” Chest discomfort characteristic of myocardial ischemia is typically described as aching, heavy, squeezing, crushing, or constricting. However, in a substantial minority of patients, the quality of discomfort is extremely vague and may be described as a mild tightness, or merely an uncomfortable feeling, that sometimes is experienced as numbness or a burning sensation. The site of the discomfort is usually retrosternal, but radiation is common and generally occurs down the ulnar surface of the left arm; the right arm, both arms, neck, jaw, or shoulders may also be involved. These and other characteristics of ischemic chest discomfort pertinent to discrimination from other causes of chest pain are discussed later in this chapter (see “Approach to the Patient”). Stable angina usually begins gradually and reaches its maximal intensity over a period of minutes before dissipating within several minutes with rest or with nitroglycerin. The discomfort typically occurs predictably at a characteristic level of exertion or psychological stress. By definition, unstable angina is manifest by self-limited anginal chest discomfort that is exertional but occurs at increased frequency with progressively lower intensity of physical activity or even at rest. Chest discomfort associated with MI is typically more severe, is prolonged (usually lasting ≥30 min), and is not relieved by rest. Mechanisms of Cardiac Pain The neural pathways involved in ischemic cardiac pain are poorly understood. Ischemic episodes are thought to excite local chemosensitive and mechanoreceptive receptors that, in turn, stimulate release of adenosine, bradykinin, and other substances that activate the sensory ends of sympathetic and vagal afferent fibers. The afferent fibers traverse the nerves that connect to the upper five thoracic sympathetic ganglia and upper five distal thoracic roots of the spinal cord. 
From there, impulses are transmitted to the thalamus. Within the spinal cord, cardiac sympathetic afferent impulses may converge with impulses from somatic thoracic structures, and this convergence may be the basis for referred cardiac pain. In addition, cardiac vagal afferent fibers synapse in the nucleus tractus solitarius of the medulla and then descend to the upper cervical spinothalamic tract, and this route may contribute to anginal pain experienced in the neck and jaw.

OTHER CARDIOPULMONARY CAUSES

Pericardial and Other Myocardial Diseases (See also Chap. 288) Inflammation of the pericardium due to infectious or noninfectious causes can be responsible for acute or chronic chest discomfort. The visceral surface and most of the parietal surface of the pericardium are insensitive to pain. Therefore, the pain of pericarditis is thought to arise principally from associated pleural inflammation and is more common with infectious causes of pericarditis, which typically involve the pleura. Because of this pleural association, the discomfort of pericarditis is usually pleuritic pain that is exacerbated by breathing, coughing, or changes in position. Moreover, owing to the overlapping sensory supply of the central diaphragm via the phrenic nerve with somatic sensory fibers originating in the third to fifth cervical segments, the pain of pleural pericarditis is often referred to the shoulder and neck. Involvement of the pleural surface of the lateral diaphragm can lead to pain in the upper abdomen. Acute inflammatory and other non-ischemic myocardial diseases can also produce chest discomfort. The symptoms of Takotsubo (stress-related) cardiomyopathy often start abruptly with chest pain and shortness of breath.
This form of cardiomyopathy, in its most recognizable form, is triggered by an emotionally or physically stressful event and may mimic acute MI because of its commonly associated ECG abnormalities, including ST-segment elevation, and elevated biomarkers of myocardial injury. Observational studies support a predilection for women >50 years of age. The symptoms of acute myocarditis are highly varied. Chest discomfort may either originate with inflammatory injury of the myocardium or be due to severe increases in wall stress related to poor ventricular performance. Diseases of the Aorta (See also Chap. 301) Acute aortic dissection (Fig. 19-1) is a less common cause of chest discomfort but is important because of the catastrophic natural history of certain subsets of cases when recognized late or left untreated. Acute aortic syndromes encompass a spectrum of acute aortic diseases related to disruption of the media of the aortic wall. Aortic dissection involves a tear in the aortic intima, resulting in separation of the media and creation of a separate “false” lumen. A penetrating ulcer has been described as ulceration of an aortic atheromatous plaque that extends through the intima and into the aortic media, with the potential to initiate an intramedial dissection or rupture into the adventitia. Intramural hematoma is an aortic wall hematoma with no demonstrable intimal flap, no radiologically apparent intimal tear, and no false lumen. Intramural hematoma can occur due to either rupture of the vasa vasorum or, less commonly, a penetrating ulcer. Each of these subtypes of acute aortic syndrome typically presents with chest discomfort that is often severe, sudden in onset, and sometimes described as “tearing” in quality. Acute aortic syndromes involving the ascending aorta tend to cause pain in the midline of the anterior chest, whereas descending aortic syndromes most often present with pain in the back. 
Therefore, dissections that begin in the ascending aorta and extend to the descending aorta tend to cause pain in the front of the chest that extends toward the back, between the shoulder blades. Proximal aortic dissections that involve the ascending aorta (type A in the Stanford nomenclature) are at high risk for major complications that may influence the clinical presentation, including (1) compromise of the aortic ostia of the coronary arteries, resulting in MI; (2) disruption of the aortic valve, causing acute aortic insufficiency; and (3) rupture of the hematoma into the pericardial space, leading to pericardial tamponade. Knowledge of the epidemiology of acute aortic syndromes can be helpful in maintaining awareness of this relatively uncommon group of disorders (estimated annual incidence, 3 cases per 100,000 population). Nontraumatic aortic dissections are very rare in the absence of hypertension or conditions associated with deterioration of the elastic or muscular components of the aortic media, including pregnancy, bicuspid aortic disease, or inherited connective tissue diseases, such as Marfan and Ehlers-Danlos syndromes. Although aortic aneurysms are most often asymptomatic, thoracic aortic aneurysms can cause chest pain and other symptoms by compressing adjacent structures. This pain tends to be steady, deep, and occasionally severe. Aortitis, whether of noninfectious or infectious etiology, in the absence of aortic dissection is a rare cause of chest or back discomfort. Pulmonary Conditions Pulmonary and pulmonary-vascular conditions that cause chest discomfort usually do so in conjunction with dyspnea and often produce symptoms that have a pleuritic nature. PULMONARY EMBOLISM (See also Chap. 300) Pulmonary emboli (annual incidence, ~1 per 1000) can produce dyspnea and chest discomfort that is sudden in onset. 
Typically pleuritic in pattern, the chest discomfort associated with pulmonary embolism may result from (1) involvement of the pleural surface of the lung adjacent to a resultant pulmonary infarction; (2) distention of the pulmonary artery; or (3) possibly, right ventricular wall stress and/or subendocardial ischemia related to acute pulmonary hypertension. The pain associated with small pulmonary emboli is often lateral and pleuritic and is believed to be related to the first of these three possible mechanisms. In contrast, massive pulmonary emboli may cause severe substernal pain that may mimic an MI and that is plausibly attributed to the second and third of these potential mechanisms. Massive or submassive pulmonary embolism may also be associated with syncope, hypotension, and signs of right heart failure. Other typical characteristics that aid in the recognition of pulmonary embolism are discussed later in this chapter (see “Approach to the Patient”). PNEUMOTHORAX (See also Chap. 317) Primary spontaneous pneumothorax is a rare cause of chest discomfort, with an estimated annual incidence in the United States of 7 per 100,000 among men and <2 per 100,000 among women. Risk factors include male sex, smoking, family history, and Marfan syndrome. The symptoms are usually sudden in onset, and dyspnea may be mild; thus, presentation to medical attention is sometimes delayed. Secondary spontaneous pneumothorax may occur in patients with underlying lung disorders, such as chronic obstructive pulmonary disease, asthma, or cystic fibrosis, and usually produces symptoms that are more severe. Tension pneumothorax is a medical emergency caused by trapped intrathoracic air that precipitates hemodynamic collapse. Other Pulmonary Parenchymal, Pleural, or Vascular Disease (See also Chaps. 304, 305, and 316) Most pulmonary diseases that produce chest pain, including pneumonia and malignancy, do so because of involvement of the pleura or surrounding structures. 
Pleurisy is typically described as a knifelike pain that is worsened by inspiration or coughing. In contrast, chronic pulmonary hypertension can manifest as chest pain that may be very similar to angina in its characteristics, suggesting right ventricular myocardial ischemia in some cases. Reactive airways diseases similarly can cause chest tightness associated with breathlessness rather than pleurisy.

NON-CARDIOPULMONARY CAUSES

Gastrointestinal Conditions (See also Chap. 344) Gastrointestinal disorders are the most common cause of nontraumatic chest discomfort and often produce symptoms that are difficult to discern from more serious causes of chest pain, including myocardial ischemia. Esophageal disorders, in particular, may simulate angina in the character and location of the pain. Gastroesophageal reflux and disorders of esophageal motility are common and should be considered in the differential diagnosis of chest pain (Fig. 19-1 and Table 19-1). Acid reflux often causes a burning discomfort. The pain of esophageal spasm, in contrast, is commonly an intense, squeezing discomfort that is retrosternal in location and, like angina, may be relieved by nitroglycerin or dihydropyridine calcium channel antagonists. Chest pain can also result from injury to the esophagus, such as a Mallory-Weiss tear or even an esophageal rupture (Boerhaave syndrome) caused by severe vomiting. Peptic ulcer disease is most commonly epigastric in location but can radiate into the chest (Table 19-1). Hepatobiliary disorders, including cholecystitis and biliary colic, may mimic acute cardiopulmonary diseases. Although the pain arising from these disorders usually localizes to the right upper quadrant of the abdomen, it is variable and may be felt in the epigastrium and radiate to the back and lower chest. This discomfort is sometimes referred to the scapula or may in rare cases be felt in the shoulder, suggesting diaphragmatic irritation.
The pain is steady, usually lasts several hours, and subsides spontaneously, without symptoms between attacks. Pain resulting from pancreatitis is typically aching epigastric pain that radiates to the back.

Musculoskeletal and Other Causes (See also Chap. 393) Chest discomfort can be produced by any musculoskeletal disorder involving the chest wall or the nerves of the chest wall, neck, or upper limbs. Costochondritis causing tenderness of the costochondral junctions (Tietze's syndrome) is relatively common. Cervical radiculitis may manifest as a prolonged or constant aching discomfort in the upper chest and limbs. The pain may be exacerbated by motion of the neck. Occasionally, chest pain can be caused by compression of the brachial plexus by the cervical ribs, and tendinitis or bursitis involving the left shoulder may mimic the radiation of angina. Pain in a dermatomal distribution can also be caused by cramping of intercostal muscles or by herpes zoster (Chap. 217).

Emotional and Psychiatric Conditions As many as 10% of patients who present to emergency departments with acute chest discomfort have a panic disorder or related condition (Table 19-1). The symptoms may include chest tightness or aching that is associated with a sense of anxiety and difficulty breathing. The symptoms may be prolonged or fleeting.

APPROACH TO THE PATIENT: Chest Discomfort

Given the broad set of potential causes and the heterogeneous risk of serious complications in patients who present with acute nontraumatic chest discomfort, the priorities of the initial clinical encounter include assessment of (1) the patient's clinical stability and (2) the probability that the patient has an underlying cause of the discomfort that may be life-threatening. The high-risk conditions of principal concern are acute cardiopulmonary processes, including ACS, acute aortic syndrome, pulmonary embolism, tension pneumothorax, and pericarditis with tamponade.
Among non-cardiopulmonary causes of chest pain, esophageal rupture likely holds the greatest urgency for diagnosis. Patients with these conditions may deteriorate rapidly despite initially appearing well. The remaining population with non-cardiopulmonary conditions has a more favorable prognosis during completion of the diagnostic work-up. A rapid targeted assessment for a serious cardiopulmonary cause is of particular relevance for patients with acute ongoing pain who have presented for emergency evaluation. Among patients presenting in the outpatient setting with chronic pain or pain that has resolved, a general diagnostic assessment is reasonably undertaken (see “Outpatient Evaluation of Chest Discomfort,” below). A series of questions that can be used to structure the clinical evaluation of patients with chest discomfort is shown in Table 19-2:

1. Could the chest discomfort be due to an acute, potentially life-threatening condition that warrants urgent evaluation and management?
2. If not, could the discomfort be due to a chronic condition likely to lead to serious complications?
3. If not, could the discomfort be due to an acute condition that warrants specific treatment?
4. If not, could the discomfort be due to another treatable chronic condition? For example:

   Esophageal reflux                     Cervical disk disease
   Esophageal spasm                      Arthritis of the shoulder or spine
   Peptic ulcer disease                  Costochondritis
   Gallbladder disease                   Other musculoskeletal disorders
   Other gastrointestinal conditions     Anxiety state

Source: Developed by Dr. Thomas H. Lee for the 18th edition of Harrison’s Principles of Internal Medicine.

The evaluation of nontraumatic chest discomfort relies heavily on the clinical history and physical examination to direct subsequent diagnostic testing. The evaluating clinician should assess the quality, location (including radiation), and pattern (including onset and duration) of the pain as well as any provoking or alleviating factors.
The presence of associated symptoms may also be useful in establishing a diagnosis. Quality of Pain The quality of chest discomfort alone is never sufficient to establish a diagnosis. However, the characteristics of the pain are pivotal in formulating an initial clinical impression and assessing the likelihood of a serious cardiopulmonary process (Table 19-1), including ACS in particular (Fig. 19-2). Pressure or tightness is consistent with a typical presentation of myocardial ischemic pain. Nevertheless, the clinician must remember that some patients with ischemic chest symptoms deny any “pain” but rather complain of dyspnea or a vague sense of anxiety. The severity of the discomfort has poor diagnostic accuracy. It is often helpful to ask about the similarity of the discomfort to previous definite ischemic symptoms. It is unusual for angina to be sharp, as in knifelike, stabbing, or pleuritic; however, patients sometimes use the word “sharp” to convey the intensity of discomfort rather than the quality. Pleuritic discomfort is suggestive of a process involving the pleura, including pericarditis, pulmonary embolism, or pulmonary parenchymal processes. Less frequently, the pain of pericarditis or massive pulmonary embolism is a steady severe pressure or aching that can be difficult to discriminate from myocardial ischemia. “Tearing” or “ripping” pain is often described by patients with acute aortic dissection. However, acute aortic emergencies also present commonly with severe, knifelike pain. A burning quality can suggest acid reflux or peptic ulcer disease but may also occur with myocardial ischemia. Esophageal pain, particularly with spasm, can be a severe squeezing discomfort identical to angina. Location of Discomfort A substernal location with radiation to the neck, jaw, shoulder, or arms is typical of myocardial ischemic discomfort. Some patients present with aching in sites of radiated pain as their only symptoms of ischemia.
However, pain that is highly localized—e.g., that which can be demarcated by the tip of one finger—is highly unusual for angina. A retrosternal location should prompt consideration of esophageal pain; however, other gastrointestinal conditions usually present with pain that is most intense in the abdomen or epigastrium, with possible radiation into the chest. Angina may also occur in an epigastric location. However, pain that occurs solely above the mandible or below the epigastrium is rarely angina. Severe pain radiating to the back, particularly between the shoulder blades, should prompt consideration of an acute aortic syndrome. Radiation to the trapezius ridge is characteristic of pericardial pain and does not usually occur with angina. Pattern Myocardial ischemic discomfort usually builds over minutes and is exacerbated by activity and mitigated by rest. In contrast, pain that reaches its peak intensity immediately is more suggestive of aortic dissection, pulmonary embolism, or spontaneous pneumothorax. Pain that is fleeting (lasting only a few seconds) is rarely ischemic in origin. Similarly, pain that is constant in intensity for a prolonged period (many hours to days) is unlikely to represent myocardial ischemia if it occurs in the absence of other clinical consequences, such as abnormalities of the ECG, elevation of cardiac biomarkers, or clinical sequelae (e.g., heart failure or hypotension). Both myocardial ischemia and acid reflux may have their onset in the morning, the latter because of the absence of food to absorb gastric acid. Provoking and Alleviating Factors Patients with myocardial ischemic pain usually prefer to rest, sit, or stop walking. However, clinicians should be aware of the phenomenon of “warm-up angina” in which some patients experience relief of angina as they continue at the same or even a greater level of exertion without symptoms (Chap. 293). 
Alterations in the intensity of pain with changes in position or movement of the upper extremities and neck are less likely with myocardial ischemia and suggest a musculoskeletal etiology. The pain of pericarditis, however, often is worse in the supine position and relieved by sitting upright and leaning forward. Gastroesophageal reflux may be exacerbated by alcohol, some foods, or by a reclined position. Relief can occur with sitting.

[FIGURE 19-2 Association of chest pain characteristics with the probability of acute myocardial infarction (AMI). Characteristics plotted include radiation to the right arm or shoulder, radiation to both arms or shoulders, association with exertion, radiation to the left arm, association with diaphoresis, association with nausea or vomiting, pain worse than previous angina or similar to a previous MI, description as pressure, inframammary location, reproducibility with palpation, and description as sharp, positional, or pleuritic. (Figure prepared from data in CJ Swap, JT Nagurney: JAMA 294:2623, 2005.)]

Exacerbation by eating suggests a gastrointestinal etiology such as peptic ulcer disease, cholecystitis, or pancreatitis. Peptic ulcer disease tends to become symptomatic 60–90 min after meals. However, in the setting of severe coronary atherosclerosis, redistribution of blood flow to the splanchnic vasculature after eating can trigger postprandial angina. The discomfort of acid reflux and peptic ulcer disease is usually diminished promptly by acid-reducing therapies. In contrast with its impact in some patients with angina, physical exertion is very unlikely to alter symptoms from gastrointestinal causes of chest pain. Relief of chest discomfort within minutes after administration of nitroglycerin is suggestive of but not sufficiently sensitive or specific for a definitive diagnosis of myocardial ischemia. Esophageal spasm may also be relieved promptly with nitroglycerin.
A delay of >10 min before relief is obtained after nitroglycerin suggests that the symptoms either are not caused by ischemia or are caused by severe ischemia, such as during acute MI. Associated Symptoms Symptoms that accompany myocardial ischemia may include diaphoresis, dyspnea, nausea, fatigue, faintness, and eructations. In addition, these symptoms may exist in isolation as anginal equivalents (i.e., symptoms of myocardial ischemia other than typical angina), particularly in women and the elderly. Dyspnea may occur with multiple conditions considered in the differential diagnosis of chest pain and thus is not discriminative, but the presence of dyspnea is important because it suggests a cardiopulmonary etiology. Sudden onset of significant respiratory distress should lead to consideration of pulmonary embolism and spontaneous pneumothorax. Hemoptysis may occur with pulmonary embolism or, as blood-tinged frothy sputum, in severe heart failure, but usually points toward a pulmonary parenchymal etiology of chest symptoms. Presentation with syncope or pre-syncope should prompt consideration of hemodynamically significant pulmonary embolism or aortic dissection as well as ischemic arrhythmias. Although nausea and vomiting suggest a gastrointestinal disorder, these symptoms may occur in the setting of MI (more commonly inferior MI), presumably because of activation of the vagal reflex or stimulation of left ventricular receptors as part of the Bezold-Jarisch reflex. Past Medical History The past medical history is useful in assessing the patient for risk factors for coronary atherosclerosis (Chap. 291e) and venous thromboembolism (Chap. 300) as well as for conditions that may predispose the patient to specific disorders. For example, a history of connective tissue diseases such as Marfan syndrome should heighten the clinician’s suspicion of an acute aortic syndrome or spontaneous pneumothorax. A careful history may elicit clues about depression or prior panic attacks.
In addition to providing an initial assessment of the patient’s clinical stability, the physical examination of patients with chest discomfort can provide direct evidence of specific etiologies of chest pain (e.g., unilateral absence of lung sounds) and can identify potential precipitants of acute cardiopulmonary causes of chest pain (e.g., uncontrolled hypertension), relevant comorbid conditions (e.g., obstructive pulmonary disease), and complications of the presenting syndrome (e.g., heart failure). However, because the findings on physical examination may be normal in patients with unstable ischemic heart disease, an unremarkable physical exam is not definitively reassuring. General The patient’s general appearance is helpful in establishing an initial impression of the severity of illness. Patients with acute MI or other acute cardiopulmonary disorders often appear anxious, uncomfortable, pale, cyanotic, or diaphoretic. Patients who are massaging or clutching their chests may describe their pain with a clenched fist held against the sternum (Levine’s sign). Occasionally, body habitus is helpful—e.g., in patients with Marfan syndrome or the prototypical young, tall, thin man with spontaneous pneumothorax. Vital Signs Significant tachycardia and hypotension are indicative of important hemodynamic consequences of the underlying cause of chest discomfort and should prompt a rapid survey for the most severe conditions, such as acute MI with cardiogenic shock, massive pulmonary embolism, pericarditis with tamponade, or tension pneumothorax. Acute aortic emergencies usually present with severe hypertension but may be associated with profound hypotension when there is coronary arterial compromise or dissection into the pericardium. Sinus tachycardia is an important manifestation of submassive pulmonary embolism. Tachypnea and hypoxemia point toward a pulmonary cause.
The presence of low-grade fever is nonspecific because it may occur with MI and with thromboembolism in addition to infection. Pulmonary Examination of the lungs may localize a primary pulmonary cause of chest discomfort, as in cases of pneumonia, asthma, or pneumothorax. Left ventricular dysfunction from severe ischemia/infarction as well as acute valvular complications of MI or aortic dissection can lead to pulmonary edema, which is an indicator of high risk. Cardiac The jugular venous pulse is often normal in patients with acute myocardial ischemia but may reveal characteristic patterns with pericardial tamponade or acute right ventricular dysfunction (Chaps. 267 and 288). Cardiac auscultation may reveal a third or, more commonly, a fourth heart sound, reflecting myocardial systolic or diastolic dysfunction. Murmurs of mitral regurgitation or a harsh murmur of a ventricular-septal defect may indicate mechanical complications of STEMI. A murmur of aortic insufficiency may be a complication of proximal aortic dissection. Other murmurs may reveal underlying cardiac disorders contributory to ischemia (e.g., aortic stenosis or hypertrophic cardiomyopathy). Pericardial friction rubs reflect pericardial inflammation. Abdominal Localizing tenderness on the abdominal exam is useful in identifying a gastrointestinal cause of the presenting syndrome. Abdominal findings are infrequent with purely acute cardiopulmonary problems, except in the case of underlying chronic cardiopulmonary disease or severe right ventricular dysfunction leading to hepatic congestion. Vascular Pulse deficits may reflect underlying chronic atherosclerosis, which increases the likelihood of coronary artery disease. However, evidence of acute limb ischemia with loss of the pulse and pallor, particularly in the upper extremities, can indicate catastrophic consequences of aortic dissection. Unilateral lower-extremity swelling should raise suspicion about venous thromboembolism. 
Musculoskeletal Pain arising from the costochondral and chondrosternal articulations may be associated with localized swelling, redness, or marked localized tenderness. Pain on palpation of these joints is usually well localized and is a useful clinical sign, though deep palpation may elicit pain in the absence of costochondritis. Although palpation of the chest wall often elicits pain in patients with various musculoskeletal conditions, it should be appreciated that chest wall tenderness does not exclude myocardial ischemia. Sensory deficits in the upper extremities may be indicative of cervical disk disease. Electrocardiography is crucial in the evaluation of nontraumatic chest discomfort. The ECG is pivotal for identifying patients with ongoing ischemia as the principal reason for their presentation as well as secondary cardiac complications of other disorders. Professional society guidelines recommend that an ECG be obtained within 10 min of presentation, with the primary goal of identifying patients with ST-segment elevation diagnostic of MI who are candidates for immediate interventions to restore flow in the occluded coronary artery. ST-segment depression and symmetric T-wave inversions at least 0.2 mV in depth are useful for detecting myocardial ischemia in the absence of STEMI and are also indicative of higher risk of death or recurrent ischemia. Serial performance of ECGs (every 30–60 min) is recommended in the ED evaluation of suspected ACS. In addition, an ECG with right-sided lead placement should be considered in patients with clinically suspected ischemia and a nondiagnostic standard 12-lead ECG. Despite the value of the resting ECG, its sensitivity for ischemia is poor—as low as 20% in some studies. Abnormalities of the ST segment and T wave may occur in a variety of conditions, including pulmonary embolism, ventricular hypertrophy, acute and chronic pericarditis, myocarditis, electrolyte imbalance, and metabolic disorders. 
Notably, hyperventilation associated with panic disorder can also lead to nonspecific ST and T-wave abnormalities. Pulmonary embolism is most often associated with sinus tachycardia but can also lead to rightward shift of the ECG axis, manifesting as an S-wave in lead I, with a Q-wave and an inverted T-wave in lead III (Chaps. 268 and 300). In patients with ST-segment elevation, the presence of diffuse lead involvement not corresponding to a specific coronary anatomic distribution and PR-segment depression can aid in distinguishing pericarditis from acute MI. (See Chap. 308e) Plain radiography of the chest is performed routinely when patients present with acute chest discomfort and selectively when individuals who are being evaluated as outpatients have subacute or chronic pain. The chest radiograph is most useful for identifying pulmonary processes, such as pneumonia or pneumothorax. Findings are often unremarkable in patients with ACS, but pulmonary edema may be evident. Other specific findings include widening of the mediastinum in some patients with aortic dissection, Hampton’s hump or Westermark’s sign in patients with pulmonary embolism (Chaps. 300 and 308e), or pericardial calcification in chronic pericarditis. Laboratory testing in patients with acute chest pain is focused on the detection of myocardial injury. Such injury can be detected by the presence of circulating proteins released from damaged myocardial cells. Owing to the time necessary for this release, initial biomarkers of injury may be in the normal range, even in patients with STEMI. Because of superior cardiac tissue-specificity compared with creatine kinase MB, cardiac troponin is the preferred biomarker for the diagnosis of MI and should be measured in all patients with suspected ACS at presentation and repeated in 3–6 h. Testing after 6 h is required only when there is uncertainty regarding the onset of pain or when stuttering symptoms have occurred.
It is not necessary or advisable to measure troponin in patients without suspicion of ACS unless this test is being used specifically for risk stratification (e.g., in pulmonary embolism or heart failure). The development of cardiac troponin assays with progressively greater analytical sensitivity has facilitated detection of substantially lower blood concentrations of troponin than was previously possible. This evolution permits earlier detection of myocardial injury, enhances the overall accuracy of a diagnosis of MI, and improves risk stratification in suspected ACS. The greater negative predictive value of a negative troponin result with current-generation assays is an advantage in the evaluation of chest pain in the ED. Rapid rule-out protocols that use serial testing and changes in troponin concentration over as short a period as 1–2 h appear promising and remain under investigation. However, with these advantages has come a trade-off: myocardial injury is detected in a larger proportion of patients who have non-ACS cardiopulmonary conditions than with previous, less sensitive assays. This evolution in testing for myocardial necrosis has rendered other aspects of the clinical evaluation critical to the practitioner’s determination of the probability that the symptoms represent ACS. In addition, observation of a change in cardiac troponin concentration between serial samples is useful in discriminating acute causes of myocardial injury from chronic elevation due to underlying structural heart disease, end-stage renal disease, or interfering antibodies. The diagnosis of MI is reserved for acute myocardial injury that is marked by a rising and/or falling pattern—with at least one value exceeding the 99th percentile reference limit—and that is caused by ischemia. Other non-ischemic insults, such as myocarditis, may result in myocardial injury but should not be labeled MI. 
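The diagnostic pattern described above (a rising and/or falling troponin concentration with at least one value above the 99th-percentile reference limit) lends itself to a simple sketch. The code below is purely illustrative, not a clinical algorithm; the function name and the 20% serial-change threshold are assumptions made for the example, not guideline values:

```python
def shows_acute_injury_pattern(troponins, p99, min_rel_change=0.20):
    """troponins: serial concentrations in sampling order;
    p99: the assay's 99th-percentile upper reference limit (same units)."""
    if len(troponins) < 2:
        return False  # a rise/fall pattern needs at least two serial values
    # At least one value must exceed the 99th-percentile reference limit...
    above_limit = any(v > p99 for v in troponins)
    # ...with a rising and/or falling pattern between consecutive samples
    # (here: any consecutive pair differing by >= 20% of the larger value,
    # an illustrative stand-in for an assay-specific significant delta).
    changing = any(abs(b - a) >= min_rel_change * max(a, b)
                   for a, b in zip(troponins, troponins[1:]))
    return above_limit and changing
```

A chronically elevated but flat series (e.g., with end-stage renal disease) exceeds the limit without the dynamic pattern and so does not meet the criterion; as the text notes, an ischemic cause must still be established clinically before the injury is labeled MI.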
Other laboratory assessments may include the D-dimer test to aid in exclusion of pulmonary embolism (Chap. 300). Measurement of a B-type natriuretic peptide is useful when considered in conjunction with the clinical history and exam for the diagnosis of heart failure. B-type natriuretic peptides also provide prognostic information regarding patients with ACS and those with pulmonary embolism. Other putative biomarkers of acute myocardial ischemia or ACS, such as myeloperoxidase, have not been adopted in routine use. Multiple clinical algorithms have been developed to aid in decision-making during the evaluation and disposition of patients with acute nontraumatic chest pain. Such decision-aids have been derived on the basis of their capacity to estimate either of two closely related but not identical probabilities: (1) the probability of a final diagnosis of ACS and (2) the probability of major cardiac events during short-term follow-up. Such decision-aids are used most commonly to identify patients with a low clinical probability of ACS who are candidates either for early provocative testing for ischemia or for discharge from the ED. Goldman and Lee developed one of the first such decision-aids, using only the ECG and risk indicators—hypotension, pulmonary rales, and known ischemic heart disease—to categorize patients into four risk categories ranging from a <1% to a >16% probability of a major cardiovascular complication. The Acute Cardiac Ischemia Time-Insensitive Predictive Instrument (ACI-TIPI) combines age, sex, chest pain presence, and ST-segment abnormalities to define a probability of ACS. More recently developed decision-aids are shown in Fig. 19-3. Elements common to each of these tools are (1) symptoms typical for ACS; (2) older age; (3) risk factors for or known atherosclerosis; (4) ischemic ECG abnormalities; and (5) elevated cardiac troponin levels. 
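Tools such as the HEART score (Fig. 19-3) combine these common elements by simple point arithmetic, with each of the five components scored 0–2 and a total of 0–3 designating low risk. The sketch below is illustrative only, not a clinical tool; the history and ECG points are taken as already-graded inputs, and the function name is an assumption for the example:

```python
def heart_score(history_pts, ecg_pts, age_years, n_risk_factors, troponin_ratio):
    """Illustrative HEART-style tally: five components, each worth 0-2 points.
    history_pts / ecg_pts: clinician-graded 0, 1, or 2.
    troponin_ratio: measured troponin divided by the assay's 99th-percentile limit."""
    age_pts = 2 if age_years >= 65 else (1 if age_years >= 45 else 0)
    risk_pts = 2 if n_risk_factors >= 3 else (1 if n_risk_factors >= 1 else 0)
    trop_pts = 2 if troponin_ratio >= 3 else (1 if troponin_ratio > 1 else 0)
    total = history_pts + ecg_pts + age_pts + risk_pts + trop_pts
    return total, ("low risk" if total <= 3 else "not low risk")
```

For example, a 70-year-old with a moderately suspicious history, a normal ECG, two risk factors, and a normal troponin tallies 1 + 0 + 2 + 1 + 0 = 4 points, i.e., not low risk.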
Although, because of very low specificity, the overall diagnostic performance of such decision-aids is poor (area under the receiver operating characteristic curve, 0.55–0.65), they can help identify patients with a very low probability of ACS (e.g., <1%). Nevertheless, no such decision-aid (or single clinical factor) is sufficiently sensitive and well validated to use as a sole tool for clinical decision-making. Clinicians should differentiate between the algorithms discussed above and risk scores derived for stratification of prognosis (e.g., the TIMI and GRACE risk scores, Chap. 295) in patients who already have an established diagnosis of ACS. The latter risk scores were not designed to be used for diagnostic assessment. Exercise electrocardiography (“stress testing”) is commonly employed for completion of risk stratification of patients who have undergone an initial evaluation that has not revealed a specific cause of chest discomfort and has identified them as being at low or, in selected cases, intermediate risk of ACS. Early exercise testing is safe in patients without high-risk findings after 8–12 h of observation and can assist in refining their prognostic assessment. For example, of low-risk patients who underwent exercise testing in the first 48 h after presentation, those without evidence of ischemia had a 2% rate of cardiac events through 6 months, whereas the rate was 15% among patients with either clear evidence of ischemia or an equivocal result. Patients who are unable to exercise may undergo pharmacological stress testing with either nuclear perfusion imaging or echocardiography. Notably, some experts have deemed the routine use of stress testing for low-risk patients unsupported by direct clinical evidence and a potentially unnecessary source of cost. Professional society guidelines identify ongoing chest pain as a contraindication to stress testing.
[FIGURE 19-3 Examples of decision-aids used in conjunction with serial measurement of cardiac troponin for evaluation of acute chest pain. HEART Score: History (highly suspicious = 2, moderately suspicious = 1, slightly suspicious = 0); ECG (significant ST-depression = 2, nonspecific abnormality = 1, normal = 0); Age (≥65 y = 2, 45 to <65 y = 1, <45 y = 0); Risk factors (≥3 = 2, 1–2 = 1, none = 0); Troponin, serial (≥3× 99th percentile = 2, 1 to <3× 99th percentile = 1, ≤99th percentile = 0); total score 0–3 = low risk, ≥4 = not low risk. North American Chest Pain Rule high-risk criteria (yes/no): typical symptoms for ischemia; acute ischemic ECG changes; age ≥50 y; known coronary artery disease; serial troponin >99th percentile; all “no” = low risk, any “yes” = not low risk. Reported performance, HEART Score vs North American Chest Pain Rule: captured as low-risk, 20.2% vs 4.4%; sensitivity, 99.1% vs 100%; specificity, 25.7% vs 5.6%. (Figure prepared from data in SA Mahler et al: Int J Cardiol 168:795, 2013.)]

In selected patients with persistent pain and nondiagnostic ECG and biomarker data, resting myocardial perfusion images can be obtained; the absence of any perfusion abnormality substantially reduces the likelihood of coronary artery disease. In some centers, early myocardial perfusion imaging is performed as part of a routine strategy for evaluating patients at low or intermediate risk of ACS in parallel with other testing. Management of patients with normal perfusion images can be expedited with earlier discharge and outpatient stress testing, if indicated. Those with abnormal rest perfusion imaging, which cannot discriminate between old and new myocardial defects, must undergo additional in-hospital evaluation. Other noninvasive imaging studies of the chest can be used selectively to provide additional diagnostic and prognostic information on patients with chest discomfort. Echocardiography Echocardiography is not necessarily routine in patients with chest discomfort.
However, in patients with an uncertain diagnosis, particularly those with nondiagnostic ST elevation, ongoing symptoms, or hemodynamic instability, detection of abnormal regional wall motion provides evidence of possible ischemic dysfunction. Echocardiography is diagnostic in patients with mechanical complications of MI or in patients with pericardial tamponade. Transthoracic echocardiography is poorly sensitive for aortic dissection, although an intimal flap may sometimes be detected in the ascending aorta. CT Angiography (See Chap. 270e) CT angiography is emerging as a modality for the evaluation of patients with acute chest discomfort. Coronary CT angiography is a sensitive technique for detection of obstructive coronary disease, particularly in the proximal third of the major epicardial coronary arteries. CT appears to enhance the speed to disposition of patients with a low-intermediate probability for ACS; its major strength is the negative predictive value of a finding of no significant disease. In addition, contrast-enhanced CT can detect focal areas of myocardial injury in the acute setting as decreased areas of enhancement. At the same time, CT angiography can exclude aortic dissection, pericardial effusion, and pulmonary embolism. Balancing factors in the consideration of the emerging role of coronary CT angiography in low-risk patients are radiation exposure and additional testing prompted by nondiagnostic abnormal results. MRI (See Chap. 270e) Cardiac magnetic resonance (CMR) imaging is an evolving, versatile technique for structural and functional evaluation of the heart and the vasculature of the chest. CMR accurately measures ventricular dimensions and function and can be performed as a modality for pharmacologic stress perfusion imaging.
Gadolinium-enhanced CMR can provide early detection of MI, defining areas of myocardial necrosis accurately, and can delineate patterns of myocardial disease that are often useful in discriminating ischemic from non-ischemic myocardial injury. Although usually not practical for the urgent evaluation of acute chest discomfort, CMR can be a useful modality for cardiac structural evaluation of patients with elevated cardiac troponin levels in the absence of definite coronary artery disease. CMR coronary angiography is in its early stages. MRI also permits highly accurate assessment for aortic dissection but is infrequently used as the first test because CT and transesophageal echocardiography are usually more practical. Because of the challenges inherent in reliably identifying the small proportion of patients with serious causes of acute chest discomfort while not exposing the larger number of low-risk patients to unnecessary testing and extended ED or hospital evaluations, many medical centers have adopted critical pathways to expedite the assessment and management of patients with nontraumatic chest pain, often in dedicated chest pain units. Such pathways are generally aimed at (1) rapid identification, triage, and treatment of high-risk cardiopulmonary conditions (e.g., STEMI); (2) accurate identification of low-risk patients who can be safely observed in units with less intensive monitoring, undergo early exercise testing, or be discharged home; and (3) through more efficient and systematic accelerated diagnostic protocols, safe reduction in costs associated with overuse of testing and unnecessary hospitalizations. In some studies, provision of protocol-driven care in chest pain units has decreased costs and overall duration of hospital evaluation with no detectable excess of adverse clinical outcomes. Chest pain is common in outpatient practice, with a lifetime prevalence of 20–40% in the general population.
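Pretest probability drives what a test result means in this setting: with identical sensitivity and specificity, a test's positive predictive value falls sharply as disease prevalence falls, while its negative predictive value remains high. A minimal Bayes'-rule sketch makes this concrete (the sensitivity, specificity, and prevalence figures below are hypothetical, chosen only to illustrate the effect):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Post-test probabilities from test characteristics and pretest prevalence."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)  # probability of disease given a positive result
    npv = tn / (tn + fn)  # probability of no disease given a negative result
    return ppv, npv

# The same hypothetical test (99% sensitive, 30% specific) at two prevalences:
ppv_ed, npv_ed = predictive_values(0.99, 0.30, 0.15)          # ED-like prevalence
ppv_office, npv_office = predictive_values(0.99, 0.30, 0.02)  # office-like prevalence
```

With these illustrative numbers, the positive predictive value falls from roughly 20% at the higher (ED-like) prevalence to under 3% at the lower (office-like) prevalence, while the negative predictive value stays above 99% in both settings.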
More than 25% of patients with MI have had a related visit with a primary care physician in the previous month. The diagnostic principles are the same as in the ED. However, the pretest probability of an acute cardiopulmonary cause is significantly lower. Therefore, testing paradigms are less intense, with an emphasis on the history, physical examination, and ECG. Moreover, decision-aids developed for settings with a high prevalence of significant cardiopulmonary disease have lower positive predictive value when applied in the practitioner’s office. However, in general, if the level of clinical suspicion of ACS is sufficiently high to consider troponin testing, the patient should be referred to the ED for evaluation. Abdominal Pain Danny O. Jacobs, William Silen Correctly interpreting acute abdominal pain can be quite challenging. Few clinical situations require greater judgment, because the most catastrophic of events may be forecast by the subtlest of symptoms and signs. In every instance, the clinician must distinguish those conditions that require urgent intervention from those that do not and can best be managed nonoperatively. A meticulously executed, detailed history and physical examination are critically important for focusing the differential diagnosis, where necessary, and allowing the diagnostic evaluation to proceed expeditiously (Table 20-1). The etiologic classification in Table 20-2, although not complete, provides a useful framework for evaluating patients with abdominal pain. The most common causes of abdominal pain on admission are acute appendicitis, nonspecific abdominal pain, pain of urologic origin, and intestinal obstruction. A diagnosis of “acute or surgical abdomen” is not acceptable because of its often misleading and erroneous connotations. Most patients who present with acute abdominal pain will have self-limited disease processes.
However, it is important to remember that pain severity does not necessarily correlate with the severity of the underlying condition. The most obvious of “acute abdomens” may not require operative intervention, and the mildest of abdominal pains may herald an urgently correctable lesion. Any patient with abdominal pain of recent onset requires early and thorough evaluation and accurate diagnosis.

SOME KEY COMPONENTS OF THE PATIENT’S HISTORY
Age
Time and mode of onset of the pain
Pain characteristics
Duration of symptoms
Location of pain and sites of radiation
Associated symptoms and their relationship to the pain
Nausea, emesis, and anorexia
Diarrhea, constipation, or other changes in bowel habits
Menstrual history

SOME MECHANISMS OF PAIN ORIGINATING IN THE ABDOMEN Inflammation of the Parietal Peritoneum The pain of parietal peritoneal inflammation is steady and aching in character and is located directly over the inflamed area, its exact reference being possible because it is transmitted by somatic nerves supplying the parietal peritoneum. The intensity of the pain is dependent on the type and amount of material to which the peritoneal surfaces are exposed in a given time period. For example, the sudden release into the peritoneal cavity of a small quantity of sterile acid gastric juice causes much more pain than the same amount of grossly contaminated neutral feces. Enzymatically active pancreatic juice incites more pain and inflammation than does the same amount of sterile bile containing no potent enzymes. Blood is normally only a mild irritant and the response to urine can be bland, so exposure of blood and urine to the peritoneal cavity may go unnoticed unless it is sudden and massive. Bacterial contamination, such as may occur with pelvic inflammatory disease or perforated distal intestine, causes low-intensity pain until multiplication causes a significant amount of inflammatory mediators to be released.
Patients with perforated upper gastrointestinal ulcers may present entirely differently depending on how quickly gastric juices enter the peritoneal cavity. Thus, the rate at which any inflammatory material irritates the peritoneum is important. The pain of peritoneal inflammation is invariably accentuated by pressure or changes in tension of the peritoneum, whether produced by palpation or by movement such as with coughing or sneezing. The patient with peritonitis characteristically lies quietly in bed, preferring to avoid motion, in contrast to the patient with colic, who may be thrashing in discomfort. Another characteristic feature of peritoneal irritation is tonic reflex spasm of the abdominal musculature, localized to the involved body segment. Its intensity depends on the integrity of the nervous system, the location of the inflammatory process, and the rate at which it develops. Spasm over a perforated retrocecal appendix or perforation into the lesser peritoneal sac may be minimal or absent because of the protective effect of overlying viscera. Catastrophic abdominal emergencies may be associated with minimal or no detectable pain or muscle spasm in obtunded, seriously ill, debilitated, immunosuppressed, or psychotic patients. A slowly developing process also often greatly attenuates the degree of muscle spasm. Obstruction of Hollow Viscera Intraluminal obstruction classically elicits intermittent or colicky abdominal pain that is not as well localized as the pain of parietal peritoneal irritation. However, the absence of cramping discomfort should not be misleading because distention of a hollow viscus may also produce steady pain with only rare paroxysms. Small-bowel obstruction often presents as poorly localized, intermittent periumbilical or supraumbilical pain. As the intestine progressively dilates and loses muscular tone, the colicky nature of the pain may diminish. 
Table 20-2 (excerpt): Pain Originating in the Abdomen
- Mechanical obstruction of hollow viscera: obstruction of the small or large intestine; obstruction of the biliary tree; obstruction of the ureter
- Abdominal wall: distortion or traction of mesentery; trauma or infection of muscles
- Distension of visceral surfaces, e.g., by hemorrhage: hepatic or renal capsules

With superimposed strangulating obstruction, pain may spread to the lower lumbar region if there is traction on the root of the mesentery. The colicky pain of colonic obstruction is of lesser intensity, is commonly located in the infraumbilical area, and may often radiate to the lumbar region. Sudden distention of the biliary tree produces a steady rather than colicky type of pain; hence, the term biliary colic is misleading. Acute distention of the gallbladder usually causes pain in the right upper quadrant with radiation to the right posterior region of the thorax or to the tip of the right scapula, but it is also not uncommonly found near the midline. Distention of the common bile duct often causes epigastric pain that may radiate to the upper lumbar region. Considerable variation is common, however, so that differentiation between these may be impossible. The typical subscapular pain or lumbar radiation is frequently absent. Gradual dilatation of the biliary tree, as can occur with carcinoma of the head of the pancreas, may cause no pain or only a mild aching sensation in the epigastrium or right upper quadrant. The pain of distention of the pancreatic ducts is similar to that described for distention of the common bile duct but, in addition, is very frequently accentuated by recumbency and relieved by the upright position. Obstruction of the urinary bladder usually causes dull, low-intensity pain in the suprapubic region. Restlessness without specific complaint of pain may be the only sign of a distended bladder in an obtunded patient.
In contrast, acute obstruction of the intravesicular portion of the ureter is characterized by severe suprapubic and flank pain that radiates to the penis, scrotum, or inner aspect of the upper thigh. Obstruction of the ureteropelvic junction manifests as pain near the costovertebral angle, whereas obstruction of the remainder of the ureter is associated with flank pain that often extends into the same side of the abdomen. Vascular Disturbances A frequent misconception is that pain due to intraabdominal vascular disturbances is sudden and catastrophic in nature. Certain disease processes, such as embolism or thrombosis of the superior mesenteric artery or impending rupture of an abdominal aortic aneurysm, can certainly be associated with diffuse, severe pain. Yet, just as frequently, the patient with occlusion of the superior mesenteric artery only has mild continuous or cramping diffuse pain for 2 or 3 days before vascular collapse or findings of peritoneal inflammation appear. The early, seemingly insignificant discomfort is caused by hyperperistalsis rather than peritoneal inflammation. Indeed, absence of tenderness and rigidity in the presence of continuous, diffuse pain (e.g., “pain out of proportion to physical findings”) in a patient likely to have vascular disease is quite characteristic of occlusion of the superior mesenteric artery. Abdominal pain with radiation to the sacral region, flank, or genitalia should always signal the possible presence of a rupturing abdominal aortic aneurysm. This pain may persist over a period of several days before rupture and collapse occur. Abdominal Wall Pain arising from the abdominal wall is usually constant and aching. Movement, prolonged standing, and pressure accentuate the discomfort and associated muscle spasm. In the case of hematoma of the rectus sheath, now most frequently encountered in association with anticoagulant therapy, a mass may be present in the lower quadrants of the abdomen. 
Simultaneous involvement of muscles in other parts of the body usually serves to differentiate myositis of the abdominal wall from other processes that might cause pain in the same region. Pain referred to the abdomen from the thorax, spine, or genitalia may present a vexing diagnostic challenge, because diseases of the upper part of the abdominal cavity such as acute cholecystitis or perforated ulcer may be associated with intrathoracic complications. A most important, yet often forgotten, dictum is that the possibility of intrathoracic disease must be considered in every patient with abdominal pain, especially if the pain is in the upper abdomen. Systematic questioning and examination directed toward detecting myocardial or pulmonary infarction, pneumonia, pericarditis, or esophageal disease (the intrathoracic diseases that most often masquerade as abdominal emergencies) will often provide sufficient clues to establish the proper diagnosis. Diaphragmatic pleuritis resulting from pneumonia or pulmonary infarction may cause pain in the right upper quadrant and pain in the supraclavicular area, the latter radiation to be distinguished from the referred subscapular pain caused by acute distention of the extrahepatic biliary tree. The ultimate decision as to the origin of abdominal pain may require deliberate and planned observation over a period of several hours, during which repeated questioning and examination will provide the diagnosis or suggest the appropriate studies. Referred pain of thoracic origin is often accompanied by splinting of the involved hemithorax with respiratory lag and decrease in excursion more marked than that seen in the presence of intraabdominal disease. In addition, apparent abdominal muscle spasm caused by referred pain will diminish during the inspiratory phase of respiration, whereas it persists throughout both respiratory phases if it is of abdominal origin.
Palpation over the area of referred pain in the abdomen also does not usually accentuate the pain and, in many instances, actually seems to relieve it. Thoracic disease and abdominal disease frequently coexist and may be difficult or impossible to differentiate. For example, the patient with known biliary tract disease often has epigastric pain during myocardial infarction, or biliary colic may be referred to the precordium or left shoulder in a patient who has suffered previously from angina pectoris. For an explanation of the radiation of pain to a previously diseased area, see Chap. 18. Referred pain from the spine, which usually involves compression or irritation of nerve roots, is characteristically intensified by certain motions such as cough, sneeze, or strain and is associated with hyperesthesia over the involved dermatomes. Pain referred to the abdomen from the testes or seminal vesicles is generally accentuated by the slightest pressure on either of these organs. The abdominal discomfort experienced is of dull, aching character and is poorly localized. Pain of metabolic origin may simulate almost any other type of intraabdominal disease. Several mechanisms may be at work. In certain instances, such as hyperlipidemia, the metabolic disease itself may be accompanied by an intraabdominal process such as pancreatitis, which can lead to unnecessary laparotomy unless recognized. C1 esterase inhibitor deficiency associated with angioneurotic edema is often associated with episodes of severe abdominal pain. Whenever the cause of abdominal pain is obscure, a metabolic origin always must be considered. Abdominal pain is also the hallmark of familial Mediterranean fever (Chap. 392). The problem of differential diagnosis is often not readily resolved. The pain of porphyria and of lead colic is usually difficult to distinguish from that of intestinal obstruction, because severe hyperperistalsis is a prominent feature of both.
The pain of uremia or diabetes is nonspecific, and the pain and tenderness frequently shift in location and intensity. Diabetic acidosis may be precipitated by acute appendicitis or intestinal obstruction, so if prompt resolution of the abdominal pain does not result from correction of the metabolic abnormalities, an underlying organic problem should be suspected. Black widow spider bites produce intense pain and rigidity of the abdominal muscles and back, an area infrequently involved in intraabdominal disease. Evaluating and diagnosing causes of abdominal pain in immunosuppressed or otherwise immunocompromised patients is very difficult. This includes those who have undergone organ transplantation; who are receiving immunosuppressive treatments for autoimmune diseases, chemotherapy, or glucocorticoids; who have AIDS; and who are very old. In these circumstances, normal physiologic responses may be absent or masked. In addition, unusual infections may cause abdominal pain where the etiologic agents include cytomegalovirus, mycobacteria, protozoa, and fungi. These pathogens may affect all gastrointestinal organs, including the gallbladder, liver, and pancreas, as well as the gastrointestinal tract, causing occult or overtly symptomatic perforations of the latter. Splenic abscesses due to Candida or Salmonella infection should also be considered, especially when evaluating patients with left upper quadrant or left flank pain. Acalculous cholecystitis is a relatively common complication in patients with AIDS, where it is often associated with cryptosporidiosis or cytomegalovirus infection. Neutropenic enterocolitis is often identified as a cause of abdominal pain and fever in some patients with bone marrow suppression due to chemotherapy. Acute graft-versus-host disease should be considered.
Optimal management of these patients may require meticulous follow-up including serial examinations to be certain that surgical intervention is not required to treat an underlying disease process. Diseases that injure sensory nerves may cause causalgic pain. It has a burning character and is usually limited to the distribution of a given peripheral nerve. Normally nonpainful stimuli such as touch or a change in temperature may precipitate this causalgic pain, which may frequently be present even at rest. The demonstration of irregularly spaced cutaneous pain spots may be the only indication that an old nerve injury exists. Even though the pain may be precipitated by gentle palpation, rigidity of the abdominal muscles is absent, and the respirations are not disturbed. Distention of the abdomen is uncommon, and the pain has no relationship to the intake of food. Pain arising from spinal nerves or roots comes and goes suddenly and is of a lancinating type (Chap. 22). It may be caused by herpes zoster, impingement by arthritis, tumors, a herniated nucleus pulposus, diabetes, or syphilis. It is not associated with food intake, abdominal distention, or changes in respiration. Severe muscle spasm, as in the gastric crises of tabes dorsalis, is common but is either relieved or not accentuated by abdominal palpation. The pain is made worse by movement of the spine and is usually confined to a few dermatomes. Hyperesthesia is very common. Pain due to functional causes conforms to none of the aforementioned patterns. Mechanisms of disease are not clearly established. Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterized by abdominal pain and altered bowel habits. The diagnosis is made on the basis of clinical criteria (Chap. 352) and after exclusion of demonstrable structural abnormalities. The episodes of abdominal pain are often brought on by stress, and the pain varies considerably in type and location. Nausea and vomiting are rare.
Localized tenderness and muscle spasm are inconsistent or absent. The causes of IBS or related functional disorders are not known.

APPROACH TO THE PATIENT: Abdominal Pain

Few abdominal conditions require such urgent operative intervention that an orderly approach need be abandoned, no matter how ill the patient. Only patients with exsanguinating intraabdominal hemorrhage (e.g., ruptured aneurysm) must be rushed to the operating room immediately, but in such instances, only a few minutes are required to assess the critical nature of the problem. Under these circumstances, all obstacles must be swept aside, adequate venous access for fluid replacement obtained, and the operation begun. Many of these patients have died in the radiology department or the emergency room while awaiting unnecessary examinations such as electrocardiograms or computed tomography (CT) scans. There are no contraindications to operation when massive intraabdominal hemorrhage is present. Fortunately, this situation is relatively rare. This statement does not necessarily apply to patients with intraluminal gastrointestinal hemorrhage, who can often be managed by other means (Chap. 57). Nothing will supplant an orderly, painstakingly detailed history, which is far more valuable than any laboratory or radiographic examination. This kind of history is laborious and time-consuming, making it not especially popular, even though a reasonably accurate diagnosis can be made on the basis of the history alone in the majority of cases. In cases of acute abdominal pain, a diagnosis is readily established in most instances, whereas success is not so frequent in patients with chronic pain. IBS is one of the most common causes of abdominal pain and must always be kept in mind (Chap. 352). The location of the pain can assist in narrowing the differential diagnosis (Table 20-3); however, the chronological sequence of events in the patient's history is often more important than the pain's location.
If the examiner is sufficiently open-minded and unhurried, asks the proper questions, and listens, the patient will usually provide the diagnosis. Careful attention should be paid to the extraabdominal regions. Narcotics or analgesics should not be withheld until a definitive diagnosis or a definitive plan has been formulated; obfuscation of the diagnosis by adequate analgesia is unlikely. An accurate menstrual history in a female patient is essential. It is important to remember that normal anatomic relationships can be significantly altered by the gravid uterus. Abdominal and pelvic pain may occur during pregnancy due to conditions that do not require surgery. Lastly, some otherwise noteworthy laboratory values (e.g., leukocytosis) may represent the normal physiologic changes of pregnancy. In the examination, simple critical inspection of the patient, e.g., of facies, position in bed, and respiratory activity, provides valuable clues. The amount of information to be gleaned is directly proportional to the gentleness and thoroughness of the examiner. Once a patient with peritoneal inflammation has been examined brusquely, accurate assessment by the next examiner becomes almost impossible. Eliciting rebound tenderness by sudden release of a deeply palpating hand in a patient with suspected peritonitis is cruel and unnecessary. The same information can be obtained by gentle percussion of the abdomen (rebound tenderness on a miniature scale), a maneuver that can be far more precise and localizing. Asking the patient to cough will elicit true rebound tenderness without the need for placing a hand on the abdomen. Furthermore, the forceful demonstration of rebound tenderness will startle and induce protective spasm in a nervous or worried patient in whom true rebound tenderness is not present.
A palpable gallbladder will be missed if palpation is so aggressive that voluntary muscle spasm becomes superimposed on involuntary muscular rigidity. As with history taking, sufficient time should be spent in the examination. Abdominal signs may be minimal but nevertheless, if accompanied by consistent symptoms, may be exceptionally meaningful. Abdominal signs may be virtually or totally absent in cases of pelvic peritonitis, so careful pelvic and rectal examinations are mandatory in every patient with abdominal pain. Tenderness on pelvic or rectal examination in the absence of other abdominal signs can be caused by operative indications such as perforated appendicitis, diverticulitis, twisted ovarian cyst, and many others. Much attention has been paid to the presence or absence of peristaltic sounds, their quality, and their frequency. Auscultation of the abdomen is one of the least revealing aspects of the physical examination of a patient with abdominal pain. Catastrophes such as a strangulating small intestinal obstruction or perforated appendicitis may occur in the presence of normal peristaltic sounds. Conversely, when the proximal part of the intestine above obstruction becomes markedly distended and edematous, peristaltic sounds may lose the characteristics of borborygmi and become weak or absent, even when peritonitis is not present. It is usually the severe chemical peritonitis of sudden onset that is associated with the truly silent abdomen. Laboratory examinations may be valuable in assessing the patient with abdominal pain, yet, with few exceptions, they rarely establish a diagnosis. Leukocytosis should never be the single deciding factor as to whether or not operation is indicated. A white blood cell count >20,000/μL may be observed with perforation of a viscus, but pancreatitis, acute cholecystitis, pelvic inflammatory disease, and intestinal infarction may also be associated with marked leukocytosis. 
A normal white blood cell count is not rare in cases of perforation of abdominal viscera. The diagnosis of anemia may be more helpful than the white blood cell count, especially when combined with the history. The urinalysis may reveal the state of hydration or rule out severe renal disease, diabetes, or urinary infection. Blood urea nitrogen, glucose, and serum bilirubin levels may be helpful. Serum amylase levels may be increased by many diseases other than pancreatitis, e.g., perforated ulcer, strangulating intestinal obstruction, and acute cholecystitis; thus, elevations of serum amylase do not rule out the need for an operation. Plain and upright or lateral decubitus radiographs of the abdomen may be of value in cases of intestinal obstruction, perforated ulcer, and a variety of other conditions. They are usually unnecessary in patients with acute appendicitis or strangulated external hernias. In rare instances, barium or water-soluble contrast study of the upper part of the gastrointestinal tract may demonstrate partial intestinal obstruction that may elude diagnosis by other means. If there is any question of obstruction of the colon, oral administration of barium sulfate should be avoided. On the other hand, in cases of suspected colonic obstruction (without perforation), a contrast enema may be diagnostic. In the absence of trauma, peritoneal lavage has been replaced as a diagnostic tool by CT scanning and laparoscopy. Ultrasonography has proved to be useful in detecting an enlarged gallbladder or pancreas, the presence of gallstones, an enlarged ovary, or a tubal pregnancy. Laparoscopy is especially helpful in diagnosing pelvic conditions, such as ovarian cysts, tubal pregnancies, salpingitis, and acute appendicitis. Radioisotopic hepatobiliary iminodiacetic acid scans (HIDAs) may help differentiate acute cholecystitis or biliary colic from acute pancreatitis. 
A CT scan may demonstrate an enlarged pancreas, ruptured spleen, or thickened colonic or appendiceal wall and streaking of the mesocolon or mesoappendix characteristic of diverticulitis or appendicitis. Sometimes, even under the best circumstances with all available aids and with the greatest of clinical skill, a definitive diagnosis cannot be established at the time of the initial examination. Nevertheless, even in the absence of a clear anatomic diagnosis, it may be abundantly clear to an experienced and thoughtful physician and surgeon that operation is indicated on clinical grounds alone. Should that decision be questionable, watchful waiting with repeated questioning and examination will often elucidate the true nature of the illness and indicate the proper course of action.

Headache
Peter J. Goadsby, Neil H. Raskin

Headache is among the most common reasons patients seek medical attention, on a global basis being responsible for more disability than any other neurologic problem. Diagnosis and management are based on a careful clinical approach augmented by an understanding of the anatomy, physiology, and pharmacology of the nervous system pathways mediating the various headache syndromes. This chapter will focus on the general approach to a patient with headache; migraine and other primary headache disorders are discussed in Chap. 447. A classification system developed by the International Headache Society (www.ihs-headache.org/) characterizes headache as primary or secondary (Table 21-1). Primary headaches are those in which headache and its associated features are the disorder in itself, whereas secondary headaches are those caused by exogenous disorders (Headache Classification Committee of the International Headache Society, 2013). Primary headache often results in considerable disability and a decrease in the patient's quality of life.
Mild secondary headache, such as that seen in association with upper respiratory tract infections, is common but rarely worrisome. Life-threatening headache is relatively uncommon, but vigilance is required in order to recognize and appropriately treat such patients. Pain usually occurs when peripheral nociceptors are stimulated in response to tissue injury, visceral distension, or other factors (Chap. 18). In such situations, pain perception is a normal physiologic response mediated by a healthy nervous system. Pain can also result when pain-producing pathways of the peripheral or central nervous system (CNS) are damaged or activated inappropriately. Headache may originate from either or both mechanisms. Relatively few cranial structures are pain-producing; these include the scalp, middle meningeal artery, dural sinuses, falx cerebri, and proximal segments of the large pial arteries. The ventricular ependyma, choroid plexus, pial veins, and much of the brain parenchyma are not pain-producing. The key structures involved in primary headache appear to be the following:

- the large intracranial vessels and dura mater, and the peripheral terminals of the trigeminal nerve that innervate these structures;
- the caudal portion of the trigeminal nucleus, which extends into the dorsal horns of the upper cervical spinal cord and receives input from the first and second cervical nerve roots (the trigeminocervical complex);
- rostral pain-processing regions, such as the ventroposteromedial thalamus and the cortex;
- the pain-modulatory systems in the brain that modulate input from trigeminal nociceptors at all levels of the pain-processing pathways and influence vegetative functions, such as hypothalamus and brainstem structures.

The innervation of the large intracranial vessels and dura mater by the trigeminal nerve is known as the trigeminovascular system.
Cranial autonomic symptoms, such as lacrimation, conjunctival injection, nasal congestion, rhinorrhea, periorbital swelling, aural fullness, and ptosis, are prominent in the trigeminal autonomic cephalalgias, including cluster headache and paroxysmal hemicrania, and may also be seen in migraine, even in children. These autonomic symptoms reflect activation of cranial parasympathetic pathways, and functional imaging studies indicate that vascular changes in migraine and cluster headache, when present, are similarly driven by these cranial autonomic systems. Moreover, they can often be mistaken for symptoms or signs of cranial sinus inflammation, which is thus overdiagnosed and inappropriately managed. Migraine and other primary headache types are not "vascular headaches"; these disorders do not reliably manifest vascular changes, and treatment outcomes cannot be predicted by vascular effects. Migraine is a brain disorder and is best understood and managed as such.

CLINICAL EVALUATION OF ACUTE, NEW-ONSET HEADACHE

The patient who presents with a new, severe headache has a differential diagnosis that is quite different from the patient with recurrent headaches over many years. In new-onset and severe headache, the probability of finding a potentially serious cause is considerably greater than in recurrent headache. Patients with recent onset of pain require prompt evaluation and appropriate treatment. Serious causes to be considered include meningitis, subarachnoid hemorrhage, epidural or subdural hematoma, glaucoma, tumor, and purulent sinusitis. When worrisome symptoms and signs are present (Table 21-2), rapid diagnosis and management are critical. A careful neurologic examination is an essential first step in the evaluation. In most cases, patients with an abnormal examination or a history of recent-onset headache should be evaluated by a computed tomography (CT) or magnetic resonance imaging (MRI) study.
As an initial screening procedure for intracranial pathology in this setting, CT and MRI methods appear to be equally sensitive. In some circumstances, a lumbar puncture (LP) is also required, unless a benign etiology can be otherwise established. A general evaluation of acute headache might include investigation of the cranial arteries by palpation; the cervical spine by the effect of passive movement of the head and by imaging; cardiovascular and renal status by blood pressure monitoring and urine examination; and the eyes by funduscopy, intraocular pressure measurement, and refraction. (Among the worrisome features listed in Table 21-2 are pain induced by bending, lifting, or cough, and pain associated with local tenderness, e.g., over the region of the temporal artery.) The psychological state of the patient should also be evaluated because a relationship exists between head pain and depression. This is intended to identify comorbidity rather than provide an explanation for the headache, because troublesome headache is seldom simply caused by mood change. Although it is notable that medicines with antidepressant actions are also effective in the prophylactic treatment of both tension-type headache and migraine, each symptom must be treated optimally. Underlying recurrent headache disorders may be activated by pain that follows otologic or endodontic surgical procedures. Thus, pain about the head as the result of diseased tissue or trauma may reawaken an otherwise quiescent migraine syndrome. Treatment of the headache is largely ineffective until the cause of the primary problem is addressed. Serious underlying conditions that are associated with headache are described below. Brain tumor is a rare cause of headache and even less commonly a cause of severe pain. The vast majority of patients presenting with severe headache have a benign cause. The management of secondary headache focuses on diagnosis and treatment of the underlying condition. Acute, severe headache with stiff neck and fever suggests meningitis. LP is mandatory.
Often there is striking accentuation of pain with eye movement. Meningitis can be easily mistaken for migraine in that the cardinal symptoms of pounding headache, photophobia, nausea, and vomiting are frequently present, perhaps reflecting the underlying biology of some of the patients. Meningitis is discussed in Chaps. 164 and 165. Acute, severe headache with stiff neck but without fever suggests subarachnoid hemorrhage. A ruptured aneurysm, arteriovenous malformation, or intraparenchymal hemorrhage may also present with headache alone. Rarely, if the hemorrhage is small or below the foramen magnum, the head CT scan can be normal. Therefore, LP may be required to definitively diagnose subarachnoid hemorrhage. Intracranial hemorrhage is discussed in Chap. 330. Approximately 30% of patients with brain tumors consider headache to be their chief complaint. The head pain is usually nondescript—an intermittent deep, dull aching of moderate intensity, which may worsen with exertion or change in position and may be associated with nausea and vomiting. This pattern of symptoms results from migraine far more often than from brain tumor. The headache of brain tumor disturbs sleep in about 10% of patients. Vomiting that precedes the appearance of headache by weeks is highly characteristic of posterior fossa brain tumors. A history of amenorrhea or galactorrhea should lead one to question whether a prolactin-secreting pituitary adenoma (or the polycystic ovary syndrome) is the source of headache. Headache arising de novo in a patient with known malignancy suggests either cerebral metastases or carcinomatous meningitis, or both. Head pain appearing abruptly after bending, lifting, or coughing can be due to a posterior fossa mass, a Chiari malformation, or low cerebrospinal fluid (CSF) volume. Brain tumors are discussed in Chap. 118. (See also Chaps.
39 and 385.) Temporal (giant cell) arteritis is an inflammatory disorder of arteries that frequently involves the extracranial carotid circulation. It is a common disorder of the elderly; its annual incidence is 77 per 100,000 individuals age 50 and older. The average age of onset is 70 years, and women account for 65% of cases. About half of patients with untreated temporal arteritis develop blindness due to involvement of the ophthalmic artery and its branches; indeed, the ischemic optic neuropathy induced by giant cell arteritis is the major cause of rapidly developing bilateral blindness in patients >60 years. Because treatment with glucocorticoids is effective in preventing this complication, prompt recognition of the disorder is important. Typical presenting symptoms include headache, polymyalgia rheumatica (Chap. 385), jaw claudication, fever, and weight loss. Headache is the dominant symptom and often appears in association with malaise and muscle aches. Head pain may be unilateral or bilateral and is located temporally in 50% of patients but may involve any and all aspects of the cranium. Pain usually appears gradually over a few hours before peak intensity is reached; occasionally, it is explosive in onset. The quality of pain is only seldom throbbing; it is almost invariably described as dull and boring, with superimposed episodic stabbing pains similar to the sharp pains that appear in migraine. Most patients can recognize that the origin of their head pain is superficial, external to the skull, rather than originating deep within the cranium (the pain site for migraineurs). Scalp tenderness is present, often to a marked degree; brushing the hair or resting the head on a pillow may be impossible because of pain. Headache is usually worse at night and often aggravated by exposure to cold.
Additional findings may include reddened, tender nodules or red streaking of the skin overlying the temporal arteries, and tenderness of the temporal or, less commonly, the occipital arteries. The erythrocyte sedimentation rate (ESR) is often, although not always, elevated; a normal ESR does not exclude giant cell arteritis. A temporal artery biopsy followed by immediate treatment with prednisone 80 mg daily for the first 4–6 weeks should be initiated when clinical suspicion is high. The prevalence of migraine among the elderly is substantial, considerably higher than that of giant cell arteritis. Migraineurs often report amelioration of their headaches with prednisone; thus, caution must be used when interpreting the therapeutic response. Glaucoma may present with a prostrating headache associated with nausea and vomiting. The headache often starts with severe eye pain. On physical examination, the eye is often red with a fixed, moderately dilated pupil. Glaucoma is discussed in Chap. 39. Primary headaches are disorders in which headache and associated features occur in the absence of any exogenous cause. The most common are migraine, tension-type headache, and the trigeminal autonomic cephalalgias, notably cluster headache. These entities are discussed in detail in Chap. 447. The broad diagnosis of chronic daily headache (CDH) can be applied when a patient experiences headache on 15 days or more per month. CDH is not a single entity; it encompasses a number of different headache syndromes, both primary and secondary (Table 21-3). In aggregate, this group presents considerable disability and is thus specially dealt with here.
Population-based estimates suggest that about 4% of adults have daily or near-daily headache. APPROACH TO THE PATIENT: The first step in the management of patients with CDH is to diagnose any secondary headache and treat that problem (Table 21-3). This can be challenging when the underlying cause triggers worsening of a primary headache. For patients with primary headaches, diagnosis of the headache type will guide therapy. Preventive treatments such as tricyclics, either amitriptyline or nortriptyline at doses up to 1 mg/kg, are very useful in patients with CDH arising from migraine or tension-type headache or where the secondary cause has activated the underlying primary headache. Tricyclics are started in low doses (10–25 mg) daily and may be given 12 h before the expected time of awakening in order to avoid excess morning sleepiness. Anticonvulsants such as topiramate and valproate, the calcium channel blocker flunarizine (not available in the United States), and the angiotensin receptor blocker candesartan are also useful in migraine. The management of medically intractable headache is difficult. A number of neuromodulatory approaches are promising: occipital nerve stimulation, for example, appears to modulate thalamic processing in migraine and has also shown promise in chronic cluster headache, short-lasting unilateral neuralgiform headache attacks with cranial autonomic symptoms (SUNA), short-lasting unilateral neuralgiform headache attacks with conjunctival injection and tearing (SUNCT), and hemicrania continua (Chap. 447). Single-pulse transcranial magnetic stimulation is in use in Europe and is approved for migraine with aura in the United States. Other modalities are discussed in Chap. 447. Overuse of analgesic medication for headache can aggravate headache frequency, markedly impair the effect of preventive medicines, and induce a state of refractory daily or near-daily headache called medication-overuse headache.
A proportion of patients who stop taking analgesics will experience substantial improvement in the severity and frequency of their headache. However, even after cessation of analgesic use, many patients continue to have headache, although they may feel clinically improved in some way, especially if they have been using opioids or barbiturates regularly. The residual symptoms probably represent the underlying primary headache disorder, and most commonly, this issue occurs in patients prone to migraine. Management of Medication Overuse: Outpatients For patients who overuse medications, it is essential that analgesic use be reduced and eliminated. One approach is to reduce the medication dose by 10% every 1–2 weeks. Immediate cessation of analgesic use is possible for some patients, provided there is no contraindication. Both approaches are facilitated by the use of a medication diary maintained during the month or two before cessation; this helps to identify the scope of the problem. A small dose of a nonsteroidal anti-inflammatory drug (NSAID) such as naproxen, 500 mg bid, if tolerated, will help relieve residual pain as analgesic use is reduced. NSAID overuse is not usually a problem for patients with daily headache when an NSAID with a longer half-life is taken once or twice daily; however, overuse problems may develop with more frequent dosing schedules or shorter-acting NSAIDs. Once the patient has substantially reduced analgesic use, a preventive medication should be introduced. It must be emphasized that preventives generally do not work in the presence of analgesic overuse. The most common cause of unresponsiveness to treatment is the use of a preventive when analgesics continue to be used regularly. For some patients, discontinuing analgesics is very difficult; often the best approach is to directly inform the patient that some degree of pain is inevitable during this initial period.
Management of Medication Overuse: Inpatients Some patients will require hospitalization for detoxification. Such patients have typically failed efforts at outpatient withdrawal or have a significant medical condition, such as diabetes mellitus, which would complicate withdrawal as an outpatient. Following admission to the hospital, acute medications are withdrawn completely on the first day, in the absence of a contraindication. Antiemetics and fluids are administered as required; clonidine is used for opioid withdrawal symptoms. For acute intolerable pain during the waking hours, aspirin, 1 g IV (not approved in the United States), is useful. IM chlorpromazine can be helpful at night; patients must be adequately hydrated. Three to five days into the admission, as the effect of the withdrawn substance wears off, a course of IV dihydroergotamine (DHE) can be used. DHE, administered every 8 h for 5 consecutive days, can induce a significant remission that allows a preventive treatment to be established. 5-HT3 antagonists, such as ondansetron or granisetron, or the neurokinin receptor antagonist, aprepitant, may be required with DHE to prevent significant nausea, and domperidone (not approved in the United States) orally or by suppository can be very helpful. Avoiding sedating or otherwise side-effect-prone antiemetics is helpful. New daily persistent headache (NDPH) is a clinically distinct syndrome; its causes are listed in Table 21-4. Clinical Presentation The patient with NDPH presents with headache on most if not all days, and the patient can clearly, and often vividly, recall the moment of onset. The headache usually begins abruptly, but onset may be more gradual; evolution over 3 days has been proposed as the upper limit for this syndrome. Patients typically recall the exact day and circumstances of the onset of headache; the new, persistent head pain does not remit.
The first priority is to distinguish between a primary and a secondary cause of this syndrome. Subarachnoid hemorrhage is the most serious of the secondary causes and must be excluded either by history or appropriate investigation (Chap. 330). Secondary NDPH • LOW CSF VOLUME HEADACHE In these syndromes, head pain is positional: it begins when the patient sits or stands upright and resolves upon reclining. The pain, which is occipitofrontal, is usually a dull ache but may be throbbing. Patients with chronic low CSF volume headache typically present with a history of headache from one day to the next that is generally not present on waking but worsens during the day. Recumbency usually improves the headache within minutes, and it can take only minutes to an hour for the pain to return when the patient resumes an upright position. The most common cause of headache due to persistent low CSF volume is CSF leak following LP. Post-LP headache usually begins within 48 h but may be delayed for up to 12 days. Its incidence is between 10 and 30%. Beverages with caffeine may provide temporary relief. Besides LP, index events may include epidural injection or a vigorous Valsalva maneuver, such as from lifting, straining, coughing, clearing the eustachian tubes in an airplane, or multiple orgasms. Spontaneous CSF leaks are well recognized, and the diagnosis should be considered whenever the headache history is typical, even when there is no obvious index event. As time passes from the index event, the postural nature may become less apparent; cases in which the index event occurred several years before the eventual diagnosis have been recognized. Symptoms appear to result from low volume rather than low pressure: although low CSF pressures, typically 0–50 mmH2O, are usually identified, a pressure as high as 140 mmH2O has been noted with a documented leak. Postural orthostatic tachycardia syndrome (POTS; Chap.
454) can present with orthostatic headache similar to low CSF volume headache and is a diagnosis that needs consideration in this setting. When imaging is indicated to identify the source of a presumed leak, an MRI with gadolinium is the initial study of choice (Fig. 21-1). A striking pattern of diffuse meningeal enhancement is so typical that in the appropriate clinical context the diagnosis is established. Chiari malformations may sometimes be noted on MRI; in such cases, surgery to decompress the posterior fossa usually worsens the headache. Spinal MRI with T2 weighting may reveal a leak, and spinal MRI may demonstrate spinal meningeal cysts whose role in these syndromes is yet to be elucidated. The source of CSF leakage may be identified by spinal MRI with appropriate sequences, by CT, or increasingly by MR myelography. Now less commonly used, 111In-DTPA CSF studies may, in the absence of a directly identified site of leakage, demonstrate early emptying of tracer into the bladder or slow progress of tracer across the brain, suggesting a CSF leak. Initial treatment for low CSF volume headache is bed rest. For patients with persistent pain, IV caffeine (500 mg in 500 mL of saline administered over 2 h) can be very effective. An electrocardiogram (ECG) to screen for arrhythmia should be performed before administration. It is reasonable to administer at least two infusions of caffeine before embarking on additional tests to identify the source of the CSF leak. Because IV caffeine is safe and can be curative, it spares many patients the need for further investigations. If unsuccessful, an abdominal binder may be helpful. If a leak can be identified, an autologous blood patch is usually curative. A blood patch is also effective for post-LP headache; in this setting, the location is empirically determined to be the site of the LP. In patients with intractable pain, oral theophylline is a useful alternative; however, its effect is less rapid than that of caffeine.
FIGURE 21-1 Magnetic resonance image showing diffuse meningeal enhancement after gadolinium administration in a patient with low cerebrospinal fluid (CSF) volume headache. RAISED CSF PRESSURE HEADACHE Raised CSF pressure is well recognized as a cause of headache. Brain imaging can often reveal the cause, such as a space-occupying lesion. NDPH due to raised CSF pressure can be the presenting symptom for patients with idiopathic intracranial hypertension (pseudotumor cerebri) without visual problems, particularly when the fundi are normal. Persistently raised intracranial pressure can trigger chronic migraine. These patients typically present with a history of generalized headache that is present on waking and improves as the day goes on. It is generally worse with recumbency. Visual obscurations are frequent. The diagnosis is relatively straightforward when papilledema is present, but the possibility must be considered even in patients without funduscopic changes. Formal visual field testing should be performed even in the absence of overt ophthalmic involvement. Headache on rising in the morning or nocturnal headache is also characteristic of obstructive sleep apnea or poorly controlled hypertension. Evaluation of patients suspected to have raised CSF pressure requires brain imaging. It is most efficient to obtain an MRI, including an MR venogram, as the initial study. If there are no contraindications, the CSF pressure should be measured by LP; this should be done when the patient is symptomatic so that both the pressure and the response to removal of 20–30 mL of CSF can be determined. An elevated opening pressure and improvement in headache following removal of CSF are diagnostic. Initial treatment is with acetazolamide (250–500 mg bid); the headache may improve within weeks.
If ineffective, topiramate is the next treatment of choice; it has many actions that may be useful in this setting, including carbonic anhydrase inhibition, weight loss, and neuronal membrane stabilization, likely mediated via effects on phosphorylation pathways. Severely disabled patients who do not respond to medical treatment require intracranial pressure monitoring and may require shunting. POSTTRAUMATIC HEADACHE A traumatic event can trigger a headache process that lasts for many months or years after the event. The term trauma is used in a very broad sense: headache can develop following an injury to the head, but it can also develop after an infectious episode, typically viral meningitis, a flulike illness, or a parasitic infection. Complaints of dizziness, vertigo, and impaired memory can accompany the headache. Symptoms may remit after several weeks or persist for months and even years after the injury. Typically the neurologic examination is normal and CT or MRI studies are unrevealing. Chronic subdural hematoma may on occasion mimic this disorder. Posttraumatic headache may also be seen after carotid dissection and subarachnoid hemorrhage and after intracranial surgery. The underlying theme appears to be that a traumatic event involving the pain-producing meninges can trigger a headache process that lasts for many years. OTHER CAUSES In one series, one-third of patients with NDPH reported headache beginning after a transient flulike illness characterized by fever, neck stiffness, photophobia, and marked malaise. Evaluation typically reveals no apparent cause for the headache. There is no convincing evidence that persistent Epstein-Barr virus infection plays a role in NDPH. A complicating factor is that many patients undergo LP during the acute illness; iatrogenic low CSF volume headache must be considered in these cases. TREATMENT Treatment is largely empirical.
Tricyclic antidepressants, notably amitriptyline, and anticonvulsants, such as topiramate, valproate, and gabapentin, have been used with reported benefit. The monoamine oxidase inhibitor phenelzine may also be useful in carefully selected patients. The headache usually resolves within 3–5 years, but it can be quite disabling. Most patients with headache will be seen first in a primary care setting. The task of the primary care physician is to identify the very few worrisome secondary headaches from the very great majority of primary and less troublesome secondary headaches (Table 21-2). Absent any warning signs, a reasonable approach is to treat when a diagnosis is established. As a general rule, the investigation should focus on identifying worrisome causes of headache or on gaining confidence if no primary headache diagnosis can be made. After treatment has been initiated, follow-up care is essential to identify whether progress has been made against the headache complaint. Not all headaches will respond to treatment, but, in general, worrisome headaches will progress and will be easier to identify. When a primary care physician feels the diagnosis is a primary headache disorder, it is worth noting that more than 90% of patients who present to primary care with a complaint of headache will have migraine (Chap. 447). In general, patients who do not have a clear diagnosis, have a primary headache disorder other than migraine or tension-type headache, or are unresponsive to two or more standard therapies for the considered headache type should be considered for referral to a specialist. In a practical sense, the threshold for referral is also determined by the experience of the primary care physician in headache medicine and the availability of secondary care options.
Chapter 22 Back and Neck Pain
John W. Engstrom, Richard A. Deyo
The importance of back and neck pain in our society is underscored by the following: (1) the cost of back pain in the United States exceeds $100 billion annually; approximately one-third of these costs are direct health care expenses, and two-thirds are indirect costs resulting from loss of wages and productivity; (2) back symptoms are the most common cause of disability in those <45 years; (3) low back pain is the second most common reason for visiting a physician in the United States; and (4) 70% of persons will have back pain at some point in their lives. The anterior spine consists of cylindrical vertebral bodies separated by intervertebral disks and held together by the anterior and posterior longitudinal ligaments. The intervertebral disks are composed of a central gelatinous nucleus pulposus surrounded by a tough cartilaginous ring, the annulus fibrosus. Disks are responsible for 25% of spinal column length and allow the bony vertebrae to move easily upon each other (Figs. 22-1 and 22-2). Desiccation of the nucleus pulposus and degeneration of the annulus fibrosus increase with age and result in loss of disk height. The disks are largest in the cervical and lumbar regions where movements of the spine are greatest. The anterior spine absorbs the shock of bodily movements such as walking and running and, with the posterior spine, protects the spinal cord and nerve roots in the spinal canal. The posterior spine consists of the vertebral arches and processes. Each arch consists of paired cylindrical pedicles anteriorly and paired lamina posteriorly. The vertebral arch also gives rise to two transverse processes laterally, one spinous process posteriorly, plus two superior and two inferior articular facets. The apposition of a superior and inferior facet constitutes a facet joint. The posterior spine provides an anchor for the attachment of muscles and ligaments.
The contraction of muscles attached to the spinous and transverse processes and lamina works like a system of pulleys and levers that results in flexion, extension, and lateral bending movements of the spine. Nerve root injury (radiculopathy) is a common cause of neck, arm, low back, buttock, and leg pain (see Figs. 31-2 and 31-3). The nerve roots exit at a level above their respective vertebral bodies in the cervical region (e.g., the C7 nerve root exits at the C6-C7 level) and below their respective vertebral bodies in the thoracic and lumbar regions (e.g., the T1 nerve root exits at the T1-T2 level). The cervical nerve roots follow a short intraspinal course before exiting. By contrast, because the spinal cord ends at the vertebral L1 or L2 level, the lumbar nerve roots follow a long intraspinal course and can be injured anywhere from the upper lumbar spine to their exit at the intervertebral foramen. For example, disk herniation at the L4-L5 level can produce not only L5 root compression, but also compression of the traversing S1 nerve root (Fig. 22-3). The lumbar nerve roots are mobile in the spinal canal, but eventually pass through the narrow lateral recess of the spinal canal and intervertebral foramen (Figs. 22-2 and 22-3). Neuroimaging of the spine must include both sagittal and axial views to assess possible compression in either the lateral recess or intervertebral foramen. Pain-sensitive structures of the spine include the periosteum of the vertebrae, dura, facet joints, annulus fibrosus of the intervertebral disk, epidural veins and arteries, and the longitudinal ligaments. Disease of these diverse structures may explain many cases of back pain without nerve root compression. Under normal circumstances, the nucleus pulposus of the intervertebral disk is not pain sensitive. APPROACH TO THE PATIENT: Delineating the type of pain reported by the patient is the essential first step. 
Attention is also focused on identification of risk factors for a serious underlying etiology. The most frequent serious causes of back pain are radiculopathy, fracture, tumor, infection, or referred pain from visceral structures (Table 22-1). Local pain is caused by injury to pain-sensitive structures that compress or irritate sensory nerve endings. The site of the pain is near the affected part of the back. Pain referred to the back may arise from abdominal or pelvic viscera. The pain is usually described as primarily abdominal or pelvic, accompanied by back pain and usually unaffected by posture. The patient may occasionally complain of back pain only. Pain of spine origin may be located in the back or referred to the buttocks or legs. Diseases affecting the upper lumbar spine tend to refer pain to the lumbar region, groin, or anterior thighs. Diseases affecting the lower lumbar spine tend to produce pain referred to the buttocks, posterior thighs, calves, or feet. Referred pain can explain pain syndromes that cross multiple dermatomes without evidence of nerve root compression. FIGURE 22-1 Vertebral anatomy. (From A Gauthier Cornuelle, DH Gronefeld: Radiographic Anatomy Positioning. New York, McGraw-Hill, 1998; with permission.) Radicular pain is typically sharp and radiates from the low back to a leg within the territory of a nerve root (see "Lumbar Disk Disease," below). Coughing, sneezing, or voluntary contraction of abdominal muscles (lifting heavy objects or straining at stool) may elicit the radiating pain. The pain may increase in postures that stretch the nerves and nerve roots. Sitting with the leg outstretched places traction on the sciatic nerve and L5 and S1 roots because the nerve passes posterior to the hip. The femoral nerve (L2, L3, and L4 roots) passes anterior to the hip and is not stretched by sitting. FIGURE 22-2 Spinal column.
(From A Gauthier Cornuelle, DH Gronefeld: Radiographic Anatomy Positioning. New York, McGraw-Hill, 1998; with permission.) The description of the pain alone often fails to distinguish between referred pain and radiculopathy, although a burning or electric quality favors radiculopathy. Pain associated with muscle spasm, although of obscure origin, is commonly associated with many spine disorders. The spasms are accompanied by abnormal posture, tense paraspinal muscles, and dull or achy pain in the paraspinal region. Knowledge of the circumstances associated with the onset of back pain is important when weighing possible serious underlying causes for the pain. Some patients involved in accidents or work-related injuries may exaggerate their pain for the purpose of compensation or for psychological reasons. A physical examination that includes the abdomen and rectum is advisable. Back pain referred from visceral organs may be reproduced during palpation of the abdomen (pancreatitis, abdominal aortic aneurysm [AAA]) or percussion over the costovertebral angles (pyelonephritis). The normal spine has a cervical and lumbar lordosis and a thoracic kyphosis. Exaggeration of these normal alignments may result in hyperkyphosis of the thoracic spine or hyperlordosis of the lumbar spine. Inspection may reveal a lateral curvature of the spine (scoliosis). An asymmetry in the prominence of the paraspinal muscles suggests muscle spasm. Spine pain reproduced by palpation over the spinous process reflects injury of the affected vertebrae or adjacent pain-sensitive structures. Forward bending is often limited by paraspinal muscle spasm; the latter may flatten the usual lumbar lordosis. Flexion at the hips is normal in patients with lumbar spine disease, but flexion of the lumbar spine is limited and sometimes painful. Lateral bending to the side opposite the injured spinal element may stretch the damaged tissues, worsen pain, and limit motion. 
Hyperextension of the spine (with the patient prone or standing) is limited when nerve root compression, facet joint pathology, or other bony spine disease is present. Pain from hip disease may mimic the pain of lumbar spine disease. Hip pain can be reproduced by internal and external rotation at the hip with the knee and hip in flexion or by compressing the heel with the examiner's palm while the leg is extended (heel percussion sign). The straight leg–raising (SLR) maneuver is a simple bedside test for nerve root disease. With the patient supine, passive flexion of the extended leg at the hip stretches the L5 and S1 nerve roots and the sciatic nerve. FIGURE 22-3 Compression of L5 and S1 roots by herniated disks. (From AH Ropper, MA Samuels: Adams and Victor's Principles of Neurology, 9th ed. New York, McGraw-Hill, 2009; with permission.) Passive dorsiflexion of the foot during the maneuver adds to the stretch. In healthy individuals, flexion to at least 80° is normally possible without causing pain, although a tight, stretching sensation in the hamstring muscles is common. The SLR test is positive if the maneuver reproduces the patient's usual back or limb pain. Eliciting the SLR sign in both the supine and sitting positions can help determine if the finding is reproducible. The patient may describe pain in the low back, buttocks, posterior thigh, or lower leg, but the key feature is reproduction of the patient's usual pain.
Risk factors for a serious underlying cause of back pain include the following. History: pain worse at rest or at night; prior history of cancer; history of chronic infection (especially lung, urinary tract, or skin); history of trauma; incontinence; age >70 years; intravenous drug use; glucocorticoid use; history of a rapidly progressive neurologic deficit; unexplained fever; unexplained weight loss. Examination: percussion tenderness over the spine; abdominal, rectal, or pelvic mass; pain on internal/external rotation of the leg at the hip or a positive heel percussion sign; positive straight leg– or reverse straight leg–raising signs; progressive focal neurologic deficit. The crossed SLR sign is present when flexion of one leg reproduces the usual pain in the opposite leg or buttocks. In disk herniation, the crossed SLR sign is less sensitive but more specific than the SLR sign. The reverse SLR sign is elicited by standing the patient next to the examination table and passively extending each leg with the knee fully extended. This maneuver, which stretches the L2-L4 nerve roots, lumbosacral plexus, and femoral nerve, is considered positive if the patient's usual back or limb pain is reproduced. For all of these tests, the nerve or nerve root lesion is always on the side of the pain. The neurologic examination includes a search for focal weakness or muscle atrophy, focal reflex changes, diminished sensation in the legs, or signs of spinal cord injury. The examiner should be alert to the possibility of breakaway weakness, defined as fluctuations in the maximum power generated during muscle testing. Breakaway weakness may be due to pain or a combination of pain and an underlying true weakness. Breakaway weakness without pain is almost always due to a lack of effort. In uncertain cases, electromyography (EMG) can determine if true weakness due to nerve tissue injury is present. Findings with specific lumbosacral nerve root lesions are shown in Table 22-2 and are discussed below.
LABORATORY, IMAGING, AND EMG STUDIES Laboratory studies are rarely needed for the initial evaluation of nonspecific acute (<3 months in duration) low back pain (ALBP). Risk factors for a serious underlying cause and for infection, tumor, or fracture, in particular, should be sought by history and exam. If risk factors are present (Table 22-1), then laboratory studies (complete blood count [CBC], erythrocyte sedimentation rate [ESR], urinalysis) are indicated. If risk factors are absent, then management is conservative (see "Treatment," below). Computed tomography (CT) scanning is superior to routine x-rays for the detection of fractures involving posterior spine structures, craniocervical and cervicothoracic junctions, C1 and C2 vertebrae, bone fragments within the spinal canal, or misalignment. CT scans are increasingly used as a primary screening modality for moderate to severe acute trauma. Magnetic resonance imaging (MRI) or CT myelography is the radiologic test of choice for evaluation of most serious diseases involving the spine. MRI is superior for the definition of soft tissue structures, whereas CT myelography provides optimal imaging of the lateral recess of the spinal canal and is better tolerated by claustrophobic patients. Annual population surveys in the United States suggest that patients with back pain have reported progressively worse functional limitations in recent years, rather than progressive improvements, despite rapid increases in spine imaging, opioid prescribing, injections, and spine surgery. This suggests that more selective use of diagnostic and treatment modalities may be appropriate.
Spine imaging often reveals abnormalities of dubious clinical relevance that may alarm clinicians and patients alike and prompt further testing and unnecessary therapy. Both randomized trials and observational studies have suggested that such a "cascade effect" of imaging may create a gateway to other unnecessary care. Based in part on such evidence, the American College of Physicians has made parsimonious spine imaging a high priority in its "Choosing Wisely" campaign, aimed at reducing unnecessary care. Successful efforts to reduce unnecessary imaging have typically been multifaceted. Some combine physician education by clinical leaders with computerized decision support that identifies any recent relevant imaging tests and requires approved indications before an imaging test is ordered. Other strategies have included audit and feedback regarding individual rates of ordering and indications, and more rapid access to physical therapy or consultation for patients without imaging indications. When imaging tests are reported, it may be useful to indicate that certain degenerative findings are common in normal, pain-free individuals. In an observational study, this strategy was associated with lower rates of repeat imaging, opioid therapy, and physical therapy referral. Electrodiagnostic studies can be used to assess the functional integrity of the peripheral nervous system (Chap. 442e). Sensory nerve conduction studies are normal, even when focal sensory loss is confirmed by examination, if the loss is due to nerve root damage, because the nerve roots are proximal to the nerve cell bodies in the dorsal root ganglia. Injury to nerve tissue distal to the dorsal root ganglion (e.g., plexus or peripheral nerve) results in reduced sensory nerve signals.
Needle EMG complements nerve conduction studies by detecting denervation or reinnervation changes in a myotomal (segmental) distribution. Multiple muscles supplied by different nerve roots and nerves are sampled; the pattern of muscle involvement indicates the nerve root(s) responsible for the injury. Needle EMG provides objective information about motor nerve fiber injury when clinical evaluation of weakness is limited by pain or poor effort. EMG and nerve conduction studies will be normal when sensory nerve root injury or irritation is the pain source.
Lumbar Disk Disease This is a common cause of acute, chronic, or recurrent low back and leg pain (Figs. 22-3 and 22-4). Disk disease is most likely to occur at the L4-L5 or L5-S1 levels, but upper lumbar levels are involved occasionally. The cause is often unknown, but the risk is increased in overweight individuals. Disk herniation is unusual prior to age 20 years and is rare in the fibrotic disks of the elderly. Complex genetic factors may play a role in predisposing some patients to disk disease. The pain may be located in the low back only or referred to a leg, buttock, or hip. A sneeze, cough, or trivial movement may cause the nucleus pulposus to prolapse, pushing the frayed and weakened annulus posteriorly. With severe disk disease, the nucleus may protrude through the annulus (herniation) or become extruded to lie as a free fragment in the spinal canal.
The mechanism by which intervertebral disk injury causes back pain is controversial. The inner annulus fibrosus and nucleus pulposus are normally devoid of innervation. Inflammation and production of proinflammatory cytokines within a ruptured nucleus pulposus may trigger or perpetuate back pain.
Ingrowth of nociceptive (pain) nerve fibers into inner portions of a diseased disk may be responsible for some chronic "diskogenic" pain. Nerve root injury (radiculopathy) from disk herniation is usually due to inflammation, but lateral herniation may produce compression in the lateral recess or at the intervertebral foramen.
A ruptured disk may be asymptomatic or cause back pain, abnormal posture, limitation of spine motion (particularly flexion), a focal neurologic deficit, or radicular pain. A dermatomal pattern of sensory loss or a reduced or absent deep tendon reflex is more suggestive of a specific root lesion than is the pattern of pain. Motor findings (focal weakness, muscle atrophy, or fasciculations) occur less frequently than focal sensory or reflex changes. Symptoms and signs are usually unilateral, but bilateral involvement does occur with large central disk herniations that compress multiple roots or cause inflammation of nerve roots within the spinal canal. Clinical manifestations of specific nerve root lesions are summarized in Table 22-2. The differential diagnosis covers a variety of serious and treatable conditions, including epidural abscess, hematoma, fracture, or tumor. Fever, constant pain uninfluenced by position, sphincter abnormalities, or signs of spinal cord disease suggest an etiology other than lumbar disk disease. Absence of ankle reflexes can be a normal finding in persons older than age 60 years or a sign of bilateral S1 radiculopathy. An absent deep tendon reflex or focal sensory loss may indicate injury to a nerve root, but other sites of injury along the nerve must also be considered. For example, an absent knee reflex may be due to a femoral neuropathy or an L4 nerve root injury. A loss of sensation over the foot and lateral lower calf may result from a peroneal or lateral sciatic neuropathy or an L5 nerve root injury. Focal muscle atrophy may reflect injury to the anterior horn cells of the spinal cord, a nerve root, a peripheral nerve, or disuse.
FIGURE 22-4 Left L5 radiculopathy. A. Sagittal T2-weighted image on the left reveals disk herniation at the L4-L5 level. B. Axial T1-weighted image shows paracentral disk herniation with displacement of the thecal sac medially and the left L5 nerve root posteriorly in the left lateral recess.
A lumbar spine MRI scan or CT myelogram is necessary to establish the location and type of pathology. Spine MRIs yield exquisite views of intraspinal and adjacent soft tissue anatomy. Bony lesions of the lateral recess or intervertebral foramen are optimally visualized by CT myelography. The correlation of neuroradiologic findings with symptoms, particularly pain, is not simple. Contrast-enhancing tears in the annulus fibrosus and disk protrusions are widely accepted as common sources of back pain; however, studies have found that many asymptomatic adults have similar findings. Asymptomatic disk protrusions are also common and may enhance with contrast. Furthermore, in patients with known disk herniation treated either medically or surgically, persistence of the herniation 10 years later had no relationship to the clinical outcome.
In summary, MRI findings of disk protrusion, tears in the annulus fibrosus, or hypertrophic facet joints are common incidental findings that, by themselves, should not dictate management decisions for patients with back pain. The diagnosis of nerve root injury is most secure when the history, examination, results of imaging studies, and the EMG are concordant. The correlation between CT and EMG for localization of nerve root injury is between 65 and 73%. Up to one-third of asymptomatic adults have a lumbar disk protrusion detected by CT or MRI scans. Management of lumbar disk disease is discussed below.
Cauda equina syndrome (CES) signifies an injury of multiple lumbosacral nerve roots within the spinal canal distal to the termination of the spinal cord at L1-L2. Low back pain, weakness and areflexia in the legs, saddle anesthesia, or loss of bladder function may occur. The problem must be distinguished from disorders of the lower spinal cord (conus medullaris syndrome), acute transverse myelitis (Chap. 456), and Guillain-Barré syndrome (Chap. 460). Combined involvement of the conus medullaris and cauda equina can occur. CES is commonly due to a ruptured lumbosacral intervertebral disk, lumbosacral spine fracture, hematoma within the spinal canal (e.g., following lumbar puncture in patients with coagulopathy), compressive tumor, or other mass lesion. Treatment options include surgical decompression, sometimes urgently, in an attempt to restore or preserve motor or sphincter function, or radiotherapy for metastatic tumors (Chap. 118).
Lumbar spinal stenosis (LSS) describes a narrowed lumbar spinal canal and is frequently asymptomatic. The typical presentation is neurogenic claudication, consisting of back and buttock or leg pain induced by walking or standing and relieved by sitting. Symptoms in the legs are usually bilateral. Unlike vascular claudication, symptoms are often provoked by standing without walking. Unlike lumbar disk disease, symptoms are usually relieved by sitting.
Patients with neurogenic claudication can often walk much farther when leaning over a shopping cart and can pedal a stationary bike with ease while sitting. These flexed positions increase the anteroposterior spinal canal diameter and reduce intraspinal venous hypertension, resulting in pain relief. Focal weakness, sensory loss, or reflex changes may occur when spinal stenosis is associated with neural foraminal narrowing and radiculopathy. Severe neurologic deficits, including paralysis and urinary incontinence, occur only rarely. LSS by itself is frequently asymptomatic, and the correlation between the severity of symptoms and the degree of stenosis of the spinal canal is variable. LSS can be acquired (75%), congenital, or both. Congenital forms (achondroplasia, idiopathic) are characterized by short, thick pedicles that produce both spinal canal and lateral recess stenosis. Acquired factors that contribute to spinal stenosis include degenerative diseases (spondylosis, spondylolisthesis, scoliosis), trauma, spine surgery, metabolic or endocrine disorders (epidural lipomatosis, osteoporosis, acromegaly, renal osteodystrophy, hypoparathyroidism), and Paget's disease. MRI provides the best definition of the abnormal anatomy (Fig. 22-5).
Conservative treatment of symptomatic LSS includes nonsteroidal anti-inflammatory drugs (NSAIDs), acetaminophen, exercise programs, and symptomatic treatment of acute pain episodes. There is insufficient evidence to support the routine use of epidural glucocorticoid injections. Surgical therapy is considered when medical therapy does not relieve symptoms sufficiently to allow for resumption of activities of daily living or when focal neurologic signs are present. Most patients with neurogenic claudication who are treated medically do not improve over time. Surgical management can produce significant relief of back and leg pain within 6 weeks, and pain relief persists for at least 2 years. However, up to one-quarter of patients develop recurrent stenosis at the same spinal level or an adjacent level 7–10 years after the initial surgery; recurrent symptoms usually respond to a second surgical decompression.
FIGURE 22-5 Axial T2-weighted images of the lumbar spine. A. The image shows a normal thecal sac within the lumbar spinal canal. The thecal sac is bright. The lumbar roots are dark punctate dots in the posterior thecal sac with the patient supine. B. The thecal sac is not well visualized due to severe lumbar spinal canal stenosis, partially the result of hypertrophic facet joints.
Neural foraminal narrowing with radiculopathy is a common consequence of the osteoarthritic processes that cause lumbar spinal stenosis (Figs. 22-1 and 22-6), including osteophytes, lateral disk protrusion, calcified disk-osteophytes, facet joint hypertrophy, uncovertebral joint hypertrophy (cervical spine), congenitally shortened pedicles, or, frequently, a combination of these processes. Neoplasms (primary or metastatic), fractures, infections (epidural abscess), and hematomas are other considerations. These conditions can produce unilateral nerve root symptoms or signs due to compression at the intervertebral foramen or in the lateral recess; symptoms are indistinguishable from disk-related radiculopathy, but treatment may differ depending on the specific etiology. The history and neurologic examination alone cannot distinguish between these possibilities. A spine neuroimaging (CT or MRI) procedure is required to identify the anatomic cause. Neurologic findings from the examination and EMG can help direct the attention of the radiologist to specific nerve roots, especially on axial images. For facet joint hypertrophy, surgical foraminotomy produces long-term relief of leg and back pain in 80–90% of patients. The usefulness of therapeutic facet joint blocks for pain is controversial. Medical causes of lumbar or cervical radiculopathy unrelated to anatomic spine disease include infections (e.g., herpes zoster, Lyme disease), carcinomatous meningitis, and root avulsion or traction (severe trauma).
FIGURE 22-6 Right L5 radiculopathy. A. Sagittal T2-weighted image. There is normal high signal around the exiting right L4 nerve root in the right neural foramen at L4-L5; effacement of the high signal in the right L5-S1 foramen is present one level caudal on the right at L5-S1. B. Axial T2-weighted image. The lateral recesses are normal bilaterally; the intervertebral foramen is normal on the left but severely stenotic on the right. *Severe right L5-S1 foraminal stenosis.
Spondylosis, or osteoarthritic spine disease, typically occurs in later life and primarily involves the cervical and lumbosacral spine. Patients often complain of back pain that increases with movement, is associated with stiffness, and is better when inactive. The relationship between clinical symptoms and radiologic findings is usually not straightforward. Pain may be prominent when x-ray, CT, or MRI findings are minimal, and prominent degenerative spine disease can be seen in asymptomatic patients. Osteophytes or combined disk-osteophytes may cause or contribute to central spinal canal stenosis, lateral recess stenosis, or neural foraminal narrowing.
Spondylolisthesis is the anterior slippage of the vertebral body, pedicles, and superior articular facets, leaving the posterior elements behind. Spondylolisthesis can be associated with spondylolysis, congenital anomalies, degenerative spine disease, or other causes of mechanical weakness of the pars (e.g., infection, osteoporosis, tumor, trauma, prior surgery). The slippage may be asymptomatic or may cause low back pain and hamstring tightness, nerve root injury (the L5 root most frequently), symptomatic spinal stenosis, or CES in severe cases.
Tenderness may be elicited near the segment that has "slipped" forward (most often L4 on L5 or occasionally L5 on S1). Focal anterolisthesis or retrolisthesis can occur at any cervical or lumbar level and be the source of neck or low back pain. Plain x-rays with the neck or low back in flexion and extension will reveal the movement at the abnormal spinal segment. Surgery is considered for pain symptoms that do not respond to conservative measures (e.g., rest, physical therapy) and in cases with progressive neurologic deficit, postural deformity, slippage >50%, or scoliosis.
Back pain is the most common neurologic symptom in patients with systemic cancer and is the presenting symptom in 20% of patients. The cause is usually vertebral body metastasis but can also result from spread of cancer through the intervertebral foramen (especially with lymphoma), from carcinomatous meningitis, or from metastasis to the spinal cord. Cancer-related back pain tends to be constant, dull, unrelieved by rest, and worse at night. By contrast, mechanical low back pain usually improves with rest. MRI, CT, and CT myelography are the studies of choice when spinal metastasis is suspected. Once a metastasis is found, imaging of the entire spine reveals additional tumor deposits in one-third of patients. MRI is preferred for soft tissue definition, but the most rapidly available imaging modality is best because the patient's condition may worsen quickly without intervention. Fewer than 5% of patients who are nonambulatory at the time of diagnosis ever regain the ability to walk; thus, early diagnosis is crucial. The management of spinal metastasis is discussed in detail in Chap. 118.
Vertebral osteomyelitis is often caused by staphylococci, but other bacteria or tuberculosis (Pott's disease) may be responsible. The primary source of infection is usually the urinary tract, skin, or lungs. Intravenous drug use is a well-recognized risk factor.
Whenever pyogenic osteomyelitis is found, the possibility of bacterial endocarditis should be considered. Back pain unrelieved by rest, spine tenderness over the involved spine segment, and an elevated ESR are the most common findings in vertebral osteomyelitis. Fever or an elevated white blood cell count is found in a minority of patients. MRI and CT are sensitive and specific for early detection of osteomyelitis; CT may be more readily available in emergency settings and better tolerated by some patients with severe back pain. The intervertebral disk can also be affected by infection (diskitis) and, very rarely, by tumor.
Spinal epidural abscess (Chap. 456) presents with back pain (aggravated by movement or palpation), fever, radiculopathy, or signs of spinal cord compression. The subacute development of two or more of these findings should increase the index of suspicion for spinal epidural abscess. The abscess may track over multiple spinal levels and is best delineated by spine MRI.
Lumbar adhesive arachnoiditis with radiculopathy is due to fibrosis following inflammation within the subarachnoid space. The fibrosis results in nerve root adhesions and presents as back and leg pain associated with focal motor, sensory, or reflex changes. Causes of arachnoiditis include multiple lumbar operations, chronic spinal infections (especially tuberculosis in the developing world), spinal cord injury, intrathecal hemorrhage, myelography (rare), intrathecal injections (glucocorticoids, anesthetics, or other agents), and foreign bodies. The MRI shows clumped nerve roots or loculations of cerebrospinal fluid within the thecal sac. Clumped nerve roots may also occur with demyelinating polyneuropathy or neoplastic infiltration. Treatment is usually unsatisfactory. Microsurgical lysis of adhesions, dorsal rhizotomy, dorsal root ganglionectomy, and epidural glucocorticoids have been tried, but outcomes have been poor.
Dorsal column stimulation for pain relief has produced varying results.
TRAUMA A patient complaining of back pain and an inability to move the legs may have a spine fracture or dislocation; with fractures above L1, the spinal cord is at risk for compression. Care must be taken to avoid further damage to the spinal cord or nerve roots by immobilizing the back or neck pending the results of radiologic studies. Vertebral fractures frequently occur in the absence of trauma in association with osteoporosis, glucocorticoid use, osteomyelitis, or neoplastic infiltration.
Sprains and Strains The terms low back sprain, strain, and mechanically induced muscle spasm refer to minor, self-limited injuries associated with lifting a heavy object, a fall, or a sudden deceleration such as in an automobile accident. These terms are used loosely and do not clearly describe a specific anatomic lesion. The pain is usually confined to the lower back, and there is no radiation to the buttocks or legs. Patients with paraspinal muscle spasm often assume unusual postures.
Traumatic Vertebral Fractures Most traumatic fractures of the lumbar vertebral bodies result from injuries producing anterior wedging or compression. With severe trauma, the patient may sustain a fracture-dislocation or a "burst" fracture involving the vertebral body and posterior elements. Traumatic vertebral fractures are caused by falls from a height, sudden deceleration in an automobile accident, or direct injury. Neurologic impairment is common, and early surgical treatment is indicated. In victims of blunt trauma, CT scans of the chest, abdomen, or pelvis can be reformatted to detect associated vertebral fractures.
METABOLIC CAUSES
Osteoporosis and Osteosclerosis Immobilization, osteomalacia, the postmenopausal state, renal disease, multiple myeloma, hyperparathyroidism, hyperthyroidism, metastatic carcinoma, or glucocorticoid use may accelerate osteoporosis and weaken the vertebral body, leading to compression fractures and pain. Up to two-thirds of compression fractures seen on radiologic imaging are asymptomatic. The most common nontraumatic vertebral body fractures are due to postmenopausal or senile osteoporosis (Chap. 425). The risk of an additional vertebral fracture at 1 year following a first vertebral fracture is 20%. The presence of fever, weight loss, fracture at a level above T4, or the conditions described above should increase suspicion for a cause other than senile osteoporosis. The sole manifestation of a compression fracture may be localized back or radicular pain exacerbated by movement and often reproduced by palpation over the spinous process of the affected vertebra. Relief of acute pain can often be achieved with acetaminophen or a combination of opioids and acetaminophen. The role of NSAIDs is controversial. Both pain and disability are improved with bracing. Antiresorptive drugs, especially bisphosphonates (e.g., alendronate), have been shown to reduce the risk of osteoporotic fractures and are the preferred treatment to prevent additional fractures. Less than one-third of patients with prior compression fractures are adequately treated for osteoporosis despite the increased risk for future fractures; even fewer at-risk patients without a history of fracture are adequately treated. Given the negative results of sham-controlled studies of percutaneous vertebroplasty (PVP) and of kyphoplasty for osteoporotic compression fractures associated with debilitating pain, these procedures are not routinely recommended.
Osteosclerosis, an abnormally increased bone density often due to Paget's disease, is readily identifiable on routine x-ray studies and can sometimes be a source of back pain. It may be associated with an isolated increase in alkaline phosphatase in an otherwise healthy older person. Spinal cord or nerve root compression can result from bony encroachment. The diagnosis of Paget's disease as the cause of a patient's back pain is a diagnosis of exclusion. For further discussion of these bone disorders, see Chaps. 424, 425, and 426e.
Autoimmune inflammatory disease of the spine can present with the insidious onset of low back, buttock, or neck pain. Examples include rheumatoid arthritis (Chap. 380), ankylosing spondylitis, reactive arthritis, psoriatic arthritis, and inflammatory bowel disease (Chap. 384).
Spondylolysis is a bony defect in the vertebral pars interarticularis (a segment near the junction of the pedicle with the lamina); the cause is usually a stress microfracture in a congenitally abnormal segment. It occurs in up to 6% of adolescents. The defect (usually bilateral) is best visualized on plain x-rays, CT scan, or bone scan and is frequently asymptomatic. Symptoms may occur in the setting of a single injury, repeated minor injuries, or a growth spurt. Spondylolysis is the most common cause of persistent low back pain in adolescents and is often associated with sports-related activities.
Scoliosis refers to an abnormal curvature in the coronal (lateral) plane of the spine. With kyphoscoliosis, there is, in addition, a forward curvature of the spine. The abnormal curvature may be congenital due to abnormal spine development, acquired in adulthood due to degenerative spine disease, or occasionally progressive due to neuromuscular disease. The deformity can progress until ambulation or pulmonary function is compromised.
Spina bifida occulta is a failure of closure of one or several vertebral arches posteriorly; the meninges and spinal cord are normal.
A dimple or small lipoma may overlie the defect. Most cases are asymptomatic and discovered incidentally during an evaluation for back pain.
Tethered cord syndrome usually presents as a progressive cauda equina disorder (see below), although myelopathy may also be the initial manifestation. The patient is often a young adult who complains of perineal or perianal pain, sometimes following minor trauma. MRI studies reveal a low-lying conus (below L1 and L2) and a short and thickened filum terminale.
Diseases of the thorax, abdomen, or pelvis may refer pain to the posterior portion of the spinal segment that innervates the diseased organ. Occasionally, back pain may be the first and only manifestation. Upper abdominal diseases generally refer pain to the lower thoracic or upper lumbar region (eighth thoracic to the first and second lumbar vertebrae), lower abdominal diseases to the midlumbar region (second to fourth lumbar vertebrae), and pelvic diseases to the sacral region. Local signs (pain with spine palpation, paraspinal muscle spasm) are absent, and little or no pain accompanies routine movements of the spine.
Low Thoracic or Lumbar Pain with Abdominal Disease Tumors of the posterior wall of the stomach or duodenum typically produce epigastric pain (Chaps. 109 and 348), but midline back or paraspinal pain may occur if retroperitoneal extension is present. Fatty foods occasionally induce back pain associated with biliary disease. Diseases of the pancreas can produce right or left paraspinal back pain. Pathology in retroperitoneal structures (hemorrhage, tumors, pyelonephritis) can produce paraspinal pain that radiates to the lower abdomen, groin, or anterior thighs. A mass in the iliopsoas region can produce unilateral lumbar pain with radiation toward the groin, labia, or testicle. The sudden appearance of lumbar pain in a patient receiving anticoagulants suggests retroperitoneal hemorrhage.
Isolated low back pain occurs in some patients with a contained rupture of an abdominal aortic aneurysm (AAA). The classic clinical triad of abdominal pain, shock, and back pain occurs in <20% of patients. The typical patient at risk is an elderly male smoker with back pain. The diagnosis may be missed because the symptoms and signs can be nonspecific. Misdiagnoses include nonspecific back pain, diverticulitis, renal colic, sepsis, and myocardial infarction. A careful abdominal examination revealing a pulsatile mass (present in 50–75% of patients) is an important physical finding. Patients with suspected AAA should be evaluated with abdominal ultrasound, CT, or MRI (Chap. 301).
Sacral Pain with Gynecologic and Urologic Disease Pelvic organs rarely cause low back pain, except for gynecologic disorders involving the uterosacral ligaments. The pain is referred to the sacral region. Endometriosis or uterine cancers may invade the uterosacral ligaments. Pain associated with endometriosis is typically premenstrual and often continues until it merges with menstrual pain. Uterine malposition (retroversion, descensus, and prolapse) may cause uterosacral ligament traction or produce sacral pain after prolonged standing. Menstrual pain may be felt in the sacral region, sometimes with poorly localized, cramping pain radiating down the legs. Pain due to neoplastic infiltration of nerves is typically continuous, progressive in severity, and unrelieved by rest at night. Less commonly, radiation therapy of pelvic tumors may produce sacral pain from late radiation necrosis of tissue. Low back pain that radiates into one or both thighs is common in the last weeks of pregnancy.
Urologic sources of lumbosacral back pain include chronic prostatitis, prostate cancer with spinal metastasis (Chap. 115), and diseases of the kidney or ureter. Lesions of the bladder and testes do not often produce back pain.
Infectious, inflammatory, or neoplastic renal diseases may produce ipsilateral lumbosacral pain, as can renal artery or vein thrombosis. Paraspinal lumbar pain may be a symptom of ureteral obstruction due to nephrolithiasis.
OTHER CAUSES OF BACK PAIN
Postural Back Pain There is a group of patients with nonspecific chronic low back pain (CLBP) in whom no specific anatomic lesion can be found despite exhaustive investigation. These individuals complain of vague, diffuse back pain with prolonged sitting or standing that is relieved by rest. Exercises to strengthen the paraspinal and abdominal muscles are sometimes helpful.
Psychiatric Disease CLBP may be encountered in patients who seek financial compensation, in malingerers, or in those with concurrent substance abuse. Many patients with CLBP have a history of psychiatric illness (depression, anxiety states) or childhood trauma (physical or sexual abuse) that antedates the onset of back pain. Preoperative psychological assessment has been used to exclude patients with marked psychological impairments that predict a poor surgical outcome from spine surgery.
The cause of low back pain occasionally remains unclear. Some patients have had multiple operations for disk disease but have persistent pain and disability. The original indications for surgery may have been questionable, with back pain only, no definite neurologic signs, or a minor disk bulge noted on CT or MRI. Scoring systems based on neurologic signs, psychological factors, physiologic studies, and imaging studies have been devised to minimize the likelihood of unsuccessful surgery.
HEALTH CARE FOR POPULATIONS OF BACK PAIN PATIENTS: A CLINICAL CARE SYSTEMS VIEW
There are increasing pressures to contain health care costs, especially when expensive care is not based on sound evidence. Physicians, patients, the insurance industry, and government providers of health care will need to work together to ensure cost-effective care for patients with back pain.
Surveys in the United States indicate that patients with back pain have reported progressively worse functional limitations in recent years, despite rapid increases in spine imaging, opioid prescribing, injections, and spine surgery. This suggests that more selective use of diagnostic and treatment modalities may be appropriate. Spine imaging often reveals abnormalities of dubious clinical relevance that may alarm clinicians and patients and prompt further testing and unnecessary therapy. Both randomized trials and observational studies have suggested a “cascade effect” of imaging, which may create a gateway to other unnecessary care. Based in part on such evidence, the American College of Physicians has made parsimonious spine imaging a high priority in its “Choosing Wisely” campaign, aimed at reducing unnecessary care. Successful efforts to reduce unnecessary imaging have included physician education by clinical leaders, computerized decision support to identify recent imaging tests and eliminate duplication, and requiring an approved indication to order an imaging test. Other strategies have included audit and feedback regarding individual practitioners’ rates of ordering and indications and facilitating rapid access to physical therapy for patients who do not need imaging. When imaging tests are reported, it may also be useful to routinely note that some degenerative findings are common in normal, pain-free individuals. In an observational study, this strategy was associated with lower rates of repeat imaging, opioid therapy, and referral for physical therapy. Mounting evidence of morbidities from long-term opioid therapy (including overdose, dependency, addiction, falls, fractures, accident risk, and sexual dysfunction) has prompted efforts to reduce use for chronic pain, including back pain (Chap. 18). Safety may be improved with automated reminders for high doses, early refills, or overlapping opioid and benzodiazepine prescriptions. 
Greater access to alternative treatments for chronic pain, such as tailored exercise programs and cognitive-behavioral therapy, may also reduce opioid prescribing. The high cost, wide geographic variations, and rapidly increasing rates of spinal fusion surgery have prompted scrutiny of its appropriate indications. Some insurance carriers have begun to limit coverage for the most controversial indications, such as low back pain without radiculopathy. Finally, educating patients and the public about the risks of imaging and excessive therapy may be necessary. A media campaign in Australia provides a successful model for this approach.
TREATMENT: ACUTE LOW BACK PAIN
ALBP is defined as pain of <3 months in duration. Full recovery can be expected in more than 85% of adults with ALBP without leg pain. Most have purely "mechanical" symptoms (i.e., pain that is aggravated by motion and relieved by rest). The initial assessment excludes serious causes of spine pathology that require urgent intervention, including infection, cancer, and trauma. Risk factors for a serious cause of ALBP are shown in Table 22-1. Laboratory and imaging studies are unnecessary if risk factors are absent. CT, MRI, or plain spine films are rarely indicated in the first month of symptoms unless a spine fracture, tumor, or infection is suspected.
The prognosis is generally excellent. Many patients do not seek medical care and improve on their own. Even among those seen in primary care, two-thirds report being substantially improved after 7 weeks. This spontaneous improvement can mislead clinicians and researchers about the efficacy of treatment interventions unless treatments are subjected to rigorous prospective trials. Many treatments commonly used in the past but now known to be ineffective, including bed rest, lumbar traction, and coccygectomy, have been largely abandoned. Clinicians should reassure patients that improvement is very likely and instruct them in self-care. Education is an important part of treatment.
Satisfaction and the likelihood of follow-up increase when patients are educated about prognosis, treatment methods, activity modifications, and strategies to prevent future exacerbations. Patients who report that they did not receive an adequate explanation for their symptoms are likely to request further diagnostic tests. In general, bed rest should be avoided, or kept to a day or two at most, even for relief of severe symptoms. Several randomized trials suggest that bed rest does not hasten the pace of recovery. In general, the best activity recommendation is early resumption of normal physical activity, avoiding only strenuous manual labor. Possible advantages of early ambulation for ALBP include maintenance of cardiovascular conditioning, improved disk and cartilage nutrition, improved bone and muscle strength, and increased endorphin levels. Specific back exercises or early vigorous exercise have not shown benefit for acute back pain but may be useful for chronic pain. Use of heating pads or blankets is sometimes helpful. Evidence-based guidelines recommend over-the-counter medicines such as acetaminophen and NSAIDs as first-line options for treatment of ALBP. In otherwise healthy patients, a trial of acetaminophen can be followed by NSAIDs for time-limited periods. In theory, the anti-inflammatory effects of NSAIDs might provide an advantage over acetaminophen in suppressing the inflammatory changes that accompany many causes of ALBP, but in practice there is no clinical evidence to support the superiority of NSAIDs. The risk of renal and gastrointestinal toxicity with NSAIDs is increased in patients with preexisting medical comorbidities (e.g., renal insufficiency, cirrhosis, prior gastrointestinal hemorrhage, use of anticoagulants or steroids, heart failure). Skeletal muscle relaxants, such as cyclobenzaprine or methocarbamol, may be useful, but sedation is a common side effect. 
Limiting the use of muscle relaxants to nighttime only may be an option for patients with back pain that interferes with sleep. There is no good evidence to support the use of opioid analgesics or tramadol as first-line therapy for ALBP. Their use is best reserved for patients who cannot tolerate acetaminophen or NSAIDs or for those with severe refractory pain. As with muscle relaxants, these drugs are often sedating, so it may be useful to prescribe them at nighttime only. Side effects of short-term opioid use include nausea, constipation, and pruritus; risks of long-term opioid use include hypersensitivity to pain, hypogonadism, and dependency. Falls, fractures, driving accidents, and fecal impaction are other risks. Clinical efficacy of opioids beyond 16 weeks of use is unproven. There is no evidence to support use of oral or injected glucocorticoids for ALBP without radiculopathy. Similarly, therapies for neuropathic pain, such as gabapentin or tricyclic antidepressants, are not indicated for ALBP. Nonpharmacologic treatments for ALBP include spinal manipulation, exercise, physical therapy, massage, acupuncture, transcutaneous electrical nerve stimulation, and ultrasound. Spinal manipulation appears to be roughly equivalent to conventional medical treatments and may be a useful alternative for patients who wish to avoid or who cannot tolerate drug therapy. There is little evidence to support the use of physical therapy, massage, acupuncture, laser therapy, therapeutic ultrasound, corsets, or lumbar traction. Although important for chronic pain, back exercises for ALBP are generally not supported by clinical evidence. There is no convincing evidence regarding the value of ice or heat applications for ALBP; however, many patients report temporary symptomatic relief from ice or frozen gel packs, and heat may produce a short-term reduction in pain after the first week. 
Patients often report improved satisfaction with the care that they receive when they actively participate in the selection of symptomatic approaches that are tried. Chronic low back pain (CLBP) is defined as pain lasting >12 weeks; it accounts for 50% of total back pain costs. Risk factors include obesity, female gender, older age, prior history of back pain, restricted spinal mobility, pain radiating into a leg, high levels of psychological distress, poor self-rated health, minimal physical activity, smoking, job dissatisfaction, and widespread pain. In general, the same treatments that are recommended for ALBP can be useful for patients with CLBP. In this setting, however, the long-term benefit of opioid therapy or muscle relaxants is less clear. Evidence supports the use of exercise therapy, and this can be one of the mainstays of treatment for CLBP. Effective regimens have generally included a combination of gradually increasing aerobic exercise, strengthening exercises, and stretching exercises. Motivating patients is sometimes challenging, and in this setting, a program of supervised exercise can improve compliance. In general, activity tolerance is the primary goal, while pain relief is secondary. Supervised intensive physical exercise or “work hardening” regimens have been effective in returning some patients to work, improving walking distance, and reducing pain. In addition, some forms of yoga have been evaluated in randomized trials and may be helpful for patients who are interested. A long-term benefit of spinal manipulation or massage for CLBP is unproven. Medications for CLBP may include acetaminophen, NSAIDs, and tricyclic antidepressants. Trials of tricyclics suggest benefit even for patients without evidence of depression. Trials do not support the efficacy of selective serotonin reuptake inhibitors (SSRIs) for CLBP. However, depression is common among patients with chronic pain and should be appropriately treated. 
Cognitive-behavioral therapy is based on evidence that psychological and social factors, as well as somatic pathology, are important in the genesis of chronic pain and disability. Cognitive-behavioral therapy includes efforts to identify and modify patients’ thinking about their pain and disability. A systematic review concluded that such treatments are more effective than a waiting list control group for short-term pain relief; however, long-term results remain unclear. Behavioral treatments may have effects similar in magnitude to exercise therapy. Back pain is the most frequent reason for seeking complementary and alternative treatments. The most common of these for back pain are spinal manipulation, acupuncture, and massage. The role of most complementary and alternative medicine approaches remains unclear. Biofeedback has not been studied rigorously. There is no convincing evidence that either spinal manipulation or transcutaneous electrical nerve stimulation (TENS) is effective in treating CLBP. Rigorous recent trials of acupuncture suggest that true acupuncture is not superior to sham acupuncture, but that both may offer an advantage over routine care. Whether this is due entirely to placebo effects provided even by sham acupuncture is uncertain. Some trials of massage therapy have been encouraging, but this has been less well studied than spinal manipulation or acupuncture. Various injections, including epidural glucocorticoid injections, facet joint injections, and trigger point injections, have been used for treating CLBP. However, in the absence of radiculopathy, there is no evidence that these approaches are effective. Injection studies are sometimes used diagnostically to help determine the anatomic source of back pain. The use of discography to provide evidence that a specific disk is the pain generator is not recommended. 
Pain relief following a glucocorticoid injection into a facet is commonly used as evidence that the facet joint is the pain source; however, the possibility that the response was a placebo effect or due to systemic absorption of the glucocorticoids is difficult to exclude. Another category of intervention for chronic back pain is electrothermal and radiofrequency therapy. Intradiskal therapy has been proposed using both types of energy to thermocoagulate and destroy nerves in the intervertebral disk, using specially designed catheters or electrodes. Current evidence does not support the use of these intradiskal therapies. Radiofrequency denervation is sometimes used to destroy nerves that are thought to mediate pain, and this technique has been used for facet joint pain (with the target nerve being the medial branch of the primary dorsal ramus), for back pain thought to arise from the intervertebral disk (ramus communicans), and for radicular back pain (dorsal root ganglia). A few small trials have produced conflicting results for facet joint and diskogenic pain. A trial in patients with chronic radicular pain found no difference between radiofrequency denervation of the dorsal root ganglia and sham treatment. These interventional therapies have not been studied in sufficient detail to draw conclusions about their value for CLBP. Surgical intervention for CLBP without radiculopathy has been evaluated in a small number of randomized trials, all conducted in Europe. Each of these studies included patients with back pain and a degenerative disk, but no sciatica. Three of the four trials concluded that lumbar fusion surgery was no more effective than highly structured, rigorous rehabilitation combined with cognitive-behavioral therapy. The fourth trial found an advantage of fusion surgery over haphazard “usual care,” which appeared to be less effective than the structured rehabilitation in other trials. 
Given conflicting evidence, indications for surgery for CLBP without radiculopathy have remained controversial. Both U.S. and British guidelines suggest considering referral for an opinion on spinal fusion for people who have completed an optimal nonsurgical treatment program (including combined physical and psychological treatment) and who have persistent severe back pain for which they would consider surgery. Lumbar disk replacement with prosthetic disks is U.S. Food and Drug Administration approved for uncomplicated patients needing single-level surgery at the L3-S1 levels. The disks are generally designed as metal plates with a polyethylene cushion sandwiched in between. The trials that led to approval of these devices compared them to spine fusion and concluded that the artificial disks were “not inferior.” Serious complications are somewhat more likely with the artificial disk. This treatment remains controversial for CLBP. Intensive multidisciplinary rehabilitation programs may involve daily or frequent physical therapy, exercise, cognitive-behavioral therapy, a workplace evaluation, and other interventions. For patients who have not responded to other approaches, such programs appear to offer some benefit. Systematic reviews suggest that the evidence is limited and benefits are incremental. Some observers have raised concern that CLBP may often be overtreated. For CLBP without radiculopathy, new British guidelines explicitly recommend against use of SSRIs, any type of injection, TENS, lumbar supports, traction, radiofrequency facet joint denervation, intradiskal electrothermal therapy, or intradiskal radiofrequency thermocoagulation. These treatments are also not recommended in guidelines from the American College of Physicians and the American Pain Society. On the other hand, exercise therapy and treatment of depression appear to be useful and underused. 
A common cause of back pain with radiculopathy is a herniated disk with nerve root impingement, resulting in back pain with radiation down the leg. The term sciatica is used when the leg pain radiates posteriorly in a sciatic or L5/S1 distribution. The prognosis for acute low back and leg pain with radiculopathy due to disk herniation is generally favorable, with most patients showing substantial improvement over months. Serial imaging studies suggest spontaneous regression of the herniated portion of the disk in two-thirds of patients over 6 months. Nonetheless, there are several important treatment options to provide symptomatic relief while this natural healing process unfolds. Resumption of normal activity is recommended. Randomized trial evidence suggests that bed rest is ineffective for treating sciatica as well as back pain alone. Acetaminophen and NSAIDs are useful for pain relief, although severe pain may require short courses of opioid analgesics. Epidural glucocorticoid injections have a role in providing temporary symptom relief for sciatica due to a herniated disk. However, there does not appear to be a benefit in terms of reducing subsequent surgical interventions. Diagnostic nerve root blocks have been advocated to determine if pain originates from a specific nerve root. However, improvement may result even when the nerve root is not responsible for the pain; this may occur as a placebo effect, from a pain-generating lesion located distally along the peripheral nerve, or from effects of systemic absorption. The utility of diagnostic nerve root blocks remains a subject of debate. Surgical intervention is indicated for patients who have progressive motor weakness due to nerve root injury demonstrated on clinical examination or EMG. 
Urgent surgery is recommended for patients who have evidence of CES or spinal cord compression, generally suggested by bowel or bladder dysfunction, diminished sensation in a saddle distribution, a sensory level on the trunk, and bilateral leg weakness or spasticity. Surgery is also an important option for patients who have disabling radicular pain despite optimal conservative treatment. Sciatica is perhaps the most common reason for recommending spine surgery. Because patients with a herniated disk and sciatica generally experience rapid improvement over a matter of weeks, most experts do not recommend considering surgery unless the patient has failed to respond to 6–8 weeks of maximum nonsurgical management. For patients who have not improved, randomized trials indicate that, compared to nonsurgical treatment, surgery results in more rapid pain relief. However, after the first year or two of follow-up, patients with sciatica appear to have much the same level of pain relief and functional improvement with or without surgery. Thus, both treatment approaches are reasonable, and patient preferences and needs (e.g., rapid return to employment) strongly influence decision making. Some patients will want the fastest possible relief and find surgical risks acceptable. Others will be more risk-averse and more tolerant of symptoms and will choose watchful waiting if they understand that improvement is likely in the end. The usual surgical procedure is a partial hemilaminectomy with excision of the prolapsed disk (diskectomy). Fusion of the involved lumbar segments should be considered only if significant spinal instability is present (i.e., degenerative spondylolisthesis). The costs associated with lumbar interbody fusion have increased dramatically in recent years. There are no large prospective, randomized trials comparing fusion to other types of surgical intervention. 
In one study, patients with persistent low back pain despite an initial diskectomy fared no better with spine fusion than with a conservative regimen of cognitive intervention and exercise. Artificial disks have been in use in Europe for the past decade; their utility remains controversial in the United States. 
Neck pain, which usually arises from diseases of the cervical spine and soft tissues of the neck, is common. Neck pain arising from the cervical spine is typically precipitated by movement and may be accompanied by focal tenderness and limitation of motion. Many of the prior comments made regarding causes of low back pain also apply to disorders of the cervical spine. The text below will emphasize differences. Pain arising from the brachial plexus, shoulder, or peripheral nerves can be confused with cervical spine disease (Table 22-4), but the history and examination usually identify a more distal origin for the pain. Cervical spine trauma, disk disease, or spondylosis with intervertebral foraminal narrowing may be asymptomatic or painful and can produce a myelopathy, radiculopathy, or both. The same risk factors for serious causes of low back pain also apply to neck pain, with the additional feature that neurologic signs of myelopathy (incontinence, sensory level, spastic legs) may also occur. Lhermitte’s sign, an electrical shock down the spine with neck flexion, suggests involvement of the cervical spinal cord. Trauma to the cervical spine (fractures, subluxation) places the spinal cord at risk for compression. Motor vehicle accidents, violent crimes, or falls account for 87% of cervical spinal cord injuries (Chap. 456). Immediate immobilization of the neck is essential to minimize further spinal cord injury from movement of unstable cervical spine segments. The decision to obtain imaging should be based on the nature of the injury. 
The NEXUS low-risk criteria established that alert patients without midline tenderness on palpation, intoxication, neurologic deficits, or painful distracting injuries are very unlikely to have sustained a clinically significant traumatic injury to the cervical spine. The Canadian C-spine rule recommends that imaging be obtained following neck region trauma if the patient is >65 years old or has limb paresthesias or if there was a dangerous mechanism for the injury (e.g., bicycle collision with tree or parked car, fall from height >3 feet or five stairs, diving accident). These guidelines are helpful but must be tailored to individual circumstances; for example, patients with advanced osteoporosis, glucocorticoid use, or cancer may warrant imaging after even mild trauma. A CT scan is the diagnostic procedure of choice for detection of acute fractures following severe trauma; plain x-rays can be used for lesser degrees of trauma. When traumatic injury to the vertebral arteries or cervical spinal cord is suspected, visualization by MRI with magnetic resonance angiography is preferred. Whiplash injury is due to rapid flexion and extension of the neck, usually in automobile accidents. The exact mechanism of the injury is unclear. This diagnosis should not be applied to patients with fractures, disk herniation, head injury, focal neurologic findings, or altered consciousness. Up to 50% of persons reporting whiplash injury acutely have persistent neck pain 1 year later. 
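The imaging decision rules described above reduce to a short checklist. As an illustration only, and emphatically not a clinical tool, the high-risk branch of the Canadian C-spine rule summarized in the text can be sketched in Python (function and field names are hypothetical):

```python
# Illustrative sketch only, not a clinical tool: the high-risk branch of the
# Canadian C-spine rule as summarized in the text (image after neck trauma if
# the patient is >65 years old, has limb paresthesias, or had a dangerous
# injury mechanism). Names are hypothetical.

DANGEROUS_MECHANISMS = {
    "bicycle collision",   # e.g., with a tree or parked car
    "fall from height",    # >3 feet or five stairs
    "diving accident",
}

def imaging_indicated(age: int, limb_paresthesias: bool, mechanism: str) -> bool:
    """Return True if any high-risk criterion summarized in the text is met."""
    return (
        age > 65
        or limb_paresthesias
        or mechanism in DANGEROUS_MECHANISMS
    )

# A 70-year-old after a low-speed collision meets the age criterion alone.
print(imaging_indicated(70, False, "low-speed collision"))  # True
```

As the text notes, any such rule must still be tailored to individual circumstances (e.g., osteoporosis, glucocorticoid use, cancer), which a simple checklist like this does not capture.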
Table 22-4 Cervical Radiculopathy: Neurologic Features 
C5 — Reflex: biceps. Sensory: lateral deltoid. Pain distribution: lateral arm, medial scapula. Motor examination: rhomboids* (elbow extends backward with hand on hip), infraspinatus* (arm rotates externally with elbow flexed at the side), deltoid* (arm raised laterally 30–45° from the side). 
C6 — Reflex: biceps. Sensory: thumb/index fingers; dorsal hand/lateral forearm. Pain distribution: lateral forearm, thumb/index fingers. Motor examination: biceps* (arm flexed at the elbow in supination), pronator teres (forearm pronated). 
C7 — Reflex: triceps. Sensory: middle fingers; dorsal forearm. Pain distribution: posterior arm, dorsal forearm, dorsal hand. Motor examination: triceps* (forearm extension, flexed at elbow), wrist/finger extensors*. 
C8 — Reflex: finger flexors. Sensory: palmar surface of little finger; medial hand and forearm. Pain distribution: fourth and fifth fingers, medial hand and forearm. Motor examination: abductor pollicis brevis (abduction of thumb), first dorsal interosseous (abduction of index finger), abductor digiti minimi (abduction of little finger). 
T1 — Reflex: finger flexors. Sensory: axilla and medial arm. Pain distribution: medial arm, axilla. Motor examination: abductor pollicis brevis (abduction of thumb), first dorsal interosseous (abduction of index finger), abductor digiti minimi (abduction of little finger). 
*These muscles receive the majority of innervation from this root. 
Once personal compensation for pain and suffering was removed from the Australian health care system, the prognosis for recovery at 1 year from whiplash injury improved as well. Imaging of the cervical spine is not cost-effective acutely but is useful to detect disk herniations when symptoms persist for >6 weeks following the injury. Severe initial symptoms have been associated with a poor long-term outcome. Herniation of a lower cervical disk is a common cause of pain or tingling in the neck, shoulder, arm, or hand. Neck pain, stiffness, and a range of motion limited by pain are the usual manifestations. Herniated cervical disks are responsible for ~25% of cervical radiculopathies. Extension and lateral rotation of the neck narrow the ipsilateral intervertebral foramen and may reproduce radicular symptoms (Spurling’s sign). 
In young adults, acute nerve root compression from a ruptured cervical disk is often due to trauma. Cervical disk herniations are usually posterolateral near the lateral recess. Typical patterns of reflex, sensory, and motor changes that accompany cervical nerve root lesions are summarized in Table 22-4. Although the classic patterns are clinically helpful, there are numerous exceptions because (1) there is overlap in sensory function between adjacent nerve roots, (2) symptoms and signs may be evident in only part of the injured nerve root territory, and (3) the location of pain is the most variable of the clinical features. Osteoarthritis of the cervical spine may produce neck pain that radiates into the back of the head, shoulders, or arms, or may be the source of headaches in the posterior occipital region (supplied by the C2-C4 nerve roots). Osteophytes, disk protrusions, or hypertrophic facet or uncovertebral joints may alone or in combination compress one or several nerve roots at the intervertebral foramina; these causes together account for 75% of cervical radiculopathies. The roots most commonly affected are C7 and C6. Narrowing of the spinal canal by osteophytes, ossification of the posterior longitudinal ligament (OPLL), or a large central disk may compress the cervical spinal cord and produce signs of radiculopathy and myelopathy in combination (myeloradiculopathy). When little or no neck pain accompanies cervical cord involvement, other diagnoses to be considered include amyotrophic lateral sclerosis (Chap. 452), multiple sclerosis (Chap. 458), spinal cord tumors, and syringomyelia (Chap. 456). The possibility of cervical spondylosis should be considered even when the patient presents with symptoms or signs in the legs only. MRI is the study of choice to define anatomic abnormalities of soft tissues in the cervical region, including the spinal cord, but plain CT is adequate to assess bony spurs, foraminal narrowing, lateral recess stenosis, or OPLL. 
EMG and nerve conduction studies can localize and assess the severity of nerve root injury. Rheumatoid arthritis (RA) (Chap. 380) of the cervical facet joints produces neck pain, stiffness, and limitation of motion. Synovitis of the atlantoaxial joint (C1-C2; Fig. 22-2) may damage the transverse ligament of the atlas, producing forward displacement of the atlas on the axis (atlantoaxial subluxation). Radiologic evidence of atlantoaxial subluxation occurs in up to 30% of patients with RA. The degree of subluxation correlates with the severity of erosive disease. When subluxation is present, careful assessment is important to identify early signs of myelopathy. Occasional patients develop high spinal cord compression leading to quadriparesis, respiratory insufficiency, and death. Surgery should be considered when myelopathy or spinal instability is present. MRI is the imaging modality of choice. Ankylosing spondylitis can cause neck pain and, less commonly, atlantoaxial subluxation; surgery may be required to prevent spinal cord compression. Acute herpes zoster can present as acute posterior occipital or neck pain prior to the outbreak of vesicles. Neoplasms metastatic to the cervical spine, infections (osteomyelitis and epidural abscess), and metabolic bone diseases may be the cause of neck pain, as discussed above among the causes of low back pain. Neck pain may also be referred from the heart with coronary artery ischemia (cervical angina syndrome). The thoracic outlet contains the first rib, the subclavian artery and vein, the brachial plexus, the clavicle, and the lung apex. Injury to these structures may result in postural or movement-induced pain around the shoulder and supraclavicular region, classified as follows. 
True neurogenic thoracic outlet syndrome (TOS) is an uncommon disorder resulting from compression of the lower trunk of the brachial plexus or ventral rami of the C8 or T1 nerve roots, caused most often by an anomalous band of tissue connecting an elongate transverse process at C7 with the first rib. Pain is mild or may be absent. Signs include weakness and wasting of intrinsic muscles of the hand and diminished sensation on the palmar aspect of the fifth digit. An anteroposterior cervical spine x-ray will show an elongate C7 transverse process (an anatomic marker for the anomalous cartilaginous band), and EMG and nerve conduction studies confirm the diagnosis. Treatment consists of surgical resection of the anomalous band. The weakness and wasting of intrinsic hand muscles typically does not improve, but surgery halts the insidious progression of weakness. Arterial TOS results from compression of the subclavian artery by a cervical rib, resulting in poststenotic dilatation of the artery and in some cases secondary thrombus formation. Blood pressure is reduced in the affected limb, and signs of emboli may be present in the hand. Neurologic signs are absent. Ultrasound can confirm the diagnosis noninvasively. Treatment is with thrombolysis or anticoagulation (with or without embolectomy) and surgical excision of the cervical rib compressing the subclavian artery. Venous TOS is due to subclavian vein thrombosis resulting in swelling of the arm and pain. The vein may be compressed by a cervical rib or anomalous scalene muscle. Venography is the diagnostic test of choice. Disputed TOS accounts for 95% of patients diagnosed with TOS; chronic arm and shoulder pain are prominent and of unclear cause. The lack of sensitive and specific findings on physical examination or specific markers for this condition results in diagnostic uncertainty. The role of surgery in disputed TOS is controversial. 
Multidisciplinary pain management is a conservative approach, although treatment is often unsuccessful. Pain from injury to the brachial plexus or peripheral nerves of the arm can occasionally mimic referred pain of cervical spine origin including cervical radiculopathy. Neoplastic infiltration of the lower trunk of the brachial plexus may produce shoulder or supraclavicular pain radiating down the arm, numbness of the fourth and fifth fingers or medial forearm, and weakness of intrinsic hand muscles innervated by the ulnar and median nerves. Delayed radiation injury may produce similar findings, although pain is less often present and almost always less severe. A Pancoast tumor of the lung (Chap. 107) is another cause and should be considered, especially when a concurrent Horner’s syndrome is present. Suprascapular neuropathy may produce severe shoulder pain, weakness, and wasting of the supraspinatus and infraspinatus muscles. Acute brachial neuritis is often confused with radiculopathy; the acute onset of severe shoulder or scapular pain is followed typically over days by weakness of the proximal arm and shoulder girdle muscles innervated by the upper brachial plexus. The onset may be preceded by an infection, vaccination, or minor surgical procedure. The long thoracic nerve may be affected resulting in a winged scapula. Brachial neuritis may also present as an isolated paralysis of the diaphragm with or without involvement of other nerves of the upper limb. Recovery may take up to 3 years. Occasional cases of carpal tunnel syndrome produce pain and paresthesias extending into the forearm, arm, and shoulder resembling a C5 or C6 root lesion. Lesions of the radial or ulnar nerve can mimic a radiculopathy at C7 or C8, respectively. EMG and nerve conduction studies can accurately localize lesions to the nerve roots, brachial plexus, or peripheral nerves. For further discussion of peripheral nerve disorders, see Chap. 459. 
Pain arising from the shoulder can on occasion mimic pain from the spine. If symptoms and signs of radiculopathy are absent, then the differential diagnosis includes mechanical shoulder pain (tendonitis, bursitis, rotator cuff tear, dislocation, adhesive capsulitis, or rotator cuff impingement under the acromion) and referred pain (subdiaphragmatic irritation, angina, Pancoast tumor). Mechanical pain is often worse at night, associated with local shoulder tenderness and aggravated by passive abduction, internal rotation, or extension of the arm. Pain from shoulder disease may radiate into the arm or hand, but focal neurologic signs (sensory, motor, or reflex changes) are absent. The evidence regarding treatment for neck pain is less complete than that for low back pain, but the approach is remarkably similar in many respects. As with low back pain, spontaneous improvement is the norm for acute neck pain. The usual goals of therapy are to promote a rapid return to normal function and provide symptom relief while healing proceeds. The evidence in support of nonsurgical treatments for whiplash-associated disorders is generally of limited quality and neither supports nor refutes the common treatments used for symptom relief. Gentle mobilization of the cervical spine combined with exercise programs may be beneficial. Evidence is insufficient to recommend for or against the routine use of acupuncture, cervical traction, TENS, ultrasound, diathermy, or massage. Some patients obtain modest relief using a soft neck collar; there is little risk or cost. For patients with neck pain unassociated with trauma, supervised exercise with or without mobilization appears to be effective. Exercises often include shoulder rolls and neck stretches. The evidence for the use of muscle relaxants, analgesics, and NSAIDs in acute and chronic neck pain is of lower quality and less consistent than for low back pain. 
Low-level laser therapy directed at areas of tenderness, local acupuncture points, or a grid of predetermined points is a controversial approach to the treatment of neck pain. A 2009 meta-analysis suggested that this treatment may provide greater pain relief than sham therapy for both acute and chronic neck pain, but comparison to other conservative and less expensive treatment measures is needed. Although some surgical studies have proposed a role for anterior diskectomy and fusion in patients with neck pain, these studies generally have not been rigorously conducted. A systematic review suggested that there was no valid clinical evidence to support either cervical fusion or cervical disk arthroplasty in patients with neck pain without radiculopathy. Similarly, there is no evidence to support radiofrequency neurotomy or cervical facet injections for neck pain without radiculopathy. The natural history of neck pain with acute radiculopathy due to disk disease is favorable, and many patients will improve without specific therapy. Although there are no randomized trials of NSAIDs for neck pain, a course of NSAIDs, acetaminophen, or both, with or without muscle relaxants, is reasonable as initial therapy. Other nonsurgical treatments are commonly used, including opioid analgesics, oral glucocorticoids, cervical traction, and immobilization with a hard or soft cervical collar. However, there are no randomized trials that establish the effectiveness of these treatments. Soft cervical collars can be modestly helpful by limiting spontaneous and reflex neck movements that exacerbate pain. As for lumbar radiculopathy, epidural glucocorticoids appear to provide short-term symptom relief in cervical radiculopathy, but rigorous studies addressing this question have not been conducted. If cervical radiculopathy is due to bony compression from cervical spondylosis with foraminal narrowing, periodic follow-up to assess for progression is indicated, and consideration of surgical decompression is reasonable. Surgical treatment can produce rapid pain relief, although it is unclear whether long-term outcomes are improved over nonsurgical therapy. Indications for cervical disk surgery include a progressive radicular motor deficit, functionally limiting pain that fails to respond to conservative management, or spinal cord compression. Surgical treatments include anterior cervical diskectomy alone, laminectomy with diskectomy, or diskectomy with fusion. The risk of subsequent radiculopathy or myelopathy at cervical segments adjacent to a fusion is ~3% per year and 26% per decade. Although this risk is sometimes portrayed as a late complication of surgery, it may also reflect the natural history of degenerative cervical disk disease. 
Fever 
Charles A. Dinarello, Reuven Porat 
Body temperature is controlled by the hypothalamus. Neurons in both the preoptic anterior hypothalamus and the posterior hypothalamus receive two kinds of signals: one from peripheral nerves that transmit information from warmth/cold receptors in the skin and the other from the temperature of the blood bathing the region. These two types of signals are integrated by the thermoregulatory center of the hypothalamus to maintain normal temperature. In a neutral temperature environment, the human metabolic rate produces more heat than is necessary to maintain the core body temperature in the range of 36.5–37.5°C (97.7–99.5°F). A normal body temperature is ordinarily maintained despite environmental variations because the hypothalamic thermoregulatory center balances the excess heat production derived from metabolic activity in muscle and the liver with heat dissipation from the skin and lungs. According to studies of healthy individuals 18–40 years of age, the mean oral temperature is 36.8° ± 0.4°C (98.2° ± 0.7°F), with low levels at 6 a.m. and higher levels at 4–6 p.m. 
The maximal normal oral temperature is 37.2°C (98.9°F) at 6 a.m. and 37.7°C (99.9°F) at 4 p.m.; these values define the 99th percentile for healthy individuals. In light of these studies, an a.m. temperature of >37.2°C (>98.9°F) or a p.m. temperature of >37.7°C (>99.9°F) would define a fever. The normal daily temperature variation is typically 0.5°C (0.9°F). However, in some individuals recovering from a febrile illness, this daily variation can be as great as 1.0°C. During a febrile illness, the diurnal variation is usually maintained, but at higher, febrile levels. The daily temperature variation appears to be fixed in early childhood; in contrast, elderly individuals can exhibit a reduced ability to develop fever, with only a modest fever even in severe infections. Rectal temperatures are generally 0.4°C (0.7°F) higher than oral readings. The lower oral readings are probably attributable to mouth breathing, which is a factor in patients with respiratory infections and rapid breathing. Lower-esophageal temperatures closely reflect core temperature. Tympanic membrane thermometers measure radiant heat from the tympanic membrane and nearby ear canal and display that absolute value (unadjusted mode) or a value automatically calculated from the absolute reading on the basis of nomograms relating the radiant temperature measured to actual core temperatures obtained in clinical studies (adjusted mode). These measurements, although convenient, may be more variable than directly determined oral or rectal values. Studies in adults show that readings are lower with unadjusted-mode than with adjusted-mode tympanic membrane thermometers and that unadjusted-mode tympanic membrane values are 0.8°C (1.6°F) lower than rectal temperatures. In women who menstruate, the a.m. temperature is generally lower in the 2 weeks before ovulation; it then rises by ∼0.6°C (1°F) with ovulation and remains at that level until menses occur. 
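The time-of-day thresholds above (a.m. >37.2°C, p.m. >37.7°C, with rectal readings running ~0.4°C above oral) reduce to a simple decision rule. A minimal sketch in Python; the function name is illustrative, not from the text, and treating all readings before noon as "a.m." is a simplifying assumption (the cited studies sampled at 6 a.m. and 4 p.m.):

```python
def is_fever(temp_c: float, hour: int, site: str = "oral") -> bool:
    """Classify a temperature as fever using the chapter's cutoffs:
    >37.2 deg C in the a.m., >37.7 deg C in the p.m. (the 99th percentile
    for healthy adults). Rectal readings are ~0.4 deg C higher than oral,
    so they are normalized to an oral-equivalent value first.
    Simplifying assumption: hour < 12 counts as a.m."""
    if site == "rectal":
        temp_c -= 0.4  # rectal runs ~0.4 deg C (0.7 deg F) above oral
    threshold = 37.2 if hour < 12 else 37.7
    return temp_c > threshold

# A 6 a.m. oral reading of 37.5 deg C exceeds the morning cutoff:
print(is_fever(37.5, hour=6))   # True
# The same reading at 4 p.m. falls within the normal diurnal peak:
print(is_fever(37.5, hour=16))  # False
```

Note how the diurnal variation changes the verdict for an identical reading, which is the point the text makes about defining fever relative to the time of measurement.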
Body temperature can be elevated in the postprandial state. Pregnancy and endocrinologic dysfunction also affect body temperature. Fever is an elevation of body temperature that exceeds the normal daily variation and occurs in conjunction with an increase in the hypothalamic set point (e.g., from 37°C to 39°C). This shift of the set point from “normothermic” to febrile levels very much resembles the resetting of the home thermostat to a higher level in order to raise the ambient temperature in a room. Once the hypothalamic set point is raised, neurons in the vasomotor center are activated and vasoconstriction commences. The individual first notices vasoconstriction in the hands and feet. Shunting of blood away from the periphery to the internal organs essentially decreases heat loss from the skin, and the person feels cold. For most fevers, body temperature increases by 1–2°C. Shivering, which increases heat production from the muscles, may begin at this time; however, shivering is not required if heat conservation mechanisms raise blood temperature sufficiently. Nonshivering heat production from the liver also contributes to increasing core temperature. Behavioral adjustments (e.g., putting on more clothing or bedding) help raise body temperature by decreasing heat loss. The processes of heat conservation (vasoconstriction) and heat production (shivering and increased nonshivering thermogenesis) continue until the temperature of the blood bathing the hypothalamic neurons matches the new thermostat setting. Once that point is reached, the hypothalamus maintains the temperature at the febrile level by the same mechanisms of heat balance that function in the afebrile state. When the hypothalamic set point is again reset downward (in response to either a reduction in the concentration of pyrogens or the use of antipyretics), the processes of heat loss through vasodilation and sweating are initiated. 
Loss of heat by sweating and vasodilation continues until the blood temperature at the hypothalamic level matches the lower setting. Behavioral changes (e.g., removal of clothing) facilitate heat loss. A fever of >41.5°C (>106.7°F) is called hyperpyrexia. This extraordinarily high fever can develop in patients with severe infections but most commonly occurs in patients with central nervous system (CNS) hemorrhages. In the preantibiotic era, fever due to a variety of infectious diseases rarely exceeded 106°F, and there has been speculation that this natural "thermal ceiling" is mediated by neuropeptides functioning as central antipyretics. In rare cases, the hypothalamic set point is elevated as a result of local trauma, hemorrhage, tumor, or intrinsic hypothalamic malfunction. The term hypothalamic fever is sometimes used to describe elevated temperature caused by abnormal hypothalamic function. However, most patients with hypothalamic damage have subnormal, not supranormal, body temperatures.

PART 2 Cardinal Manifestations and Presentation of Diseases

Although most patients with elevated body temperature have fever, there are circumstances in which elevated temperature represents not fever but hyperthermia (heat stroke). Hyperthermia is characterized by an uncontrolled increase in body temperature that exceeds the body's ability to lose heat. The setting of the hypothalamic thermoregulatory center is unchanged. In contrast to fever in infections, hyperthermia does not involve pyrogenic molecules. Exogenous heat exposure and endogenous heat production are two mechanisms by which hyperthermia can result in dangerously high internal temperatures. Excessive heat production can easily cause hyperthermia despite physiologic and behavioral control of body temperature. For example, work or exercise in hot environments can produce heat faster than peripheral mechanisms can lose it. For a detailed discussion of hyperthermia, see Chap. 479e. 
It is important to distinguish between fever and hyperthermia since hyperthermia can be rapidly fatal and characteristically does not respond to antipyretics. In an emergency situation, however, making this distinction can be difficult. For example, in systemic sepsis, fever (hyperpyrexia) can be rapid in onset, and temperatures can exceed 40.5°C (104.9°F). Hyperthermia is often diagnosed on the basis of the events immediately preceding the elevation of core temperature— e.g., heat exposure or treatment with drugs that interfere with thermoregulation. In patients with heat stroke syndromes and in those taking drugs that block sweating, the skin is hot but dry, whereas in fever the skin can be cold as a consequence of vasoconstriction. Antipyretics do not reduce the elevated temperature in hyperthermia, whereas in fever—and even in hyperpyrexia—adequate doses of either aspirin or acetaminophen usually result in some decrease in body temperature. The term pyrogen (Greek pyro, “fire”) is used to describe any substance that causes fever. Exogenous pyrogens are derived from outside the patient; most are microbial products, microbial toxins, or whole microorganisms (including viruses). The classic example of an exogenous pyrogen is the lipopolysaccharide (endotoxin) produced by all gram-negative bacteria. Pyrogenic products of gram-positive organisms include the enterotoxins of Staphylococcus aureus and the groups A and B streptococcal toxins, also called superantigens. One staphylococcal toxin of clinical importance is that associated with isolates of S. aureus from patients with toxic shock syndrome. These products of staphylococci and streptococci cause fever in experimental animals when injected intravenously at concentrations of 1–10 μg/kg. Endotoxin is a highly pyrogenic molecule in humans: when injected intravenously into volunteers, a dose of 2–3 ng/kg produces fever, leukocytosis, acute-phase proteins, and generalized symptoms of malaise. 
Cytokines are small proteins (molecular mass, 10,000–20,000 Da) that regulate immune, inflammatory, and hematopoietic processes. For example, the leukocytosis with absolute neutrophilia seen in several infections is attributable to the cytokines interleukin (IL) 1 and IL-6. Some cytokines also cause fever; formerly referred to as endogenous pyrogens, they are now called pyrogenic cytokines. The pyrogenic cytokines include IL-1, IL-6, tumor necrosis factor (TNF), and ciliary neurotrophic factor, a member of the IL-6 family. Interferons (IFNs), particularly IFN-α, also are pyrogenic cytokines; fever is a prominent side effect of IFN-α used in the treatment of hepatitis. Each pyrogenic cytokine is encoded by a separate gene, and each has been shown to cause fever in laboratory animals and in humans. When injected into humans at low doses (10–100 ng/kg), IL-1 and TNF produce fever; in contrast, for IL-6, a dose of 1–10 μg/kg is required for fever production. A wide spectrum of bacterial and fungal products induce the synthesis and release of pyrogenic cytokines. However, fever can be a manifestation of disease in the absence of microbial infection. For example, inflammatory processes, trauma, tissue necrosis, and antigen-antibody complexes induce the production of IL-1, TNF, and/or IL-6; individually or in combination, these cytokines trigger the hypothalamus to raise the set point to febrile levels. During fever, levels of prostaglandin E2 (PGE2) are elevated in hypothalamic tissue and the third cerebral ventricle. The concentrations of PGE2 are highest near the circumventricular vascular organs (organum vasculosum of lamina terminalis)—networks of enlarged capillaries surrounding the hypothalamic regulatory centers. Destruction of these organs reduces the ability of pyrogens to produce fever. Most studies in animals have failed to show, however, that pyrogenic cytokines pass from the circulation into the brain itself. 
Thus, it appears that both exogenous pyrogens and pyrogenic cytokines interact with the endothelium of these capillaries and that this interaction is the first step in initiating fever—i.e., in raising the set point to febrile levels. The key events in the production of fever are illustrated in Fig. 23-1. Myeloid and endothelial cells are the primary cell types that produce pyrogenic cytokines. Pyrogenic cytokines such as IL-1, IL-6, and TNF are released from these cells and enter the systemic circulation. Although these circulating cytokines lead to fever by inducing the synthesis of PGE2, they also induce PGE2 in peripheral tissues. The increase in PGE2 in the periphery accounts for the nonspecific myalgias and arthralgias that often accompany fever. It is thought that some systemic PGE2 escapes destruction by the lung and gains access to the hypothalamus via the internal carotid. However, it is the elevation of PGE2 in the brain that starts the process of raising the hypothalamic set point for core temperature. There are four receptors for PGE2, and each signals the cell in different ways. Of the four receptors, the third (EP-3) is essential for fever: when the gene for this receptor is deleted in mice, no fever follows the injection of IL-1 or endotoxin. Deletion of the other PGE2 receptor genes leaves the fever mechanism intact. Although PGE2 is essential for fever, it is not a neurotransmitter. Rather, the release of PGE2 from the brain side of the hypothalamic endothelium triggers the PGE2 receptor on glial cells, and this stimulation results in the rapid release of cyclic adenosine 5′-monophosphate (cAMP), which is a neurotransmitter. As shown in Fig. 23-1, the release of cAMP from glial cells activates neuronal endings from the thermoregulatory center that extend into the area. The elevation of cAMP is thought to account for changes in the hypothalamic set point either directly or indirectly (by inducing the release of neurotransmitters). 
Distinct receptors for microbial products are located on the hypothalamic endothelium. These receptors are called Toll-like receptors and are similar in many ways to IL-1 receptors. IL-1 receptors and Toll-like receptors share the same signal-transducing mechanism. Thus, the direct activation of Toll-like receptors or IL-1 receptors results in PGE2 production and fever.

[Figure 23-1 Chronology of events required for the induction of fever: infection, microbial toxins, mediators of inflammation, and immune reactions stimulate monocytes/macrophages, endothelial cells, and other cells to release the pyrogenic cytokines IL-1, IL-6, TNF, and IFN into the circulation; at the hypothalamic endothelium, these cytokines (and microbial toxins directly) induce PGE2 and cyclic AMP, elevating the thermoregulatory set point and triggering heat conservation and heat production, i.e., fever. AMP, adenosine 5′-monophosphate; IFN, interferon; IL, interleukin; PGE2, prostaglandin E2; TNF, tumor necrosis factor.]

Cytokines produced in the brain may account for the hyperpyrexia of CNS hemorrhage, trauma, or infection. Viral infections of the CNS induce microglial and possibly neuronal production of IL-1, TNF, and IL-6. In experimental animals, the concentration of a cytokine required to cause fever is several orders of magnitude lower with direct injection into the brain substance or brain ventricles than with systemic injection. Therefore, cytokines produced in the CNS can raise the hypothalamic set point, bypassing the circumventricular organs.

APPROACH TO THE PATIENT: The chronology of events preceding fever, including exposure to other infected individuals or to vectors of disease, should be ascertained. Electronic devices for measuring oral, tympanic membrane, or rectal temperatures are reliable, but the same site should be used consistently to monitor a febrile disease. 
Moreover, physicians should be aware that newborns, elderly patients, patients with chronic liver or renal failure, and patients taking glucocorticoids or being treated with an anticytokine may have active infection in the absence of fever due to a blunted febrile response. The workup should include a complete blood count; a differential count should be performed manually or with an instrument sensitive to the identification of juvenile or band forms, toxic granulations, and Döhle bodies, which are suggestive of bacterial infection. Neutropenia may be present with some viral infections. Measurement of circulating cytokines in patients with fever is not helpful since levels of cytokines such as IL-1 and TNF in the circulation often are below the detection limit of the assay or do not coincide with fever. However, in patients with low-grade fevers or possible disease, the most valuable measurements are the C-reactive protein level and the erythrocyte sedimentation rate. These markers of inflammatory processes are particularly helpful in detecting occult disease. Measurement of circulating IL-6 is useful because IL-6 induces C-reactive protein. Acute-phase reactants are discussed in Chap. 325. Patients receiving long-term treatment with anticytokine-based regimens are at a disadvantage because of lowered host defense against infection. Even when the results of tests for latent Mycobacterium tuberculosis infection are negative, active tuberculosis can develop in patients receiving anti-TNF therapy. With the increasing use of anticytokines to reduce the activity of IL-1, IL-6, IL-12, or TNF in patients with Crohn’s disease, rheumatoid arthritis, or psoriasis, the possibility that these therapies blunt the febrile response must be kept in mind. The blocking of cytokine activity has the distinct clinical drawback of lowering the level of host defenses against both routine bacterial and opportunistic infections. 
The opportunistic infections reported in patients treated with agents that neutralize TNF-α are similar to those reported in the HIV-1-infected population (e.g., a new infection with or reactivation of Mycobacterium tuberculosis, with dissemination). In nearly all reported cases of infection associated with anticytokine therapy, fever is among the presenting signs. However, the extent to which the febrile response is blunted in these patients remains unknown. A similar situation is seen in patients receiving high-dose glucocorticoid therapy or anti-inflammatory agents such as ibuprofen. Therefore, low-grade fever is of considerable concern in patients receiving anticytokine therapies. The physician should conduct an early and rigorous diagnostic evaluation in these patients.

Most fevers are associated with self-limited infections, such as common viral diseases. The use of antipyretics is not contraindicated in these infections: no significant clinical evidence indicates either that antipyretics delay the resolution of viral or bacterial infections or that fever facilitates recovery from infection or acts as an adjuvant to the immune system. In short, treatment of fever and its symptoms with routine antipyretics does no harm and does not slow the resolution of common viral and bacterial infections. However, in bacterial infections, the withholding of antipyretic therapy can be helpful in evaluating the effectiveness of a particular antibiotic, especially in the absence of positive cultures of the infecting organism, and the routine use of antipyretics can mask an inadequately treated bacterial infection. Withholding antipyretics in some cases may facilitate the diagnosis of an unusual febrile disease. Temperature-pulse dissociation (relative bradycardia) occurs in typhoid fever, brucellosis, leptospirosis, some drug-induced fevers, and factitious fever. 
As stated earlier, in newborns, elderly patients, patients with chronic liver or kidney failure, and patients taking glucocorticoids, fever may not be present despite infection. Hypothermia can develop in patients with septic shock. Some infections have characteristic patterns in which febrile episodes are separated by intervals of normal temperature. For example, Plasmodium vivax causes fever every third day, whereas fever occurs every fourth day with P. malariae. Another relapsing fever is related to Borrelia infection, with days of fever followed by a several-day afebrile period and then a relapse into additional days of fever. In the Pel-Ebstein pattern, fever lasting 3–10 days is followed by afebrile periods of 3–10 days; this pattern can be classic for Hodgkin's disease and other lymphomas. In cyclic neutropenia, fevers occur every 21 days and accompany the neutropenia. There is no periodicity of fever in patients with familial Mediterranean fever. However, these patterns have limited or no diagnostic value compared with specific and rapid laboratory tests. Recurrent fever is documented at some point in most autoimmune diseases and nearly all autoinflammatory diseases. Although fever can be a manifestation of autoimmune diseases, recurrent fevers are characteristic of autoinflammatory diseases (Table 23-1), including adult and juvenile Still's disease, familial Mediterranean fever, and hyper-IgD syndrome. In addition to recurrent fevers, neutrophilia and serosal inflammation characterize autoinflammatory diseases. The fevers associated with these illnesses are dramatically reduced by blocking of IL-1β activity. Anticytokines therefore reduce fever in autoimmune and autoinflammatory diseases. Although fevers in autoinflammatory diseases are mediated by IL-1β, patients also respond to antipyretics. 
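The classic periodicities named above (tertian fever of P. vivax, quartan fever of P. malariae, the 21-day cycle of cyclic neutropenia) can be expressed as intervals between the onsets of successive febrile episodes. A toy lookup-table sketch, with the same caveat the chapter itself gives, that such patterns have limited diagnostic value; the dictionary and function names are illustrative, not a clinical tool. Note that "every third day" by the traditional inclusive count (fever on days 1 and 3) means onsets 2 days apart:

```python
# Classic fever periodicities from the text, keyed by the number of
# days between the onsets of successive febrile episodes.
# "Every third day" (days 1 and 3) = 2-day interval; "every fourth
# day" (days 1 and 4) = 3-day interval.
CLASSIC_PATTERNS = {
    2: "tertian fever (every third day, e.g., Plasmodium vivax)",
    3: "quartan fever (every fourth day, e.g., P. malariae)",
    21: "cyclic neutropenia (fever every 21 days)",
}

def classify_interval(days_between_onsets: int) -> str:
    """Map an inter-episode interval to a classic pattern, if any."""
    return CLASSIC_PATTERNS.get(days_between_onsets,
                                "no classic periodicity matched")

print(classify_interval(2))   # tertian fever (every third day, ...)
```

Irregular relapsing patterns (Borrelia, Pel-Ebstein) do not fit a single fixed interval and are deliberately left out of the lookup.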
The reduction of fever by lowering of the elevated hypothalamic set point is a direct function of reduction of the PGE2 level in the thermoregulatory center. The synthesis of PGE2 depends on the constitutively expressed enzyme cyclooxygenase. The substrate for cyclooxygenase is arachidonic acid released from the cell membrane, and this release is the rate-limiting step in the synthesis of PGE2. Therefore, inhibitors of cyclooxygenase are potent antipyretics. The antipyretic potency of various drugs is directly correlated with the inhibition of brain cyclooxygenase. Acetaminophen is a poor cyclooxygenase inhibitor in peripheral tissue and lacks noteworthy anti-inflammatory activity; in the brain, however, acetaminophen is oxidized by the cytochrome P450 system, and the oxidized form inhibits cyclooxygenase activity. Moreover, in the brain, the inhibition of another enzyme, COX-3, by acetaminophen may account for the antipyretic effect of this agent. However, COX-3 is not found outside the CNS. Oral aspirin and acetaminophen are equally effective in reducing fever in humans. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen and specific inhibitors of COX-2 also are excellent antipyretics. Chronic, high-dose therapy with antipyretics such as aspirin or any NSAID does not reduce normal core body temperature. Thus, PGE2 appears to play no role in normal thermoregulation. As effective antipyretics, glucocorticoids act at two levels. First, similar to the cyclooxygenase inhibitors, glucocorticoids reduce PGE2 synthesis by inhibiting the activity of phospholipase A2, which is needed to release arachidonic acid from the cell membrane. Second, glucocorticoids block the transcription of the mRNA for the pyrogenic cytokines. Limited experimental evidence indicates that ibuprofen and COX-2 inhibitors reduce IL-1-induced IL-6 production, an effect that may contribute to the antipyretic activity of NSAIDs. 
The objectives in treating fever are first to reduce the elevated hypothalamic set point and second to facilitate heat loss. Reducing fever with antipyretics also reduces systemic symptoms of headache, myalgias, and arthralgias. Oral aspirin and NSAIDs effectively reduce fever but can adversely affect platelets and the gastrointestinal tract. Therefore, acetaminophen is preferred as an antipyretic. In children, acetaminophen or oral ibuprofen must be used because aspirin increases the risk of Reye's syndrome. If the patient cannot take oral antipyretics, parenteral preparations of NSAIDs and rectal suppositories of various antipyretics can be used. Treatment of fever in some patients is highly recommended. Fever increases the demand for oxygen (i.e., for every increase of 1°C over 37°C, there is a 13% increase in oxygen consumption) and can aggravate the condition of patients with preexisting impairment of cardiac, pulmonary, or CNS function. Children with a history of febrile or nonfebrile seizure should be aggressively treated to reduce fever. However, it is unclear what triggers the febrile seizure, and there is no correlation between absolute temperature elevation and onset of a febrile seizure in susceptible children. In hyperpyrexia, the use of cooling blankets facilitates the reduction of temperature; however, cooling blankets should not be used without oral antipyretics. In hyperpyretic patients with CNS disease or trauma (CNS bleeding), reducing core temperature mitigates the detrimental effects of high temperature on the brain. For a discussion of treatment for hyperthermia, see Chap. 479e.

Fever and Rash
Elaine T. Kaye, Kenneth M. Kaye

The acutely ill patient with fever and rash may present a diagnostic challenge for physicians. However, the distinctive appearance of an eruption in concert with a clinical syndrome can facilitate a prompt diagnosis and the institution of life-saving therapy or critical infection-control interventions. Representative images of many of the rashes discussed in this chapter are included in Chap. 25e.

APPROACH TO THE PATIENT: A thorough history of patients with fever and rash includes the following relevant information: immune status, medications taken within the previous month, specific travel history, immunization status, exposure to domestic pets and other animals, history of animal (including arthropod) bites, recent dietary exposures, existence of cardiac abnormalities, presence of prosthetic material, recent exposure to ill individuals, and exposure to sexually transmitted diseases. The history should also include the site of onset of the rash and its direction and rate of spread.

A thorough physical examination entails close attention to the rash, with an assessment and precise definition of its salient features. First, it is critical to determine what type of lesions make up the eruption. Macules are flat lesions defined by an area of changed color (i.e., a blanchable erythema). Papules are raised, solid lesions <5 mm in diameter; plaques are lesions >5 mm in diameter with a flat, plateau-like surface; and nodules are lesions >5 mm in diameter with a more rounded configuration. Wheals (urticaria, hives) are papules or plaques that are pale pink and may appear annular (ringlike) as they enlarge; classic (nonvasculitic) wheals are transient, lasting only 24 h in any defined area. Vesicles (<5 mm) and bullae (>5 mm) are circumscribed, elevated lesions containing fluid. Pustules are raised lesions containing purulent exudate; vesicular processes such as varicella or herpes simplex may evolve to pustules. Nonpalpable purpura is a flat lesion that is due to bleeding into the skin. If <3 mm in diameter, the purpuric lesions are termed petechiae; if >3 mm, they are termed ecchymoses. Palpable purpura is a raised lesion that is due to inflammation of the vessel wall (vasculitis) with subsequent hemorrhage. An ulcer is a defect in the skin extending at least into the upper layer of the dermis, and an eschar (tâche noire) is a necrotic lesion covered with a black crust. Other pertinent features of rashes include their configuration (i.e., annular or target), the arrangement of their lesions, and their distribution (i.e., central or peripheral). For further discussion, see Chaps. 70, 72, and 147.

This chapter reviews rashes that reflect systemic disease, but it does not include localized skin eruptions (i.e., cellulitis, impetigo) that may also be associated with fever (Chap. 156). The chapter is not intended to be all-inclusive, but it covers the most important and most common diseases associated with fever and rash. Rashes are classified herein on the basis of lesion morphology and distribution. For practical purposes, this classification system is based on the most typical disease presentations. However, morphology may vary as rashes evolve, and the presentation of diseases with rashes is subject to many variations (Chap. 72). For instance, the classic petechial rash of Rocky Mountain spotted fever (Chap. 211) may initially consist of blanchable erythematous macules distributed peripherally; at times, however, the rash associated with this disease may not be predominantly acral, or no rash may develop at all.

Diseases with fever and rash may be classified by type of eruption: centrally distributed maculopapular, peripheral, confluent desquamative erythematous, vesiculobullous, urticaria-like, nodular, purpuric, ulcerated, or with eschars. Diseases are listed by these categories in Table 24-1, and many are highlighted in the text. However, for a more detailed discussion of each disease associated with a rash, the reader is referred to the chapter dealing with that specific disease. (Reference chapters are cited in the text and listed in Table 24-1.) 
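The size-based lesion definitions in the approach above reduce to a handful of cutoffs. A minimal sketch in Python; the function names are illustrative, and treating a lesion of exactly 5 mm (or 3 mm for purpura) as the larger class is an assumption made here, since the text uses strict < and > bounds:

```python
def classify_raised_lesion(diameter_mm: float, fluid_filled: bool = False) -> str:
    """Apply the text's size cutoffs: raised solid lesions <5 mm are
    papules and larger ones are plaques (flat-topped) or nodules
    (rounded); fluid-filled lesions <5 mm are vesicles, larger ones
    bullae. Boundary values (exactly 5 mm) are assigned to the larger
    class by assumption."""
    if fluid_filled:
        return "vesicle" if diameter_mm < 5 else "bulla"
    return "papule" if diameter_mm < 5 else "plaque or nodule"

def classify_purpura(diameter_mm: float) -> str:
    """Nonpalpable purpura: <3 mm = petechiae, larger = ecchymoses."""
    return "petechiae" if diameter_mm < 3 else "ecchymoses"

print(classify_raised_lesion(3))                     # papule
print(classify_raised_lesion(8, fluid_filled=True))  # bulla
print(classify_purpura(2))                           # petechiae
```

The sketch captures only the size axis; distinguishing a plaque from a nodule (surface contour) or palpable from nonpalpable purpura (vasculitis) requires the physical findings the text describes, not a measurement.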
Centrally distributed rashes, in which lesions are primarily truncal, are the most common type of eruption. The rash of rubeola (measles) starts at the hairline 2–3 days into the illness and moves down the body, typically sparing the palms and soles (Chap. 229). It begins as discrete erythematous lesions, which become confluent as the rash spreads. Koplik's spots (1- to 2-mm white or bluish lesions with an erythematous halo on the buccal mucosa) are pathognomonic for measles and are generally seen during the first 2 days of symptoms. They should not be confused with Fordyce's spots (ectopic sebaceous glands), which have no erythematous halos and are found in the mouth of healthy individuals. Koplik's spots may briefly overlap with the measles exanthem. Rubella (German measles) also spreads from the hairline downward; unlike that of measles, however, the rash of rubella tends to clear from originally affected areas as it migrates, and it may be pruritic (Chap. 230e). Forchheimer spots (palatal petechiae) may develop but are nonspecific because they also develop in infectious mononucleosis (Chap. 218) and scarlet fever (Chap. 173). Postauricular and suboccipital adenopathy and arthritis are common among adults with rubella. Exposure of pregnant women to ill individuals should be avoided, as rubella causes severe congenital abnormalities. Numerous strains of enteroviruses (Chap. 228), primarily echoviruses and coxsackieviruses, cause nonspecific syndromes of fever and eruptions that may mimic rubella or measles. Patients with infectious mononucleosis caused by Epstein-Barr virus (Chap. 218) or with primary HIV infection (Chap. 226) may exhibit pharyngitis, lymphadenopathy, and a nonspecific maculopapular exanthem. 
The rash of erythema infectiosum (fifth disease), which is caused by human parvovirus B19, primarily affects children 3–12 years old; it develops after fever has resolved as a bright blanchable erythema on the cheeks ("slapped cheeks") with perioral pallor (Chap. 221). A more diffuse rash (often pruritic) appears the next day on the trunk and extremities and then rapidly develops into a lacy reticular eruption that may wax and wane (especially with temperature change) over 3 weeks. Adults with fifth disease often have arthritis, and fetal hydrops can develop in association with this condition in pregnant women. Exanthem subitum (roseola) is caused by human herpesvirus 6 and is most common among children <3 years of age (Chap. 219). As in erythema infectiosum, the rash usually appears after fever has subsided. It consists of 2- to 3-mm rose-pink macules and papules that coalesce only rarely, occur initially on the trunk and sometimes on the extremities (sparing the face), and fade within 2 days. Although drug reactions have many manifestations, including urticaria, exanthematous drug-induced eruptions (Chap. 74) are most common and are often difficult to distinguish from viral exanthems. Eruptions elicited by drugs are usually more intensely erythematous and pruritic than viral exanthems, but this distinction is not reliable. A history of new medications and an absence of prostration may help to distinguish a drug-related rash from an eruption of another etiology. Rashes may persist for up to 2 weeks after administration of the offending agent is discontinued. Certain populations are more prone than others to drug rashes. Of HIV-infected patients, 50–60% develop a rash in response to sulfa drugs; 90% of patients with mononucleosis due to Epstein-Barr virus develop a rash when given ampicillin. Rickettsial illnesses (Chap. 211) should be considered in the evaluation of individuals with centrally distributed maculopapular eruptions. 
The usual setting for epidemic typhus is a site of war or natural disaster in which people are exposed to body lice. Endemic typhus or leptospirosis (the latter caused by a spirochete) (Chap. 208) may be seen in urban environments where rodents proliferate.

[Table: Diseases Associated with Fever and Rash. For each entity (among them Rocky Mountain spotted fever, secondary syphilis, chikungunya fever, hand-foot-and-mouth disease, erythema multiforme, endocarditis, scarlet fever, Kawasaki disease, streptococcal and staphylococcal toxic shock syndromes, staphylococcal scalded-skin syndrome, Stevens-Johnson syndrome/toxic epidermal necrolysis, varicella, smallpox, herpes simplex virus infection, rickettsialpox, acute generalized eruptive pustulosis, disseminated Vibrio vulnificus infection, ecthyma gangrenosum, urticarial vasculitis, disseminated fungal infection, erythema nodosum, and Sweet syndrome), the table summarizes the etiology, rash description, population affected, and accompanying clinical findings, with cross-references to the relevant chapters.]
Outside the United States, other rickettsial diseases cause a spotted-fever syndrome and should be considered in residents of or travelers to endemic areas. Similarly, typhoid fever, a nonrickettsial disease caused by Salmonella typhi (Chap. 190), is usually acquired during travel outside the United States. Dengue fever, caused by a mosquito-transmitted flavivirus, occurs in tropical and subtropical regions of the world (Chap. 233).

Some centrally distributed maculopapular eruptions have distinctive features. Erythema migrans, the rash of Lyme disease (Chap. 210), typically manifests as single or multiple annular plaques. Untreated erythema migrans lesions usually fade within a month but may persist for more than a year. Southern tick-associated rash illness (STARI) (Chap. 210) has an erythema migrans–like rash but is less severe than Lyme disease and often occurs in regions where Lyme disease is not endemic. Erythema marginatum, the rash of acute rheumatic fever (Chap. 381), has a distinctive pattern of enlarging and shifting transient annular lesions.

Collagen vascular diseases may cause fever and rash. Patients with systemic lupus erythematosus (Chap. 378) typically develop a sharply defined, erythematous eruption in a butterfly distribution on the cheeks (malar rash) as well as many other skin manifestations. Still’s disease (Chap. 398) presents as an evanescent, salmon-colored rash on the trunk and proximal extremities that coincides with fever spikes.

PERIPHERAL ERUPTIONS
These rashes are alike in that they are most prominent peripherally or begin in peripheral (acral) areas before spreading centripetally. Early diagnosis and therapy are critical in Rocky Mountain spotted fever (Chap. 211) because of its grave prognosis if untreated. Lesions evolve from macular to petechial, start on the wrists and ankles, spread centripetally, and appear on the palms and soles only later in the disease. The rash of secondary syphilis (Chap.
206), which may be generalized but is prominent on the palms and soles, should be considered in the differential diagnosis of pityriasis rosea, especially in sexually active patients. Chikungunya fever (Chap. 233), which is transmitted by mosquito bite in Africa and the Indian Ocean region, is associated with a maculopapular eruption and severe polyarticular small-joint arthralgias. Hand-foot-and-mouth disease (Chap. 228), most commonly caused by coxsackievirus A16, is distinguished by tender vesicles distributed peripherally and in the mouth; outbreaks commonly occur within families. The classic target lesions of erythema multiforme appear symmetrically on the elbows, knees, palms, soles, and face. In severe cases, these lesions spread diffusely and involve mucosal surfaces. Lesions may develop on the hands and feet in endocarditis (Chap. 155).

CONFLUENT DESQUAMATIVE ERYTHEMAS
These eruptions consist of diffuse erythema frequently followed by desquamation. The eruptions caused by group A Streptococcus or Staphylococcus aureus are toxin-mediated. Scarlet fever (Chap. 173) usually follows pharyngitis; patients have a facial flush, a “strawberry” tongue, and accentuated petechiae in body folds (Pastia’s lines). Kawasaki disease (Chaps. 72 and 385) presents in the pediatric population as fissuring of the lips, a strawberry tongue, conjunctivitis, adenopathy, and sometimes cardiac abnormalities. Streptococcal toxic shock syndrome (Chap. 173) manifests with hypotension, multiorgan failure, and, often, a severe group A streptococcal infection (e.g., necrotizing fasciitis). Staphylococcal toxic shock syndrome (Chap. 172) also presents with hypotension and multiorgan failure, but usually only S. aureus colonization—not a severe S. aureus infection—is documented. Staphylococcal scalded-skin syndrome (Chap. 172) is seen primarily in children and in immunocompromised adults. Generalized erythema is often evident during the prodrome of fever and malaise; profound tenderness of the skin is distinctive.
In the exfoliative stage, the skin can be induced to form bullae with light lateral pressure (Nikolsky’s sign). In a mild form, a scarlatiniform eruption mimics scarlet fever, but the patient does not exhibit a strawberry tongue or circumoral pallor. In contrast to the staphylococcal scalded-skin syndrome, in which the cleavage plane is superficial in the epidermis, toxic epidermal necrolysis (Chap. 74), a maximal variant of Stevens-Johnson syndrome, involves sloughing of the entire epidermis, resulting in severe disease. Exfoliative erythroderma syndrome (Chaps. 72 and 74) is a serious reaction associated with systemic toxicity that is often due to eczema, psoriasis, a drug reaction, or mycosis fungoides. Drug reaction with eosinophilia and systemic symptoms (DRESS), often due to antiepileptic and antibiotic agents (Chap. 74), initially appears similar to an exanthematous drug reaction but may progress to exfoliative erythroderma; it is accompanied by multiorgan failure and has an associated mortality rate of ~10%.

VESICULOBULLOUS OR PUSTULAR ERUPTIONS
Varicella (Chap. 217) is highly contagious, often occurring in winter or spring. At any point in time, within a given region of the body, varicella lesions are in different stages of development. In immunocompromised hosts, varicella vesicles may lack the characteristic erythematous base or may appear hemorrhagic. Lesions of Pseudomonas “hot-tub” folliculitis (Chap. 189) are also pruritic and may appear similar to those of varicella. However, hot-tub folliculitis generally occurs in outbreaks after bathing in hot tubs or swimming pools, and lesions occur in regions occluded by bathing suits. Lesions of variola (smallpox) (Chap. 261e) also appear similar to those of varicella but are all at the same stage of development in a given region of the body. Variola lesions are most prominent on the face and extremities, while varicella lesions are most prominent on the trunk. Herpes simplex virus infection (Chap.
216) is characterized by hallmark grouped vesicles on an erythematous base. Primary herpes infection is accompanied by fever and toxicity, while recurrent disease is milder. Rickettsialpox (Chap. 211) is often documented in urban settings and is characterized by vesicles followed by pustules. It can be distinguished from varicella by an eschar at the site of the mouse-mite bite and the papule/plaque base of each vesicle. Acute generalized eruptive pustulosis should be considered in individuals who are acutely febrile and are taking new medications, especially anticonvulsant or antimicrobial agents (Chap. 74). Disseminated Vibrio vulnificus infection (Chap. 193) or ecthyma gangrenosum due to Pseudomonas aeruginosa (Chap. 189) should be considered in immunosuppressed individuals with sepsis and hemorrhagic bullae.

URTICARIA-LIKE ERUPTIONS
Individuals with classic urticaria (“hives”) usually have a hypersensitivity reaction without associated fever. In the presence of fever, urticaria-like eruptions are most often due to urticarial vasculitis (Chap. 385). Unlike individual lesions of classic urticaria, which last up to 24 h, these lesions may last 3–5 days. Etiologies include serum sickness (often induced by drugs such as penicillins, sulfas, salicylates, or barbiturates), connective-tissue disease (e.g., systemic lupus erythematosus or Sjögren’s syndrome), and infection (e.g., with hepatitis B virus, enteroviruses, or parasites). Malignancy, especially lymphoma, may be associated with fever and chronic urticaria (Chap. 72).

NODULAR ERUPTIONS
In immunocompromised hosts, nodular lesions often represent disseminated infection. Patients with disseminated candidiasis (often due to Candida tropicalis) may have a triad of fever, myalgias, and eruptive nodules (Chap. 240). Disseminated cryptococcosis lesions (Chap. 239) may resemble molluscum contagiosum (Chap. 220e). Necrosis of nodules should raise the suspicion of aspergillosis (Chap. 241) or mucormycosis (Chap. 242).
Erythema nodosum presents with exquisitely tender nodules on the lower extremities. Sweet syndrome (Chap. 72) should be considered in individuals with multiple nodules and plaques, often so edematous that they give the appearance of vesicles or bullae. Sweet syndrome may occur in individuals with infection, inflammatory bowel disease, or malignancy and can also be induced by drugs.

PURPURIC ERUPTIONS
Acute meningococcemia (Chap. 180) classically presents in children as a petechial eruption, but initial lesions may appear as blanchable macules or urticaria. Rocky Mountain spotted fever should be considered in the differential diagnosis of acute meningococcemia. Echovirus infection (Chap. 228) may mimic acute meningococcemia; patients should be treated as if they have bacterial sepsis because prompt differentiation of these conditions may be impossible. Large ecchymotic areas of purpura fulminans (Chaps. 180 and 325) reflect severe underlying disseminated intravascular coagulation, which may be due to infectious or noninfectious causes. The lesions of chronic meningococcemia (Chap. 180) may have a variety of morphologies, including petechial. Purpuric nodules may develop on the legs and resemble erythema nodosum but lack its exquisite tenderness. Lesions of disseminated gonococcemia (Chap. 181) are distinctive, sparse, countable hemorrhagic pustules, usually located near joints. The lesions of chronic meningococcemia and those of gonococcemia may be indistinguishable in terms of appearance and distribution. Viral hemorrhagic fever (Chaps. 233 and 234) should be considered in patients with an appropriate travel history and a petechial rash. Thrombotic thrombocytopenic purpura (Chaps. 72, 129, and 140) and hemolytic-uremic syndrome (Chaps. 140, 186, and 191) are closely related and are noninfectious causes of fever and petechiae. Cutaneous small-vessel vasculitis (leukocytoclastic vasculitis) typically manifests as palpable purpura and has a wide variety of causes (Chap. 72).
ERUPTIONS WITH ULCERS AND/OR ESCHARS
The presence of an ulcer or eschar in the setting of a more widespread eruption can provide an important diagnostic clue. For example, the presence of an eschar may suggest the diagnosis of scrub typhus or rickettsialpox (Chap. 211) in the appropriate setting. In other illnesses (e.g., anthrax) (Chap. 261e), an ulcer or eschar may be the only skin manifestation.

Chapter 25e Atlas of Rashes Associated with Fever
Kenneth M. Kaye, Elaine T. Kaye

Given the extremely broad differential diagnosis, the presentation of a patient with fever and rash often poses a thorny diagnostic challenge for even the most astute and experienced clinician. Rapid narrowing of the differential by prompt recognition of a rash’s key features can result in appropriate and sometimes life-saving therapy. This atlas presents high-quality images of a variety of rashes that have an infectious etiology and are commonly associated with fever.

Figure 25e-2 Koplik’s spots, which manifest as white or bluish lesions with an erythematous halo on the buccal mucosa, usually occur in the first 2 days of measles symptoms and may briefly overlap the measles exanthem. The presence of the erythematous halo (arrow indicates one example) differentiates Koplik’s spots from Fordyce’s spots (ectopic sebaceous glands), which occur in the mouths of healthy individuals. (Courtesy of the Centers for Disease Control and Prevention.)

Figure 25e-3 In measles, discrete erythematous lesions become confluent on the face and neck over 2–3 days as the rash spreads downward to the trunk and arms, where lesions remain discrete. (Reprinted from K Wolff, RA Johnson: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 5th ed. New York, McGraw-Hill, 2005.)

Figure 25e-1 A. Erythema leading to “slapped cheeks” appearance in erythema infectiosum (fifth disease) caused by parvovirus B19. B. Lacy reticular rash of erythema infectiosum.
(Panel A reprinted from K Wolff, RA Johnson: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-4 In rubella, an erythematous exanthem spreads from the hairline downward and clears as it spreads. (Courtesy of Stephen E. Gellis, MD; with permission.)

Figure 25e-5 Exanthem subitum (roseola) occurs most commonly in young children. A diffuse maculopapular exanthem follows resolution of fever. (Courtesy of Stephen E. Gellis, MD; with permission.)

Figure 25e-6 Erythematous macules and papules are apparent on the trunk and arm of this patient with primary HIV infection. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 5th ed. New York, McGraw-Hill, 2005.)

Figure 25e-7 This exanthematous, drug-induced eruption consists of brightly erythematous macules and papules, some of which are confluent, distributed symmetrically on the trunk and extremities. Ampicillin caused this rash. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 5th ed. New York, McGraw-Hill, 2005.)

Figure 25e-8 Erythema migrans is the early cutaneous manifestation of Lyme disease and is characterized by erythematous annular patches, often with a central erythematous focus at the tick-bite site. (Reprinted from RP Usatine et al: Color Atlas of Family Medicine, 2nd ed. New York, McGraw-Hill, 2013. Courtesy of Thomas Corson, MD.)

Figure 25e-9 Rose spots are evident as erythematous macules on the trunk of this patient with typhoid fever. (Courtesy of the Centers for Disease Control and Prevention.)

Figure 25e-10 Systemic lupus erythematosus showing prominent malar erythema and minimal scaling. Involvement of other sun-exposed sites is also common. (Reprinted from K Wolff, RA Johnson: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)
Figure 25e-11 Subacute lupus erythematosus on the upper chest, with brightly erythematous and slightly edematous coalescent papules and plaques. (Reprinted from K Wolff, RA Johnson: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-12 Chronic discoid lupus erythematosus. Violaceous, hyperpigmented, atrophic plaques, often with evidence of follicular plugging (which may result in scarring), are characteristic of this cutaneous form of lupus. (Reprinted from K Wolff, RA Johnson, AP Saavedra: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 7th ed. New York, McGraw-Hill, 2013.)

Figure 25e-13 The rash of Still’s disease typically exhibits evanescent, erythematous papules that appear at the height of fever on the trunk and proximal extremities. (Courtesy of Stephen E. Gellis, MD; with permission.)

Figure 25e-14 Impetigo is a superficial group A streptococcal or Staphylococcus aureus infection consisting of honey-colored crusts and erythematous weeping erosions. (Reprinted from K Wolff, RA Johnson: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-15 Erysipelas is a group A streptococcal infection of the superficial dermis and consists of well-demarcated, erythematous, edematous, warm plaques. (Reprinted from K Wolff, RA Johnson, AP Saavedra: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 7th ed. New York, McGraw-Hill, 2013.)

Figure 25e-16 Top: Petechial lesions of Rocky Mountain spotted fever on the lower legs and soles of a young, otherwise healthy patient. Bottom: Close-up of lesions from the same patient. (Courtesy of Lindsey Baden, MD; with permission.)

Figure 25e-17 Primary syphilis with firm, nontender chancres. (Courtesy of M. Rein and the Centers for Disease Control and Prevention.)
Figure 25e-18 Secondary syphilis, demonstrating the papulosquamous truncal eruption.

Figure 25e-19 Secondary syphilis commonly affects the palms and soles with scaling, firm, red-brown papules.

Figure 25e-20 Condylomata lata are moist, somewhat verrucous intertriginous plaques seen in secondary syphilis.

Figure 25e-21 Mucous patches on the tongue of a patient with secondary syphilis. (Courtesy of Ron Roddy; with permission.)

Figure 25e-22 Petechial lesions in a patient with atypical measles. (Courtesy of Stephen E. Gellis, MD; with permission.)

Figure 25e-23 Tender vesicles and erosions in the mouth of a patient with hand-foot-and-mouth disease. (Courtesy of Stephen E. Gellis, MD; with permission.)

Figure 25e-24 Septic emboli with hemorrhage and infarction due to acute Staphylococcus aureus endocarditis. (Courtesy of Lindsey Baden, MD; with permission.)

Figure 25e-25 Erythema multiforme is characterized by erythematous plaques with a target or iris morphology, sometimes with a vesicle in the center. It usually represents a hypersensitivity reaction to infections (especially herpes simplex virus or Mycoplasma pneumoniae) or drugs. (Reprinted from K Wolff, RA Johnson: Fitzpatrick’s Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-26 Scarlet fever exanthem. Finely punctate erythema has become confluent (scarlatiniform); accentuation of linear erythema in body folds (Pastia’s lines) is seen here. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-28 Diffuse erythema and scaling are present in this patient with psoriasis and the exfoliative erythroderma syndrome. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)
Figure 25e-27 Erythema progressing to bullae with resulting sloughing of the entire thickness of the epidermis occurs in toxic epidermal necrolysis. This reaction was due to a sulfonamide. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 5th ed. New York, McGraw-Hill, 2005.)

Figure 25e-29 This infant with staphylococcal scalded skin syndrome demonstrates generalized desquamation. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-30 Fissuring of the lips and an erythematous exanthem are evident in this patient with Kawasaki disease. (Courtesy of Stephen E. Gellis, MD; with permission.)

Figure 25e-31 Numerous varicella lesions at various stages of evolution: vesicles on an erythematous base and umbilicated vesicles, which then develop into crusting lesions. (Courtesy of the Centers for Disease Control and Prevention.)

Figure 25e-32 Lesions of disseminated zoster at different stages of evolution, including pustules and crusting, similar to varicella. Note nongrouping of lesions, in contrast to herpes simplex or zoster. (Reprinted from K Wolff, RA Johnson, AP Saavedra: Color Atlas and Synopsis of Clinical Dermatology, 7th ed. New York, McGraw-Hill, 2013.)

Figure 25e-33 Herpes zoster is seen in this patient taking prednisone. Grouped vesicles and crusted lesions are seen in the T2 dermatome on the back and arm (A) and on the right side of the chest (B). (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.)

Figure 25e-35 Ecthyma gangrenosum in a neutropenic patient with Pseudomonas aeruginosa bacteremia.
Figure 25e-34 Top: Eschar at the site of the mite bite in a patient with rickettsialpox. Middle: Papulovesicular lesions on the trunk of the same patient. Bottom: Close-up of lesions from the same patient. (Reprinted from A Krusell et al: Emerg Infect Dis 8:727, 2002.)

Figure 25e-36 Urticaria showing characteristic discrete and confluent, edematous, erythematous papules and plaques. (Reprinted from K Wolff, RA Johnson, AP Saavedra: Color Atlas and Synopsis of Clinical Dermatology, 7th ed. New York, McGraw-Hill, 2013.)

Figure 25e-37 Disseminated cryptococcal infection. A liver transplant recipient developed six cutaneous lesions similar to the one shown. Biopsy and serum antigen testing demonstrated Cryptococcus. Important features of the lesion include a benign-appearing fleshy papule with central umbilication resembling molluscum contagiosum. (Courtesy of Lindsey Baden, MD; with permission.)

Figure 25e-38 Disseminated candidiasis. Tender, erythematous, nodular lesions developed in a neutropenic patient with leukemia who was undergoing induction chemotherapy. (Courtesy of Lindsey Baden, MD; with permission.)

Figure 25e-39 Disseminated Aspergillus infection. Multiple necrotic lesions developed in this neutropenic patient undergoing hematopoietic stem cell transplantation. The lesion in the photograph is on the inner thigh and is several centimeters in diameter. Biopsy demonstrated infarction caused by Aspergillus fumigatus. (Courtesy of Lindsey Baden, MD; with permission.)

Figure 25e-40 Erythema nodosum is a panniculitis characterized by tender, deep-seated nodules and plaques usually located on the lower extremities. (Courtesy of Robert Swerlick, MD; with permission.)

Figure 25e-41 Sweet syndrome is an erythematous indurated plaque with a pseudovesicular border. (Courtesy of Robert Swerlick, MD; with permission.)

Figure 25e-42 Fulminant meningococcemia with extensive angular purpuric patches.
(Courtesy of Stephen E. Gellis, MD; with permission.) Figure 25e-43 Erythematous papular lesions are seen on the leg of this patient with chronic meningococcemia (arrow indicates a lesion). Figure 25e-45 Palpable purpuric papules on the lower leg are seen in this patient with cutaneous small-vessel hypersensitivity vasculitis. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.) CHAPTER 25e Atlas of Rashes Associated with Fever Figure 25e-44 Disseminated gonococcemia in the skin is seen as hemorrhagic papules and pustules with purpuric centers in a centrifu-gal distribution. (Courtesy of Daniel M. Musher, MD; with permission.) Figure 25e-46 The thumb of a patient with a necrotic ulcer of tula-remia. (Courtesy of the Centers for Disease Control and Prevention.) Figure 25e-47 This 50-year-old man developed high fever and massive inguinal lymphadenopathy after a small ulcer healed on his foot. Tularemia was diagnosed. (Courtesy of Lindsey Baden, MD; with permission.) PART 2 Cardinal Manifestations and Presentation of Diseases Figure 25e-48 This painful trypanosomal chancre developed at the site of a tsetse fly bite on the dorsum of the foot. Trypanosoma brucei was diagnosed from an aspirate of the ulcer. (Courtesy of Edward T. Ryan, MD. N Engl J Med 346:2069, 2002; with permission.) Figure 25e-49 Drug reaction with eosinophilia and systemic symptoms/drug-induced hypersensitivity syndrome (DRESS/ DIHS). This patient developed a progressive eruption exhibiting early desquamation after taking phenobarbital. There was also associated lymphadenopathy and hepatomegaly. (Courtesy of Peter Lio, MD; with permission.) Figure 25e-50 Many small, nonfollicular pustules are seen against a background of erythema in this patient with acute generalized erup-tive pustulosis (AGEP). The rash began in body folds and progressed to cover the trunk and face. 
(Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.) CHAPTER 25e Atlas of Rashes Associated with Fever Figure 25e-51 Smallpox is shown with many pustules on the face, becoming confluent (A), and on the trunk (B). Pustules are all in the same stage of development. C. Crusting, healing lesions are noted on the trunk, arms, and hands. (Reprinted from K Wolff, RA Johnson: Color Atlas and Synopsis of Clinical Dermatology, 6th ed. New York, McGraw-Hill, 2009.) CHAPTER 26 Fever of Unknown Origin fever of unknown origin Chantal P. Bleeker-Rovers, Jos W. M. van der Meer DEFINITION Clinicians commonly refer to any febrile illness without an initially obvious etiology as fever of unknown origin (FUO). Most febrile ill-nesses either resolve before a diagnosis can be made or develop distin-26 guishing characteristics that lead to a diagnosis. The term FUO should be reserved for prolonged febrile illnesses without an established etiology despite intensive evaluation and diagnostic testing. This chapter focuses on classic FUO in the adult patient. FUO was originally defined by Petersdorf and Beeson in 1961 as an illness of >3 weeks’ duration with fever of ≥38.3°C (101°F) on two occasions and an uncertain diagnosis despite 1 week of inpatient evaluation. Nowadays, most patients with FUO are hospitalized if their clinical condition requires it, but not for diagnostic purposes only; thus the in-hospital evaluation requirement has been eliminated from the definition. The definition of FUO has been further modified by the exclusion of immunocompromised patients, whose workup requires an entirely different diagnostic and therapeutic approach. 
For the optimal comparison of patients with FUO in different geographic areas, it has been proposed that the quantitative criterion (diagnosis uncertain after 1 week of evaluation) be changed to a qualitative criterion that requires the performance of a specific list of investigations. Accordingly, FUO is now defined as:

1. Fever ≥38.3°C (101°F) on at least two occasions
2. Illness duration of ≥3 weeks
3. No known immunocompromised state
4. Diagnosis that remains uncertain after a thorough history-taking, physical examination, and the following obligatory investigations: determination of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) level; platelet count; leukocyte count and differential; measurement of levels of hemoglobin, electrolytes, creatinine, total protein, alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, lactate dehydrogenase, creatine kinase, ferritin, antinuclear antibodies, and rheumatoid factor; protein electrophoresis; urinalysis; blood cultures (n = 3); urine culture; chest x-ray; abdominal ultrasonography; and tuberculin skin test (TST).

The range of FUO etiologies has evolved over time as a result of changes in the spectrum of diseases causing FUO, the widespread use of antibiotics, and the availability of new diagnostic techniques. The proportion of cases caused by intraabdominal abscesses and tumors, for example, has decreased because of earlier detection by CT and ultrasound. In addition, infective endocarditis is a less frequent cause because blood culture and echocardiographic techniques have improved. Conversely, some diagnoses, such as acute HIV infection, were unknown four decades ago. Table 26-1 summarizes the findings of several large studies on FUO conducted over the past 20 years.
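The four defining criteria above form a simple conjunctive rule. As a purely illustrative sketch (not a clinical tool; the function name, argument names, and test labels are assumptions for demonstration only), the definition can be encoded as:

```python
# Illustrative sketch of the four FUO criteria listed above.
# Names and structure are assumptions for demonstration, not a clinical rule.

OBLIGATORY_TESTS = {
    "ESR/CRP", "platelet count", "leukocyte count and differential",
    "hemoglobin", "electrolytes", "creatinine", "total protein",
    "alkaline phosphatase", "ALT", "AST", "LDH", "creatine kinase",
    "ferritin", "ANA", "rheumatoid factor", "protein electrophoresis",
    "urinalysis", "blood cultures (n=3)", "urine culture", "chest x-ray",
    "abdominal ultrasonography", "tuberculin skin test",
}

def meets_fuo_criteria(max_temps_c, illness_days, immunocompromised,
                       completed_tests, diagnosis_established):
    """Return True only if all four qualitative FUO criteria hold."""
    febrile_episodes = sum(1 for t in max_temps_c if t >= 38.3)
    return (febrile_episodes >= 2                         # criterion 1
            and illness_days >= 21                        # criterion 2 (>=3 weeks)
            and not immunocompromised                     # criterion 3
            and OBLIGATORY_TESTS <= set(completed_tests)  # criterion 4: workup done...
            and not diagnosis_established)                # ...yet diagnosis uncertain
```

For example, a 4-week illness with febrile episodes of 38.6°C and 39.1°C in a non-immunocompromised patient whose full obligatory workup is unrevealing would satisfy the definition.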
In general, infection accounts for about 20–25% of cases of FUO in Western countries; next in frequency are neoplasms and noninfectious inflammatory diseases (NIIDs), the latter including “collagen or rheumatic diseases,” vasculitis syndromes, and granulomatous disorders. In geographic areas outside the West, infections are a much more common cause of FUO (43% vs 22%), while the proportions of cases due to NIIDs and neoplasms are similar. Up to 50% of cases caused by infections in patients with FUO outside Western nations are due to tuberculosis, which is a less common cause in the United States and Western Europe. The number of FUO patients diagnosed with NIIDs probably will not decrease in the near future, as fever may precede more typical manifestations or serologic evidence by months in these diseases. Moreover, many NIIDs can be diagnosed only after prolonged observation and exclusion of other diseases. In the West, the percentage of undiagnosed cases of FUO has increased in more recent studies. An important factor contributing to the seemingly high diagnostic failure rate is that a diagnosis is more often being established before 3 weeks have elapsed, given that patients with fever tend to seek medical advice earlier and better diagnostic techniques, such as CT and MRI, are widely available; therefore, only the cases that are more difficult to diagnose continue to meet the criteria for FUO. Furthermore, most patients who have FUO without a diagnosis currently do well, and thus a less aggressive diagnostic approach may be used in clinically stable patients once diseases with immediate therapeutic or prognostic consequences have been ruled out to a reasonable extent. This factor may be especially relevant to patients with recurrent fever who are asymptomatic in between febrile episodes. 
In patients with recurrent fever (defined as repeated episodes of fever interspersed with fever-free intervals of at least 2 weeks and apparent remission of the underlying disease), the chance of attaining an etiologic diagnosis is <50%.

The differential diagnosis for FUO is extensive, but it is important to remember that FUO is far more often caused by an atypical presentation of a rather common disease than by a very rare disease. Table 26-2 presents an overview of possible causes of FUO. Atypical presentations of endocarditis, diverticulitis, vertebral osteomyelitis, and extrapulmonary tuberculosis are the more common infectious disease diagnoses. Q fever and Whipple's disease are quite rare but should always be kept in mind as a cause of FUO since the presenting symptoms can be nonspecific.

Table 26-2 Possible causes of FUO^a
- Bacterial, nonspecific: Abdominal abscess, adnexitis, apical granuloma, appendicitis, cholangitis, cholecystitis, diverticulitis, endocarditis, endometritis, epidural abscess, infected vascular catheter, infected joint prosthesis, infected vascular prosthesis, infectious arthritis, infective myonecrosis, intracranial abscess, liver abscess, lung abscess, malakoplakia, mastoiditis, mediastinitis, mycotic aneurysm, osteomyelitis, pelvic inflammatory disease, prostatitis, pyelonephritis, pylephlebitis, renal abscess, septic phlebitis, sinusitis, spondylodiscitis, xanthogranulomatous urinary tract infection
- Bacterial, specific: Actinomycosis, atypical mycobacterial infection, bartonellosis, brucellosis, Campylobacter infection, Chlamydia pneumoniae infection, chronic meningococcemia, ehrlichiosis, gonococcemia, legionellosis, leptospirosis, listeriosis, louse-borne relapsing fever (Borrelia recurrentis), Lyme disease, melioidosis (Pseudomonas pseudomallei), Mycoplasma infection, nocardiosis, psittacosis, Q fever (Coxiella burnetii), rickettsiosis, Spirillum minor infection, Streptobacillus moniliformis infection, syphilis, tick-borne relapsing fever (Borrelia duttonii), tuberculosis, tularemia, typhoid fever and other salmonelloses, Whipple's disease (Tropheryma whipplei), yersiniosis
- Fungal: Aspergillosis, blastomycosis, candidiasis, coccidioidomycosis, cryptococcosis, histoplasmosis, Malassezia furfur infection, paracoccidioidomycosis, Pneumocystis jirovecii pneumonia, sporotrichosis, zygomycosis
- Parasitic: Amebiasis, babesiosis, echinococcosis, fascioliasis, malaria, schistosomiasis, strongyloidiasis, toxocariasis, toxoplasmosis, trichinellosis, trypanosomiasis, visceral leishmaniasis
- Viral: Colorado tick fever, coxsackievirus infection, cytomegalovirus infection, dengue, Epstein-Barr virus infection, hantavirus infection, hepatitis (A, B, C, D, E), herpes simplex, HIV infection, human herpesvirus 6 infection, parvovirus infection, West Nile virus infection
- Systemic rheumatic and autoimmune diseases: Ankylosing spondylitis, antiphospholipid syndrome, autoimmune hemolytic anemia, autoimmune hepatitis, Behçet's disease, cryoglobulinemia, dermatomyositis, Felty syndrome, gout, mixed connective-tissue disease, polymyositis, pseudogout, reactive arthritis, relapsing polychondritis, rheumatic fever, rheumatoid arthritis, Sjögren's syndrome, systemic lupus erythematosus, Vogt-Koyanagi-Harada syndrome
- Vasculitis: Allergic vasculitis, Churg-Strauss syndrome, giant cell vasculitis/polymyalgia rheumatica, granulomatosis with polyangiitis, hypersensitivity vasculitis, Kawasaki's disease, polyarteritis nodosa, Takayasu arteritis, urticarial vasculitis
- Granulomatous diseases: Idiopathic granulomatous hepatitis, sarcoidosis
- Autoinflammatory syndromes: Adult-onset Still's disease, Blau syndrome, CAPS^b (cryopyrin-associated periodic syndromes), Crohn's disease, DIRA (deficiency of the interleukin 1 receptor antagonist), familial Mediterranean fever, hemophagocytic syndrome, hyper-IgD syndrome (HIDS, also known as mevalonate kinase deficiency), juvenile idiopathic arthritis, PAPA syndrome (pyogenic sterile arthritis, pyoderma gangrenosum, and acne), PFAPA syndrome (periodic fever, aphthous stomatitis, pharyngitis, adenitis), recurrent idiopathic pericarditis, SAPHO (synovitis, acne, pustulosis, hyperostosis, osteomyelitis), Schnitzler's syndrome, TRAPS (tumor necrosis factor receptor–associated periodic syndrome)
- Hematologic malignancies: Amyloidosis, angioimmunoblastic lymphoma, Castleman's disease, Hodgkin's disease, hypereosinophilic syndrome, leukemia, lymphomatoid granulomatosis, malignant histiocytosis, multiple myeloma, myelodysplastic syndrome, myelofibrosis, non-Hodgkin's lymphoma, plasmacytoma, systemic mastocytosis, vaso-occlusive crisis in sickle cell disease
- Solid tumors: Most solid tumors and metastases can cause fever. Those most commonly causing FUO are breast, colon, hepatocellular, lung, pancreatic, and renal cell carcinomas.
- Benign tumors: Angiomyolipoma, cavernous hemangioma of the liver, craniopharyngioma, necrosis of dermoid tumor in Gardner's syndrome
- Miscellaneous: ADEM (acute disseminated encephalomyelitis), adrenal insufficiency, aneurysms, anomalous thoracic duct, aortic dissection, aortic-enteral fistula, aseptic meningitis (Mollaret's syndrome), atrial myxoma, brewer's yeast ingestion, Caroli disease, cholesterol emboli, cirrhosis, complex partial status epilepticus, cyclic neutropenia, drug fever, Erdheim-Chester disease, extrinsic allergic alveolitis, Fabry's disease, factitious disease, fire-eater's lung, fraudulent fever, Gaucher's disease, Hamman-Rich syndrome (acute interstitial pneumonia), Hashimoto's encephalopathy, hematoma, hypersensitivity pneumonitis, hypertriglyceridemia, hypothalamic hypopituitarism, idiopathic normal-pressure hydrocephalus, inflammatory pseudotumor, Kikuchi's disease, linear IgA dermatosis, mesenteric fibromatosis, metal fume fever, milk protein allergy, myotonic dystrophy, nonbacterial osteitis, organic dust toxic syndrome, panniculitis, POEMS (polyneuropathy, organomegaly, endocrinopathy, monoclonal protein, skin changes), polymer fume fever, post–cardiac injury syndrome, primary biliary cirrhosis, primary hyperparathyroidism, pulmonary embolism, pyoderma gangrenosum, retroperitoneal fibrosis, Rosai-Dorfman disease, sclerosing mesenteritis, silicone embolization, subacute thyroiditis (de Quervain's), Sweet syndrome (acute febrile neutrophilic dermatosis), thrombosis, tubulointerstitial nephritis and uveitis syndrome (TINU), ulcerative colitis
- Central: Brain tumor, cerebrovascular accident, encephalitis, hypothalamic dysfunction
- Peripheral: Anhidrotic ectodermal dysplasia, exercise-induced hyperthermia, hyperthyroidism, pheochromocytoma
^a This table includes all causes of FUO that have been described in the literature. ^b CAPS includes chronic infantile neurologic cutaneous and articular syndrome (CINCA, also known as neonatal-onset multisystem inflammatory disease, or NOMID), familial cold autoinflammatory syndrome (FCAS), and Muckle-Wells syndrome.

Serologic testing for Q fever, which results from exposure to animals or animal products, should be performed when the patient lives in a rural area or has a history of heart valve disease, an aortic aneurysm, or a vascular prosthesis. In patients with unexplained symptoms localized to the central nervous system (CNS), gastrointestinal tract, or joints, polymerase chain reaction (PCR) testing for Tropheryma whipplei should be performed. Travel to or (former) residence in tropical countries or the American Southwest should lead to consideration of infectious diseases such as malaria, leishmaniasis, histoplasmosis, or coccidioidomycosis. Fever with signs of endocarditis and negative blood culture results poses a special problem. Culture-negative endocarditis may be due to difficult-to-culture bacteria such as nutritionally variant bacteria, HACEK organisms (Haemophilus parainfluenzae, H. paraphrophilus, Aggregatibacter species [actinomycetemcomitans, aphrophilus], Cardiobacterium species [hominis, valvarum], Eikenella corrodens, and Kingella kingae; discussed below), Coxiella burnetii (as indicated above), T. whipplei, and Bartonella species. Marantic endocarditis is a sterile thrombotic disease that occurs as a paraneoplastic phenomenon, especially with adenocarcinomas. Sterile endocarditis is also seen in the context of systemic lupus erythematosus and antiphospholipid syndrome.

Of the NIIDs, large-vessel vasculitis, polymyalgia rheumatica, sarcoidosis, familial Mediterranean fever, and adult-onset Still's disease are rather common diagnoses in patients with FUO. The hereditary autoinflammatory syndromes are very rare and usually present in young patients. Schnitzler's syndrome, which can present at any age, is uncommon but can often be diagnosed easily in a patient with FUO who presents with urticaria, bone pain, and monoclonal gammopathy. Although most tumors can present with fever, malignant lymphoma is by far the most common FUO diagnosis among the neoplasms. Sometimes the fever even precedes lymphadenopathy detectable by physical examination.

Apart from drug-induced fever and exercise-induced hyperthermia, none of the miscellaneous causes of fever is found very frequently in patients with FUO. Virtually all drugs can cause fever, even fever that begins after long-term use. Drug-induced fever, including DRESS (drug reaction with eosinophilia and systemic symptoms; Fig. 25e-49), is often accompanied by eosinophilia and also by lymphadenopathy, which can be extensive. More common causes of drug-induced fever are allopurinol, carbamazepine, lamotrigine, phenytoin, sulfasalazine, furosemide, antimicrobial drugs (especially sulfonamides, minocycline, vancomycin, β-lactam antibiotics, and isoniazid), some cardiovascular drugs (e.g., quinidine), and some antiretroviral drugs (e.g., nevirapine).
Exercise-induced hyperthermia (Chap. 479e) is characterized by an elevated body temperature that is associated with moderate to strenuous exercise lasting from half an hour up to several hours without an increase in CRP level or ESR; typically these patients sweat during the temperature elevation. Factitious fever (fever artificially induced by the patient, for example by IV injection of contaminated water) should be considered in all patients but is more common among young women in health care professions. In fraudulent fever, the patient is normothermic but manipulates the thermometer. Simultaneous measurements at different body sites (rectum, ear, mouth) should rapidly identify this diagnosis. Another clue to fraudulent fever is a dissociation between pulse rate and temperature.

Previous studies of FUO have shown that a diagnosis is more likely in elderly patients than in younger age groups. In many cases, FUO in the elderly results from an atypical manifestation of a common disease, among which giant cell arteritis and polymyalgia rheumatica are most frequently involved. Tuberculosis is the most common infectious disease associated with FUO in elderly patients, occurring much more often than in younger patients. As many of these diseases are treatable, it is well worth pursuing the cause of fever in elderly patients.

APPROACH TO THE PATIENT: Fever of Unknown Origin
Figure 26-1 shows a structured approach to patients presenting with FUO. The most important step in the diagnostic workup is the search for potentially diagnostic clues (PDCs) through complete and repeated history-taking and physical examination and the obligatory investigations listed above. PDCs are defined as all localizing signs, symptoms, and abnormalities potentially pointing toward a diagnosis. Although PDCs are often misleading, only with their help can a concise list of probable diagnoses be made.
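The fraudulent-fever check described above (simultaneous measurements at several body sites while the patient's own thermometer reads febrile) amounts to a simple comparison. A minimal, purely illustrative sketch follows; the function name, parameters, and the 0.5°C tolerance are assumptions for demonstration, not a validated clinical rule:

```python
def fraudulent_fever_suspected(site_temps_c, thermometer_reading_c,
                               tolerance_c=0.5):
    """Flag a reported fever that simultaneous multi-site measurements
    (e.g., rectum, ear, mouth) fail to reproduce."""
    # If even the warmest measured site is well below the reported reading,
    # thermometer manipulation should be suspected.
    return thermometer_reading_c - max(site_temps_c) > tolerance_c
```

A reported reading of 39.2°C against simultaneous site temperatures of 36.8–37.0°C would be flagged, whereas site temperatures that track the reading would not.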
The history should include information about the fever pattern (continuous or recurrent) and duration, previous medical history, present and recent drug use, family history, sexual history, country of origin, recent and remote travel, unusual environmental exposures associated with travel or hobbies, and animal contacts. A complete physical examination should be performed, with special attention to the eyes, lymph nodes, temporal arteries, liver, spleen, sites of previous surgery, entire skin surface, and mucous membranes. Before further diagnostic tests are initiated, antibiotic and glucocorticoid treatment, which can mask many diseases, should be stopped. For example, blood and other cultures are not reliable when samples are obtained during antibiotic treatment, and the size of enlarged lymph nodes usually decreases during glucocorticoid treatment, regardless of the cause of the lymphadenopathy. Despite the high number of false-positive ultrasounds and the relatively low sensitivity of chest x-rays, the performance of these simple, low-cost diagnostic tests remains obligatory in all patients with FUO in order to separate cases that are caused by easily diagnosed diseases from those that are not. Abdominal ultrasound is preferred to abdominal CT as an obligatory test because of relatively low cost, lack of radiation burden, and absence of side effects. Only rarely do biochemical tests (beyond the obligatory tests needed to classify a patient’s fever as FUO) lead directly to a definitive diagnosis in the absence of PDCs. The diagnostic yield of immunologic serology other than that included in the obligatory tests is relatively low. These tests more often yield false-positive rather than true-positive results and are of little use without PDCs pointing to specific immunologic disorders. 
Given the absence of specific symptoms in many patients and the relatively low cost of the test, investigation of cryoglobulins appears to be a valuable screening test in patients with FUO. Multiple blood samples should be cultured in the laboratory long enough to ensure ample growth time for any fastidious organisms, such as HACEK organisms. It is critical to inform the laboratory of the intent to test for unusual organisms. Specialized media should be used when the history suggests uncommon microorganisms, such as Histoplasma or Legionella. Performing more than three blood cultures or more than one urine culture is useless in patients with FUO in the absence of PDCs (e.g., a high level of clinical suspicion of endocarditis). Repeating blood or urine cultures is useful only when previously cultured samples were collected during antibiotic treatment or within 1 week after its discontinuation. FUO with headache should prompt microbiologic examination of cerebrospinal fluid (CSF) for organisms including herpes simplex virus (HSV; especially HSV-2), Cryptococcus neoformans, and Mycobacterium tuberculosis. In CNS tuberculosis, the CSF typically has elevated protein and lowered glucose concentrations, with a mononuclear pleocytosis. CSF protein levels range from 100 to 500 mg/dL in most patients, the CSF glucose concentration is <45 mg/dL in 80% of cases, and the usual CSF cell count is between 100 and 500 cells/μL. Microbiologic serology should not be included in the diagnostic workup in patients without PDCs for specific infections. A TST is included in the obligatory investigations, but it may yield false-negative results in patients with miliary tuberculosis, malnutrition, or immunosuppression. 
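The typical CSF profile of CNS tuberculosis quoted above (protein 100–500 mg/dL in most patients, glucose <45 mg/dL in ~80% of cases, 100–500 cells/μL with mononuclear pleocytosis) can be written as a simple consistency check. This is an illustrative sketch only, not a diagnostic rule; the function name and argument names are assumptions:

```python
# Illustrative only: the typical CSF pattern of CNS tuberculosis described
# in the text, encoded as a boolean flag. Thresholds are taken from the
# text; a real evaluation rests on microbiology, not on ranges alone.

def csf_suggests_cns_tb(protein_mg_dl, glucose_mg_dl, cells_per_ul,
                        mononuclear_predominance):
    """True when all values fall in the ranges typical of CNS TB."""
    return (100 <= protein_mg_dl <= 500      # elevated protein (most patients)
            and glucose_mg_dl < 45           # low glucose (~80% of cases)
            and 100 <= cells_per_ul <= 500   # usual cell count
            and mononuclear_predominance)    # mononuclear pleocytosis
```

For instance, a CSF protein of 250 mg/dL, glucose of 30 mg/dL, and 200 mononuclear cells/μL would fit the typical pattern.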
Although the interferon γ release assay is less influenced by prior vaccination with bacille Calmette-Guérin or by infection with nontuberculous mycobacteria, its sensitivity is similar to that of the TST; a negative TST or interferon γ release assay therefore does not exclude a diagnosis of tuberculosis. Miliary tuberculosis is especially difficult to diagnose. Granulomatous disease in liver or bone marrow biopsy samples, for example, should always lead to a (re)consideration of this diagnosis. If miliary tuberculosis is suspected, liver biopsy for acid-fast smear, culture, and PCR probably still has the highest diagnostic yield; however, biopsies of bone marrow, lymph nodes, or other involved organs also can be considered. The diagnostic yield of echocardiography, sinus radiography, radiologic or endoscopic evaluation of the gastrointestinal tract, and bronchoscopy is very low in the absence of PDCs. Therefore, these tests should not be used as screening procedures. After identification of all PDCs retrieved from the history, physical examination, and obligatory tests, a limited list of the most probable diagnoses should be made. Since most investigations are helpful only for patients who have PDCs for the diagnoses sought, further diagnostic procedures should be limited to specific investigations aimed at confirming or excluding diseases on this list. 
Figure 26-1 Structured approach to patients with FUO. ALT, alanine aminotransferase; AST, aspartate aminotransferase; CRP, C-reactive protein; ESR, erythrocyte sedimentation rate; FDG-PET/CT, 18F-fluorodeoxyglucose positron emission tomography combined with low-dose computed tomography; LDH, lactate dehydrogenase; PDCs, potentially diagnostic clues (all localizing signs, symptoms, and abnormalities potentially pointing toward a diagnosis); NSAID, nonsteroidal anti-inflammatory drug. The flowchart content, in outline form:
- Fever ≥38.3°C (101°F) and illness lasting ≥3 weeks and no known immunocompromised state
- History and physical examination; stop antibiotic treatment and glucocorticoids
- Obligatory investigations: ESR and CRP, hemoglobin, platelet count, leukocyte count and differential, electrolytes, creatinine, total protein, protein electrophoresis, alkaline phosphatase, AST, ALT, LDH, creatine kinase, antinuclear antibodies, rheumatoid factor, urinalysis, blood cultures (n = 3), urine culture, chest x-ray, abdominal ultrasonography, and tuberculin skin test
- PDCs present → guided diagnostic tests → DIAGNOSIS or NO DIAGNOSIS
- PDCs absent or misleading → exclude manipulation with thermometer; stop or replace medication to exclude drug fever; cryoglobulin and funduscopy → DIAGNOSIS or NO DIAGNOSIS
- NO DIAGNOSIS → FDG-PET/CT (or labeled leukocyte scintigraphy or gallium scan):
  - scintigraphy abnormal → confirmation of abnormality (e.g., biopsy, culture) → DIAGNOSIS or NO DIAGNOSIS
  - scintigraphy normal → repeat history and physical examination; perform PDC-driven invasive testing → DIAGNOSIS or NO DIAGNOSIS
- NO DIAGNOSIS → chest and abdominal CT; temporal artery biopsy (≥55 years) → DIAGNOSIS or NO DIAGNOSIS
- NO DIAGNOSIS → stable condition: follow-up for new PDCs, consider NSAID; deterioration: further diagnostic tests, consider therapeutic trial

In FUO, the diagnostic pointers are numerous and diverse but may be missed on initial examination, often being detected only by a very careful examination performed subsequently. In the absence of PDCs, the history and physical examination should therefore be repeated regularly. One of the first steps should be to rule out factitious or fraudulent fever, particularly in patients without signs of inflammation in laboratory tests. All medications, including nonprescription drugs and nutritional supplements, should be discontinued early in the evaluation to exclude drug fever. If fever persists beyond 72 h after discontinuation of the suspected drug, it is unlikely that this drug is the cause. In patients without PDCs or with only misleading PDCs, funduscopy by an ophthalmologist may be useful in the early stage of the diagnostic workup. When the first-stage diagnostic tests do not lead to a diagnosis, scintigraphy should be performed, especially when the ESR or CRP level is elevated.

Recurrent Fever
In patients with recurrent fever, the diagnostic workup should consist of thorough history-taking, physical examination, and obligatory tests. The search for PDCs should be directed to clues matching known recurrent syndromes (Table 26-3).
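As a rough illustration, the staged logic of this workup (PDC-guided testing first, then scintigraphy-directed confirmation, then second-stage screening, then watchful follow-up) can be sketched as a function. Everything here, including the names, arguments, and return strings, is an assumption for demonstration; the real decision process is clinical judgment, not boolean flags:

```python
# Highly simplified, illustrative sketch of the staged FUO workup
# structure described in the text.

def fuo_workup_stage(pdcs_present, guided_tests_diagnostic,
                     scintigraphy_abnormal, biopsy_confirms,
                     second_stage_diagnostic):
    """Return a label for the stage at which a diagnosis is reached."""
    # Stage 1: PDC-guided testing (repeated history, physical examination,
    # and the obligatory investigations)
    if pdcs_present and guided_tests_diagnostic:
        return "diagnosis: PDC-guided tests"
    # Stage 2: FDG-PET/CT (or conventional scintigraphy), with biopsy or
    # culture to confirm any abnormality found
    if scintigraphy_abnormal and biopsy_confirms:
        return "diagnosis: scintigraphy-directed biopsy/culture"
    # Stage 3: second-stage screening (chest and abdominal CT; temporal
    # artery biopsy in patients >= 55 years)
    if second_stage_diagnostic:
        return "diagnosis: second-stage screening"
    # Otherwise: follow up if stable (consider NSAID), or pursue further
    # tests / a therapeutic trial if the patient deteriorates
    return "no diagnosis: follow-up"
```

For example, a patient with no PDCs but an abnormal FDG-PET/CT confirmed by biopsy would exit at the second stage.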
Table 26-3 Causes of recurrent fever^a
- Systemic rheumatic and autoimmune diseases: Ankylosing spondylitis, antiphospholipid syndrome, autoimmune hemolytic anemia, autoimmune hepatitis, Behçet's disease, cryoglobulinemia, gout, polymyositis, pseudogout, reactive arthritis, relapsing polychondritis, systemic lupus erythematosus
- Vasculitis: Churg-Strauss syndrome, giant cell vasculitis/polymyalgia rheumatica, hypersensitivity vasculitis, polyarteritis nodosa, urticarial vasculitis
- Granulomatous diseases: Idiopathic granulomatous hepatitis, sarcoidosis
- Autoinflammatory syndromes: Adult-onset Still's disease, Blau syndrome, CAPS^b (cryopyrin-associated periodic syndrome), Crohn's disease, DIRA (deficiency of the IL-1 receptor antagonist), familial Mediterranean fever, hemophagocytic syndrome, hyper-IgD syndrome (HIDS, also known as mevalonate kinase deficiency), juvenile idiopathic arthritis, PAPA syndrome (pyogenic sterile arthritis, pyoderma gangrenosum, and acne), PFAPA syndrome (periodic fever, aphthous stomatitis, pharyngitis, adenitis), recurrent idiopathic pericarditis, SAPHO (synovitis, acne, pustulosis, hyperostosis, osteomyelitis), Schnitzler's syndrome, TRAPS (tumor necrosis factor receptor–associated periodic syndrome)
- Malignancies: Angioimmunoblastic lymphoma, Castleman's disease, colon carcinoma, craniopharyngioma, Hodgkin's disease, non-Hodgkin lymphoma, malignant histiocytosis, mesothelioma
- Miscellaneous: Adrenal insufficiency, aortic-enteral fistula, aseptic meningitis (Mollaret's syndrome), atrial myxoma, brewer's yeast ingestion, cholesterol emboli, cyclic neutropenia, drug fever, extrinsic allergic alveolitis, Fabry's disease, factitious disease, fraudulent fever, Gaucher's disease, hypersensitivity pneumonitis, hypertriglyceridemia, hypothalamic hypopituitarism, inflammatory pseudotumor, metal fume fever, milk protein allergy, polymer fume fever, pulmonary embolism, sclerosing mesenteritis
- Central: Hypothalamic dysfunction
- Peripheral: Anhidrotic ectodermal dysplasia, exercise-induced hyperthermia, pheochromocytoma
^a This table includes all causes of recurrent fever that have been described in the literature. ^b CAPS includes chronic infantile neurologic cutaneous and articular syndrome (CINCA, also known as neonatal-onset multisystem inflammatory disease, or NOMID), familial cold autoinflammatory syndrome (FCAS), and Muckle-Wells syndrome.

Patients should be asked to return during a febrile episode so that the history, physical examination, and laboratory tests can be repeated during a symptomatic phase. Further diagnostic tests, such as scintigraphic imaging (see below), should be performed only during a febrile episode because abnormalities may be absent between episodes. In patients with recurrent fever lasting >2 years, it is very unlikely that the fever is caused by infection or malignancy. Further diagnostic tests in that direction should be considered only when PDCs for infections, vasculitis syndromes, or malignancy are present or when the patient's clinical condition is deteriorating.

Scintigraphy
Scintigraphic imaging is a noninvasive method allowing delineation of foci in all parts of the body on the basis of functional changes in tissues. This procedure plays an important role in the diagnosis of patients with FUO in clinical practice. Conventional scintigraphic methods used in clinical practice are 67Ga-citrate scintigraphy and 111In- or 99mTc-labeled leukocyte scintigraphy. Focal infectious and inflammatory processes can also be detected by several radiologic techniques, such as CT, MRI, and ultrasound. However, because substantial pathologic changes are lacking in the early phase, infectious and inflammatory foci may escape detection by these techniques at that stage. Furthermore, distinguishing active infectious or inflammatory lesions from residual changes due to cured processes or surgery remains critical.
Finally, CT and MRI routinely provide information only on part of the body, while scintigraphy readily allows whole-body imaging.

Fluorodeoxyglucose Positron Emission Tomography
18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) has become an established imaging procedure in FUO. FDG accumulates in tissues with a high rate of glycolysis, which occurs not only in malignant cells but also in activated leukocytes, and thus permits the imaging of acute and chronic inflammatory processes. Normal uptake may obscure pathologic foci in the brain, heart, bowel, kidneys, and bladder. In patients with fever, bone marrow uptake is frequently increased in a nonspecific way due to cytokine activation, which upregulates glucose transporters in bone marrow cells. Compared with conventional scintigraphy, FDG-PET offers the advantages of higher resolution, greater sensitivity in chronic low-grade infections, and a high degree of accuracy in the central skeleton. Furthermore, vascular uptake of FDG is increased in patients with vasculitis. The mechanisms responsible for FDG uptake do not allow differentiation among infection, sterile inflammation, and malignancy. However, since all of these disorders are causes of FUO, FDG-PET can be used to guide additional diagnostic tests (e.g., targeted biopsies) that may yield the final diagnosis. Improved anatomic resolution by direct integration with CT (FDG-PET/CT) has further improved the accuracy of this modality. Overall rates of helpfulness in the final diagnosis of FUO are 40% for FDG-PET and 54% for FDG-PET/CT. In one study, FDG-PET was never helpful in diagnosing FUO in patients with a normal CRP level and a normal ESR. In two prospective studies in patients with FUO, FDG-PET was superior to 67Ga-citrate scintigraphy, with a similar or better diagnostic yield and results that were available within hours instead of days.
In one study, the sensitivity of FDG-PET was greater than that of 111In-granulocyte scintigraphy (86% vs 20%) in patients with FUO. Although scintigraphic techniques do not directly provide a definitive diagnosis, they often identify the anatomic location of a particular ongoing metabolic process and, with the help of other techniques such as biopsy and culture, facilitate timely diagnosis and treatment. Pathologic FDG uptake is quickly eradicated by treatment with glucocorticoids in many diseases, including vasculitis and lymphoma; therefore, glucocorticoid use should be stopped or postponed until after FDG-PET is performed. Results reported in the literature and the advantages offered by FDG-PET indicate that conventional scintigraphic techniques should be replaced by FDG-PET/CT in the investigation of patients with FUO at institutions where this technique is available. FDG-PET/CT is a relatively expensive procedure whose availability is still limited compared with that of CT and conventional scintigraphy. Nevertheless, FDG-PET/CT can be cost-effective in the FUO diagnostic workup if used at an early stage, helping to establish an early diagnosis, reducing days of hospitalization for diagnostic purposes, and obviating unnecessary and unhelpful tests. In some cases, more invasive tests are appropriate. Abnormalities found with scintigraphic techniques often need to be confirmed by pathology and/or culture of biopsy specimens. If lymphadenopathy is found, lymph node biopsy is necessary, even when the affected lymph nodes are hard to reach. In the case of skin lesions, skin biopsy should be undertaken. In one study, pulmonary wedge excision, histologic examination of an excised tonsil, and biopsy of the peritoneum were performed in light of PDCs or abnormal FDG-PET results and yielded a diagnosis. If no diagnosis is reached despite scintigraphic and PDC-driven histologic investigations or culture, second-stage screening diagnostic tests should be considered (Fig.
26-1). In three studies, the diagnostic yield of screening chest and abdominal CT in patients with FUO was ~20%. The specificity of chest CT was ~80%, but that of abdominal CT varied between 63% and 80%. Despite the relatively limited specificity of abdominal CT and the probably limited additional value of chest CT after normal FDG-PET, chest and abdominal CT may be used as screening procedures at a later stage of the diagnostic protocol because of their noninvasive nature and high sensitivity. Bone marrow aspiration is seldom useful in the absence of PDCs for bone marrow disorders. With addition of FDG-PET, which is very sensitive in detecting lymphoma, carcinoma, and osteomyelitis, the value of bone marrow biopsy as a screening procedure is probably further reduced. Several studies have shown a high prevalence of giant cell arteritis among patients with FUO, with rates up to 17% among elderly patients. Giant cell arteritis often involves large arteries and in most cases can be diagnosed by FDG-PET. However, temporal artery biopsy is still recommended for patients ≥55 years of age in a later stage of the diagnostic protocol: FDG-PET will not be useful in vasculitis limited to the temporal arteries because of the small diameter of these vessels and the high FDG uptake in the adjacent brain. In the past, liver biopsies have often been performed as a screening procedure in patients with FUO. In each of two recent studies, liver biopsy as part of the later stage of a screening diagnostic protocol was helpful in only one patient. Moreover, abnormal liver tests are not predictive of a diagnostic liver biopsy in FUO. Liver biopsy is an invasive procedure that carries the possibility of complications and even death. Therefore, it should not be used for screening purposes in patients with FUO except in those with PDCs for liver disease.
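The later-stage screening rules described above (chest and abdominal CT for all patients, temporal artery biopsy for patients ≥55 years of age, and invasive biopsies only when PDCs point to them) can be sketched as a small decision helper. This is an illustrative sketch only, not clinical software; the function name, the PDC category strings, and the returned test names are all hypothetical.

```python
def later_stage_tests(age, pdcs):
    """Suggest later-stage screening tests for still-undiagnosed FUO.

    age  -- patient age in years
    pdcs -- set of potentially diagnostic clue (PDC) categories,
            e.g. {"liver"} or {"bone_marrow"} (hypothetical labels)
    """
    # Noninvasive and sensitive; diagnostic yield ~20% in three studies.
    tests = ["chest CT", "abdominal CT"]
    if age >= 55:
        # Giant cell arteritis is common in elderly FUO patients and may be
        # missed by FDG-PET when limited to the temporal arteries.
        tests.append("temporal artery biopsy")
    if "bone_marrow" in pdcs:
        # Seldom useful as a screening test in the absence of PDCs.
        tests.append("bone marrow aspiration")
    if "liver" in pdcs:
        # Invasive; reserved for patients with PDCs for liver disease.
        tests.append("liver biopsy")
    return tests

print(later_stage_tests(70, set()))
print(later_stage_tests(40, {"liver"}))
```

For example, a 70-year-old with no PDCs would be offered screening CT plus temporal artery biopsy, while a 40-year-old with liver PDCs would be offered screening CT plus liver biopsy.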
In patients with unexplained fever after all of the above procedures, the last step in the diagnostic workup—with only a marginal diagnostic yield—comes at an extraordinarily high cost in terms of both expense and discomfort for the patient. Repetition of a thorough history-taking and physical examination and review of laboratory results and imaging studies (including those from other hospitals) are recommended. Diagnostic delay often results from a failure to recognize PDCs in the available information. In these patients with persisting FUO, waiting for new PDCs to appear probably is better than ordering more screening investigations. Only when a patient's condition deteriorates without providing new PDCs should a further diagnostic workup be performed. Empirical therapeutic trials with antibiotics, glucocorticoids, or antituberculous agents should be avoided in FUO except when a patient's condition is rapidly deteriorating after the aforementioned diagnostic tests have failed to provide a definite diagnosis. Antibiotic or antituberculous therapy may irrevocably diminish the ability to culture fastidious bacteria or mycobacteria. However, hemodynamic instability or neutropenia is a good indication for empirical antibiotic therapy. If the TST is positive or if granulomatous disease is present with anergy and sarcoidosis seems unlikely, a therapeutic trial for tuberculosis should be started. Especially in miliary tuberculosis, it may be very difficult to obtain a rapid diagnosis. If the fever does not respond after 6 weeks of empirical antituberculous treatment, another diagnosis should be considered. COLCHICINE, NONSTEROIDAL ANTI-INFLAMMATORY DRUGS, AND GLUCOCORTICOIDS Colchicine is highly effective in preventing attacks of familial Mediterranean fever but is not always effective once an attack is well under way.
When familial Mediterranean fever is suspected, the response to colchicine is not a completely reliable diagnostic tool in the acute phase, but with colchicine treatment most patients show remarkable improvements in the frequency and severity of subsequent febrile episodes within weeks to months. If the fever persists and the source remains elusive after completion of the later-stage investigations, supportive treatment with nonsteroidal anti-inflammatory drugs (NSAIDs) can be helpful. The response of adult-onset Still’s disease to NSAIDs is dramatic in some cases. The effects of glucocorticoids on giant cell arteritis and polymyalgia rheumatica are equally impressive. Early empirical trials with glucocorticoids, however, decrease the chances of reaching a diagnosis for which more specific and sometimes life-saving treatment might be more appropriate, such as malignant lymphoma. The ability of NSAIDs and glucocorticoids to mask fever while permitting the spread of infection or lymphoma dictates that their use should be avoided unless infectious diseases and malignant lymphoma have been largely ruled out and inflammatory disease is probable and is likely to be debilitating or threatening. Interleukin (IL) 1 is a key cytokine in local and systemic inflammation and the febrile response. The availability of specific IL-1-targeting agents has revealed a pathologic role of IL-1-mediated inflammation in a growing list of diseases. Anakinra, a recombinant form of the naturally occurring IL-1 receptor antagonist (IL-1Ra), blocks the activity of both IL-1α and IL-1β. Anakinra is extremely effective in the treatment of many autoinflammatory syndromes, such as familial Mediterranean fever, cryopyrin-associated periodic syndrome, tumor necrosis factor receptor–associated periodic syndrome, hyper-IgD syndrome, and Schnitzler’s syndrome. There is a growing list of other chronic inflammatory disorders in which the reduction of IL-1 activity can be highly effective. 
A therapeutic trial with anakinra can be considered in patients whose FUO has not been diagnosed after later-stage diagnostic tests. Although most chronic inflammatory conditions without a known basis can be controlled with glucocorticoids, monotherapy with IL-1 blockade can provide improved control without the metabolic, immunologic, and gastrointestinal side effects of glucocorticoid administration.
CHAPTER 26 Fever of Unknown Origin
PROGNOSIS FUO-related mortality rates have continuously declined over recent decades. The majority of fevers are caused by treatable diseases, and the risk of death related to FUO is, of course, dependent on the underlying disease. In a study by our group (Table 26-1), none of 37 FUO patients without a diagnosis died during a follow-up period of at least 6 months; 4 of 36 patients with a diagnosis died during follow-up due to infection (n = 1) or malignancy (n = 3). Other studies have also shown that malignancy accounts for most FUO-related deaths. Non-Hodgkin's lymphoma carries a disproportionately high death toll. In nonmalignant FUO, fatality rates are very low. The good outcome in patients without a diagnosis confirms that potentially lethal occult diseases are very unusual and that empirical therapy with antibiotics, antituberculous agents, or glucocorticoids is rarely required in stable patients. In less affluent regions, infectious diseases are still a major cause of FUO, and outcomes may be different.
Cardinal Manifestations and Presentation of Diseases
Syncope is a transient, self-limited loss of consciousness due to acute global impairment of cerebral blood flow. The onset is rapid, duration brief, and recovery spontaneous and complete. Other causes of transient loss of consciousness need to be distinguished from syncope; these include seizures, vertebrobasilar ischemia, hypoxemia, and hypoglycemia. A syncopal prodrome (presyncope) is common, although loss of consciousness may occur without any warning symptoms.
Typical presyncopal symptoms include dizziness, lightheadedness or faintness, weakness, fatigue, and visual and auditory disturbances. The causes of syncope can be divided into three general categories: (1) neurally mediated syncope (also called reflex or vasovagal syncope), (2) orthostatic hypotension, and (3) cardiac syncope. Neurally mediated syncope comprises a heterogeneous group of functional disorders that are characterized by a transient change in the reflexes responsible for maintaining cardiovascular homeostasis. Episodic vasodilation (or loss of vasoconstrictor tone) and bradycardia occur in varying combinations, resulting in temporary failure of blood pressure control. In contrast, in patients with orthostatic hypotension due to autonomic failure, these cardiovascular homeostatic reflexes are chronically impaired. Cardiac syncope may be due to arrhythmias or structural cardiac diseases that cause a decrease in cardiac output. The clinical features, underlying pathophysiologic mechanisms, therapeutic interventions, and prognoses differ markedly among these three causes. Syncope is a common presenting problem, accounting for approximately 3% of all emergency room visits and 1% of all hospital admissions. The annual cost for syncope-related hospitalization in the United States is ~$2.4 billion. Syncope has a lifetime cumulative incidence of up to 35% in the general population. The peak incidence in the young occurs between ages 10 and 30 years, with a median peak around 15 years. Neurally mediated syncope is the etiology in the vast majority of these cases. In elderly adults, there is a sharp rise in the incidence of syncope after 70 years. In population-based studies, neurally mediated syncope is the most common cause of syncope. The incidence is slightly higher in females than males. In young subjects, there is often a family history in first-degree relatives. 
Cardiovascular disease due to structural disease or arrhythmias is the next most common cause in most series, particularly in emergency room settings and in older patients. Orthostatic hypotension also increases in prevalence with age because of the reduced baroreflex responsiveness, decreased cardiac compliance, and attenuation of the vestibulosympathetic reflex associated with aging. In the elderly, orthostatic hypotension is substantially more common in institutionalized (54–68%) than community-dwelling (6%) individuals, an observation most likely explained by the greater prevalence of predisposing neurologic disorders, physiologic impairment, and vasoactive medication use among institutionalized patients. The prognosis after a single syncopal event for all age groups is generally benign. In particular, syncope of noncardiac and unexplained origin in younger individuals has an excellent prognosis; life expectancy is unaffected. By contrast, syncope due to a cardiac cause, either structural heart disease or primary arrhythmic disease, is associated with an increased risk of sudden cardiac death and mortality from other causes.
TABLE 27-1 HIGH-RISK FEATURES INDICATING HOSPITALIZATION OR INTENSIVE EVALUATION OF SYNCOPE
Chest pain suggesting coronary ischemia
Features of congestive heart failure
Moderate or severe valvular disease
Moderate or severe structural cardiac disease
Electrocardiographic features of ischemia
History of ventricular arrhythmias
Prolonged QT interval (>500 ms)
Repetitive sinoatrial block or sinus pauses
Persistent sinus bradycardia
Bi- or trifascicular block or intraventricular conduction delay with QRS duration ≥120 ms
Atrial fibrillation
Nonsustained ventricular tachycardia
Family history of sudden death
Preexcitation syndromes
Brugada pattern on ECG
Palpitations at time of syncope
Syncope at rest or during exercise
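As a toy illustration, the high-risk features listed above can be encoded as a simple set-membership check: if any feature is present, the patient is flagged for hospitalization or intensive evaluation. The feature strings and function name are hypothetical identifiers, not a validated triage instrument.

```python
# Illustrative sketch only: high-risk syncope features encoded as a set.
# The strings below are hypothetical labels paraphrasing the table entries.
HIGH_RISK_FEATURES = {
    "chest pain suggesting coronary ischemia",
    "congestive heart failure",
    "moderate or severe valvular disease",
    "moderate or severe structural cardiac disease",
    "ECG features of ischemia",
    "history of ventricular arrhythmias",
    "prolonged QT interval (>500 ms)",
    "repetitive sinoatrial block or sinus pauses",
    "persistent sinus bradycardia",
    "bi- or trifascicular block or QRS >= 120 ms",
    "atrial fibrillation",
    "nonsustained ventricular tachycardia",
    "family history of sudden death",
    "preexcitation syndrome",
    "Brugada pattern on ECG",
    "palpitations at time of syncope",
    "syncope at rest or during exercise",
}

def needs_intensive_evaluation(findings):
    """True if any finding matches one of the high-risk features."""
    return bool(HIGH_RISK_FEATURES & set(findings))

print(needs_intensive_evaluation(["atrial fibrillation"]))
print(needs_intensive_evaluation(["isolated vasovagal faint"]))
```

The set intersection makes the check order-independent: any single matching finding is sufficient to trigger the flag.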
Similarly, mortality rate is increased in individuals with syncope due to orthostatic hypotension related to age and the associated comorbid conditions (Table 27-1). The upright posture imposes a unique physiologic stress upon humans; most, although not all, syncopal episodes occur from a standing position. Standing results in pooling of 500–1000 mL of blood in the lower extremities and splanchnic circulation. There is a decrease in venous return to the heart and reduced ventricular filling that result in diminished cardiac output and blood pressure. These hemodynamic changes provoke a compensatory reflex response, initiated by the baroreceptors in the carotid sinus and aortic arch, resulting in increased sympathetic outflow and decreased vagal nerve activity (Fig. 27-1). The reflex increases peripheral resistance, venous return to the heart, and cardiac output and thus limits the fall in blood pressure. If this response fails, as is the case chronically in orthostatic hypotension and transiently in neurally mediated syncope, cerebral hypoperfusion occurs. Syncope is a consequence of global cerebral hypoperfusion and thus represents a failure of cerebral blood flow autoregulatory mechanisms. FIGURE 27-1 The baroreflex. A decrease in arterial pressure unloads the baroreceptors—the terminals of afferent fibers of the glossopharyngeal and vagus nerves—that are situated in the carotid sinus and aortic arch. This leads to a reduction in the afferent impulses that are relayed from these mechanoreceptors through the glossopharyngeal and vagus nerves to the nucleus of the tractus solitarius (NTS) in the dorsomedial medulla. The reduced baroreceptor afferent activity produces a decrease in vagal nerve input to the sinus node that is mediated via connections of the NTS to the nucleus ambiguus (NA).
There is an increase in sympathetic efferent activity that is mediated by the NTS projections to the caudal ventrolateral medulla (CVLM) (an excitatory pathway) and from there to the rostral ventrolateral medulla (RVLM) (an inhibitory pathway). The activation of RVLM presympathetic neurons in response to hypotension is thus predominantly due to disinhibition. In response to a sustained fall in blood pressure, vasopressin release is mediated by projections from the A1 noradrenergic cell group in the ventrolateral medulla. This projection activates vasopressin-synthesizing neurons in the magnocellular portion of the paraventricular nucleus (PVN) and the supraoptic nucleus (SON) of the hypothalamus. Blue denotes sympathetic neurons, and green denotes parasympathetic neurons. (From R Freeman: N Engl J Med 358:615, 2008.) Myogenic factors, local metabolites, and to a lesser extent autonomic neurovascular control are responsible for the autoregulation of cerebral blood flow (Chap. 330). The latency of the autoregulatory response is 5–10 s. Typically, cerebral blood flow ranges from 50 to 60 mL/min per 100 g brain tissue and remains relatively constant over perfusion pressures ranging from 50 to 150 mmHg. Cessation of blood flow for 6–8 s will result in loss of consciousness, while impairment of consciousness ensues when blood flow decreases to 25 mL/min per 100 g brain tissue. From the clinical standpoint, a fall in systemic systolic blood pressure to ~50 mmHg or lower will result in syncope. A decrease in cardiac output and/or systemic vascular resistance—the determinants of blood pressure—thus underlies the pathophysiology of syncope. Common causes of impaired cardiac output include decreased effective circulating blood volume; increased thoracic pressure; massive pulmonary embolus; cardiac brady- and tachyarrhythmias; valvular heart disease; and myocardial dysfunction.
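The quantitative relationships in this passage (blood pressure as the product of cardiac output and systemic vascular resistance, and the cerebral perfusion thresholds at which consciousness is impaired or lost) can be made concrete with a toy calculation. The helper names and example values are my own; this is a sketch of the thresholds quoted above, not a physiologic model.

```python
def mean_arterial_pressure(cardiac_output_l_min, svr_mmhg_min_l):
    # Blood pressure is determined by cardiac output x systemic vascular
    # resistance (central venous pressure is ignored in this sketch).
    return cardiac_output_l_min * svr_mmhg_min_l

def consciousness_state(cbf_ml_min_100g, flow_cessation_s=0.0):
    # Thresholds quoted in the text: cessation of cerebral blood flow for
    # 6-8 s causes loss of consciousness; flow below 25 mL/min per 100 g
    # impairs consciousness; 50-60 mL/min per 100 g is typical.
    if flow_cessation_s >= 6:
        return "loss of consciousness"
    if cbf_ml_min_100g < 25:
        return "impaired consciousness"
    return "normal"

print(mean_arterial_pressure(5.0, 18.0))  # illustrative resting values
print(consciousness_state(55))
print(consciousness_state(20))
```

With a cardiac output of 5 L/min and an illustrative resistance of 18 mmHg·min/L, the product is a mean pressure of 90 mmHg; halving either factor halves the pressure, which is why a fall in either determinant can drive pressure below the autoregulatory range.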
Systemic vascular resistance may be decreased by central and peripheral autonomic nervous system diseases, sympatholytic medications, and transiently during neurally mediated syncope. Increased cerebral vascular resistance, most frequently due to hypocarbia induced by hyperventilation, may also contribute to the pathophysiology of syncope. Two patterns of electroencephalographic (EEG) changes occur in syncopal subjects. The first is a “slow-flat-slow” pattern (Fig. 27-2) in which normal background activity is replaced with high-amplitude slow delta waves. This is followed by sudden flattening of the EEG—a cessation or attenuation of cortical activity—followed by the return of slow waves, and then normal activity. A second pattern, the “slow pattern,” is characterized by increasing and decreasing slow wave activity only. The EEG flattening that occurs in the slow-flat-slow pattern is a marker of more severe cerebral hypoperfusion. Despite the presence of myoclonic movements and other motor activity during some syncopal events, EEG seizure discharges are not detected. Neurally mediated (reflex; vasovagal) syncope is the final pathway of a complex central and peripheral nervous system reflex arc. There is a sudden, transient change in autonomic efferent activity with increased parasympathetic outflow, plus sympathoinhibition (the vasodepressor response), resulting in bradycardia, vasodilation, and/or reduced vasoconstrictor tone. The resulting fall in systemic blood pressure can then reduce cerebral blood flow to below the compensatory limits of autoregulation (Fig. 27-3). In order to elicit neurally mediated syncope, a functioning autonomic nervous system is necessary, in contrast to syncope resulting from autonomic failure (discussed below). FIGURE 27-2 The electroencephalogram (EEG) in vasovagal syncope. A 1-min segment of a tilt-table test with typical vasovagal syncope demonstrating the “slow-flat-slow” EEG pattern.
Finger beat-to-beat blood pressure, electrocardiogram (ECG), and selected EEG channels are shown. EEG slowing starts when systolic blood pressure drops to ~50 mmHg; heart rate is then approximately 45 beats/min (bpm). Asystole occurred, lasting about 8 s. The EEG flattens for a similar period, but with a delay. A transient loss of consciousness, lasting 14 s, was observed. There were muscle jerks just before and just after the flat period of the EEG. (Figure reproduced with permission from W Wieling et al: Brain 132:2630, 2009.)
Multiple triggers of the afferent limb of the reflex arc can result in neurally mediated syncope. In some situations, these can be clearly defined, e.g., the carotid sinus, the gastrointestinal tract, or the bladder. Often, however, the trigger is less easily recognized and the cause is multifactorial. Under these circumstances, it is likely that different afferent pathways converge on the central autonomic network within the medulla that integrates the neural impulses and mediates the vasodepressor-bradycardic response.
Classification of Neurally Mediated Syncope Neurally mediated syncope may be subdivided based on the afferent pathway and provocative trigger. Vasovagal syncope (the common faint) is provoked by intense emotion, pain, and/or orthostatic stress, whereas the situational reflex syncopes have specific localized stimuli that provoke the reflex vasodilation and bradycardia that leads to syncope. The afferent trigger may originate in the pulmonary system, gastrointestinal system, urogenital system, heart, and carotid artery (Table 27-2). Hyperventilation leading to hypocarbia and cerebral vasoconstriction, and raised intrathoracic pressure that impairs venous return to the heart, play a central role in many of the situational reflex syncopes.
The afferent pathway of the reflex arc differs among these disorders, but the efferent response via the vagus and sympathetic pathways is similar. Alternately, neurally mediated syncope may be subdivided based on the predominant efferent pathway. Vasodepressor syncope describes syncope predominantly due to efferent, sympathetic, vasoconstrictor failure; cardioinhibitory syncope describes syncope predominantly associated with bradycardia or asystole due to increased vagal outflow; and mixed syncope describes syncope in which there are both vagal and sympathetic reflex changes.
FIGURE 27-3 A. The paroxysmal hypotensive-bradycardic response that is characteristic of neurally mediated syncope. Noninvasive beat-to-beat blood pressure and heart rate are shown over 5 min (from 60 to 360 s) of an upright tilt on a tilt table. B. The same tracing expanded to show 80 s of the episode (from 80 to 200 s). BP, blood pressure; bpm, beats per minute; HR, heart rate.
TABLE 27-2 CAUSES OF SYNCOPE
A. Neurally Mediated Syncope
Vasovagal syncope: provoked fear, pain, anxiety, intense emotion, sight of blood, unpleasant sights and odors, orthostatic stress
Situational reflex syncope
Pulmonary: cough syncope, wind instrument player's syncope, weightlifter's syncope, “mess trick”a and “fainting lark,”b sneeze syncope, airway instrumentation
Urogenital: postmicturition syncope, urogenital tract instrumentation, prostatic massage
Gastrointestinal: swallow syncope, glossopharyngeal neuralgia, esophageal stimulation, gastrointestinal tract instrumentation, rectal examination, defecation syncope
Cardiac: Bezold-Jarisch reflex, cardiac outflow obstruction
Carotid sinus: carotid sinus sensitivity, carotid sinus massage
Ocular: ocular pressure, ocular examination, ocular surgery
B. Orthostatic Hypotension
Primary autonomic failure due to idiopathic central and peripheral neurodegenerative diseases (the “synucleinopathies”)
Lewy body diseases: Parkinson's disease, Lewy body dementia, pure autonomic failure
Multiple system atrophy (the Shy-Drager syndrome)
Secondary autonomic failure due to autonomic peripheral neuropathies: diabetes
C. Cardiac Syncope
aHyperventilation for ~1 minute, followed by sudden chest compression. bHyperventilation (~20 breaths) in a squatting position, rapid rise to standing, then Valsalva.
Features of Neurally Mediated Syncope In addition to symptoms of orthostatic intolerance such as dizziness, lightheadedness, and fatigue, premonitory features of autonomic activation may be present in patients with neurally mediated syncope. These include diaphoresis, pallor, palpitations, nausea, hyperventilation, and yawning. During the syncopal event, proximal and distal myoclonus (typically arrhythmic and multifocal) may occur, raising the possibility of epilepsy. The eyes typically remain open and usually deviate upward. Pupils are usually dilated. Roving eye movements may occur. Grunting, moaning, snorting, and stertorous breathing may be present. Urinary incontinence may occur. Fecal incontinence is very rare. Postictal confusion is also rare, although visual and auditory hallucinations and near-death and out-of-body experiences are sometimes reported. Although some predisposing factors and provocative stimuli are well established (for example, motionless upright posture, warm ambient temperature, intravascular volume depletion, alcohol ingestion, hypoxemia, anemia, pain, the sight of blood, venipuncture, and intense emotion), the underlying basis for the widely different thresholds for syncope among individuals exposed to the same provocative stimulus is not known. A genetic basis for neurally mediated syncope may exist; several studies have reported an increased incidence of syncope in first-degree relatives of fainters, but no gene or genetic marker has been identified, and environmental, social, and cultural factors have not been excluded by these studies.
Reassurance, avoidance of provocative stimuli, and plasma volume expansion with fluid and salt are the cornerstones of the management of neurally mediated syncope. Isometric counterpressure maneuvers of the limbs (leg crossing or handgrip and arm tensing) may raise blood pressure by increasing central blood volume and cardiac output. By maintaining pressure in the autoregulatory zone, these maneuvers avoid or delay the onset of syncope. Randomized controlled trials support this intervention. Fludrocortisone, vasoconstricting agents, and beta-adrenoreceptor antagonists are widely used by experts to treat refractory patients, although there is no consistent evidence from randomized controlled trials for any pharmacotherapy to treat neurally mediated syncope. Because vasodilation is the dominant pathophysiologic syncopal mechanism in most patients, use of a cardiac pacemaker is rarely beneficial. Possible exceptions are older patients (>40 years) in whom syncope is associated with asystole or severe bradycardia and patients with prominent cardioinhibition due to carotid sinus syndrome. In these patients, dual-chamber pacing may be helpful. Orthostatic hypotension, defined as a reduction in systolic blood pressure of at least 20 mmHg or diastolic blood pressure of at least 10 mmHg within 3 min of standing or head-up tilt on a tilt table, is a manifestation of sympathetic vasoconstrictor (autonomic) failure (Fig. 27-4). In many (but not all) cases, there is no compensatory increase in heart rate despite hypotension; with partial autonomic failure, heart rate may increase to some degree but is insufficient to maintain cardiac output. A variant of orthostatic hypotension is “delayed” orthostatic hypotension, which occurs beyond 3 min of standing; this may reflect a mild or early form of sympathetic adrenergic dysfunction.
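The consensus definition just given (a fall of at least 20 mmHg systolic or 10 mmHg diastolic within 3 min of standing, with a qualifying fall appearing only beyond 3 min labeled “delayed”) can be sketched as a small classifier. The function and parameter names are my own, and this is an illustrative sketch, not clinical software.

```python
def classify_orthostatic_hypotension(supine_sbp, supine_dbp,
                                     standing_sbp, standing_dbp,
                                     minutes_upright):
    """Classify a single standing (or head-up tilt) blood pressure reading.

    Orthostatic hypotension: fall of >=20 mmHg systolic or >=10 mmHg
    diastolic within 3 min of standing. A qualifying fall first seen
    beyond 3 min is "delayed" orthostatic hypotension.
    """
    sbp_fall = supine_sbp - standing_sbp
    dbp_fall = supine_dbp - standing_dbp
    if sbp_fall >= 20 or dbp_fall >= 10:
        if minutes_upright <= 3:
            return "orthostatic hypotension"
        return "delayed orthostatic hypotension"
    return "no orthostatic hypotension at this time point"

# A 25-mmHg systolic fall at 2 min of standing meets the definition.
print(classify_orthostatic_hypotension(140, 85, 115, 80, minutes_upright=2))
```

Note that a single time point cannot capture the whole picture; in practice, serial measurements over the first minutes of standing are needed, which is why the function classifies one reading rather than the patient.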
In some cases, orthostatic hypotension occurs within 15 s of standing (so-called “initial” orthostatic hypotension), a finding that may reflect a transient mismatch between cardiac output and peripheral vascular resistance and does not represent autonomic failure.
Characteristic symptoms of orthostatic hypotension include lightheadedness, dizziness, and presyncope (near-faintness) occurring in response to sudden postural change. However, symptoms may be absent or nonspecific, such as generalized weakness, fatigue, cognitive slowing, leg buckling, or headache. Visual blurring may occur, likely due to retinal or occipital lobe ischemia. Neck pain, typically in the suboccipital, posterior cervical, and shoulder region (the “coat-hanger headache”), most likely due to neck muscle ischemia, may be the only symptom. Patients may report orthostatic dyspnea (thought to reflect ventilation-perfusion mismatch due to inadequate perfusion of ventilated lung apices) or angina (attributed to impaired myocardial perfusion even with normal coronary arteries). Symptoms may be exacerbated by exertion, prolonged standing, increased ambient temperature, or meals. Syncope is usually preceded by warning symptoms but may occur suddenly, suggesting the possibility of a seizure or cardiac cause.
Supine hypertension is common in patients with orthostatic hypotension due to autonomic failure, affecting over 50% of patients in some series. Orthostatic hypotension may present after initiation of therapy for hypertension, and supine hypertension may follow treatment of orthostatic hypotension. However, in other cases, the association of the two conditions is unrelated to therapy; it may in part be explained by baroreflex dysfunction in the presence of residual sympathetic outflow, particularly in patients with central autonomic degeneration.
FIGURE 27-4 A. The gradual fall in blood pressure without a compensatory heart rate increase that is characteristic of orthostatic hypotension due to autonomic failure. Blood pressure and heart rate are shown over 5 min (from 60 to 360 s) of an upright tilt on a tilt table. B. The same tracing expanded to show 40 s of the episode (from 180 to 220 s). BP, blood pressure; bpm, beats per minute; HR, heart rate.
Causes of Neurogenic Orthostatic Hypotension Causes of neurogenic orthostatic hypotension include central and peripheral autonomic nervous system dysfunction (Chap. 454). Autonomic dysfunction of other organ systems (including the bladder, bowels, sexual organs, and sudomotor system) of varying severity frequently accompanies orthostatic hypotension in these disorders (Table 27-2). The primary autonomic degenerative disorders are multiple system atrophy (the Shy-Drager syndrome; Chap. 454), Parkinson's disease (Chap. 449), dementia with Lewy bodies (Chap. 448), and pure autonomic failure (Chap. 454). These are often grouped together as “synucleinopathies” due to the presence of alpha-synuclein, a small protein that precipitates predominantly in the cytoplasm of neurons in the Lewy body disorders (Parkinson's disease, dementia with Lewy bodies, and pure autonomic failure) and in the glia in multiple system atrophy. Peripheral autonomic dysfunction may also accompany small-fiber peripheral neuropathies such as those seen in diabetes, amyloid, immune-mediated neuropathies, hereditary sensory and autonomic neuropathies (HSAN; particularly HSAN type III, familial dysautonomia), and inflammatory neuropathies (Chaps. 459 and 460). Less frequently, orthostatic hypotension is associated with the peripheral neuropathies that accompany vitamin B12 deficiency, neurotoxic exposure, HIV and other infections, and porphyria.
Patients with autonomic failure and the elderly are susceptible to falls in blood pressure associated with meals. The magnitude of the blood pressure fall is exacerbated by large meals, meals high in carbohydrate, and alcohol intake. The mechanism of postprandial syncope is not fully elucidated. Orthostatic hypotension is often iatrogenic. Drugs from several classes may lower peripheral resistance (e.g., alpha-adrenoreceptor antagonists used to treat hypertension and prostatic hypertrophy; antihypertensive agents of several classes; nitrates and other vasodilators; tricyclic agents and phenothiazines). Iatrogenic volume depletion due to diuresis and volume depletion due to medical causes (hemorrhage, vomiting, diarrhea, or decreased fluid intake) may also result in decreased effective circulatory volume, orthostatic hypotension, and syncope.
The first step is to remove reversible causes—usually vasoactive medications (Table 454-6). Next, nonpharmacologic interventions should be introduced. These interventions include patient education regarding staged moves from supine to upright; warnings about the hypotensive effects of large meals; instructions about the isometric counterpressure maneuvers that increase intravascular pressure (see above); and raising the head of the bed to reduce supine hypertension. Intravascular volume should be expanded by increasing dietary fluid and salt. If these nonpharmacologic measures fail, pharmacologic intervention with fludrocortisone acetate and vasoconstricting agents such as midodrine, L-dihydroxyphenylserine, and pseudoephedrine should be introduced. Some patients with intractable symptoms require additional therapy with supplementary agents that include pyridostigmine, yohimbine, desmopressin acetate (DDAVP), and erythropoietin (Chap. 454).
Cardiac (or cardiovascular) syncope is caused by arrhythmias and structural heart disease. These may occur in combination because structural disease renders the heart more vulnerable to abnormal electrical activity.
Arrhythmias Bradyarrhythmias that cause syncope include those due to severe sinus node dysfunction (e.g., sinus arrest or sinoatrial block) and atrioventricular (AV) block (e.g., Mobitz type II, high-grade, and complete AV block). The bradyarrhythmias due to sinus node dysfunction are often associated with an atrial tachyarrhythmia, a disorder known as the tachycardia-bradycardia syndrome. A prolonged pause following the termination of a tachycardic episode is a frequent cause of syncope in patients with the tachycardia-bradycardia syndrome. Medications of several classes may also cause bradyarrhythmias of sufficient severity to cause syncope. Syncope due to bradycardia or asystole is referred to as a Stokes-Adams attack. Ventricular tachyarrhythmias frequently cause syncope. The likelihood of syncope with ventricular tachycardia is in part dependent on the ventricular rate; rates below 200 beats/min are less likely to cause syncope. The compromised hemodynamic function during ventricular tachycardia is caused by ineffective ventricular contraction, reduced diastolic filling due to abbreviated filling periods, loss of AV synchrony, and concurrent myocardial ischemia. Several disorders associated with cardiac electrophysiologic instability and arrhythmogenesis are due to mutations in ion channel subunit genes. These include the long QT syndrome, Brugada syndrome, and catecholaminergic polymorphic ventricular tachycardia. The long QT syndrome is a genetically heterogeneous disorder associated with prolonged cardiac repolarization and a predisposition to ventricular arrhythmias. Syncope and sudden death in patients with long QT syndrome result from a unique polymorphic ventricular tachycardia called torsades de pointes that degenerates into ventricular fibrillation. The long QT syndrome has been linked to genes encoding K+ channel α-subunits, K+ channel β-subunits, the voltage-gated Na+ channel, and a scaffolding protein, ankyrin B (ANK2).
Brugada syndrome is characterized by idiopathic ventricular fibrillation in association with right ventricular electrocardiogram (ECG) abnormalities without structural heart disease. This disorder is also genetically heterogeneous, although it is most frequently linked to mutations in the Na+ channel α-subunit, SCN5A. Catecholaminergic polymorphic ventricular tachycardia is an inherited, genetically heterogeneous disorder associated with exercise- or stress-induced ventricular arrhythmias, syncope, or sudden death. Acquired QT interval prolongation, most commonly due to drugs, may also result in ventricular arrhythmias and syncope. These disorders are discussed in detail in Chap. 277. Structural Disease Structural heart disease (e.g., valvular disease, myocardial ischemia, hypertrophic and other cardiomyopathies, cardiac masses such as atrial myxoma, and pericardial effusions) may lead to syncope by compromising cardiac output. Structural disease may also contribute to other pathophysiologic mechanisms of syncope. For example, cardiac structural disease may predispose to arrhythmogenesis; aggressive treatment of cardiac failure with diuretics and/or vasodilators may lead to orthostatic hypotension; and inappropriate reflex vasodilation may occur with structural disorders such as aortic stenosis and hypertrophic cardiomyopathy, possibly provoked by increased ventricular contractility. Treatment of cardiac disease depends on the underlying disorder. Therapies for arrhythmias include cardiac pacing for sinus node disease and AV block, and ablation, antiarrhythmic drugs, and cardioverter-defibrillators for atrial and ventricular tachyarrhythmias. These disorders are best managed by physicians with specialized skills in this area. APPROACH TO THE PATIENT: Syncope is easily diagnosed when the characteristic features are present; however, several disorders with transient real or apparent loss of consciousness may create diagnostic confusion.
Generalized and partial seizures may be confused with syncope; however, there are a number of differentiating features. Whereas tonic-clonic movements are the hallmark of a generalized seizure, myoclonic and other movements also may occur in up to 90% of syncopal episodes. Myoclonic jerks associated with syncope may be multifocal or generalized. They are typically arrhythmic and of short duration (<30 s). Mild flexor and extensor posturing also may occur. Partial or partial-complex seizures with secondary generalization are usually preceded by an aura, commonly an unpleasant smell; fear; anxiety; abdominal discomfort; or other visceral sensations. These phenomena should be differentiated from the premonitory features of syncope. Autonomic manifestations of seizures (autonomic epilepsy) may provide a more difficult diagnostic challenge. Autonomic seizures have cardiovascular, gastrointestinal, pulmonary, urogenital, pupillary, and cutaneous manifestations that are similar to the premonitory features of syncope. Furthermore, the cardiovascular manifestations of autonomic epilepsy include clinically significant tachycardias and bradycardias that may be of sufficient magnitude to cause loss of consciousness. The presence of accompanying non-autonomic auras may help differentiate these episodes from syncope. Loss of consciousness associated with a seizure usually lasts longer than 5 min and is associated with prolonged postictal drowsiness and disorientation, whereas reorientation occurs almost immediately after a syncopal event. Muscle aches may occur after both syncope and seizures, although they tend to last longer and be more severe following a seizure. Seizures, unlike syncope, are rarely provoked by emotions or pain. Incontinence of urine may occur with both seizures and syncope; however, fecal incontinence occurs very rarely with syncope. 
Hypoglycemia may cause transient loss of consciousness, typically in individuals with type 1 or type 2 diabetes treated with insulin. The clinical features associated with impending or actual hypoglycemia include tremor, palpitations, anxiety, diaphoresis, hunger, and paresthesias. These symptoms are due to autonomic activation to counter the falling blood glucose. Hunger, in particular, is not a typical premonitory feature of syncope. Hypoglycemia also impairs neuronal function, leading to fatigue, weakness, dizziness, and cognitive and behavioral symptoms. Diagnostic difficulties may occur in individuals in strict glycemic control; repeated hypoglycemia impairs the counterregulatory response and leads to a loss of the characteristic warning symptoms that are the hallmark of hypoglycemia. Patients with cataplexy experience an abrupt partial or complete loss of muscular tone triggered by strong emotions, typically anger or laughter. Unlike syncope, consciousness is maintained throughout the attacks, which typically last between 30 s and 2 min. There are no premonitory symptoms. Cataplexy occurs in 60–75% of patients with narcolepsy. The clinical interview and interrogation of eyewitnesses usually allow differentiation of syncope from falls due to vestibular dysfunction, cerebellar disease, extrapyramidal system dysfunction, and other gait disorders. If the fall is accompanied by head trauma, a postconcussive syndrome, amnesia for the precipitating events, and/or the presence of loss of consciousness may contribute to diagnostic difficulty. Apparent loss of consciousness can be a manifestation of psychiatric disorders such as generalized anxiety, panic disorders, major depression, and somatization disorder. These possibilities should be considered in individuals who faint frequently without prodromal symptoms. Such patients are rarely injured despite numerous falls. There are no clinically significant hemodynamic changes concurrent with these episodes. 
In contrast, transient loss of consciousness due to vasovagal syncope precipitated by fear, stress, anxiety, and emotional distress is accompanied by hypotension, bradycardia, or both. The goals of the initial evaluation are to determine whether the transient loss of consciousness was due to syncope; to identify the cause; and to assess risk for future episodes and serious harm (Table 27-1). The initial evaluation should include a detailed history, thorough questioning of eyewitnesses, and a complete physical and neurologic examination. Blood pressure and heart rate should be measured in the supine position and after 3 min of standing to determine whether orthostatic hypotension is present. An ECG should be performed if there is suspicion of syncope due to an arrhythmia or underlying cardiac disease. Relevant electrocardiographic abnormalities include bradyarrhythmias or tachyarrhythmias, AV block, ischemia, old myocardial infarction, long QT syndrome, and bundle branch block. This initial assessment will lead to the identification of a cause of syncope in approximately 50% of patients and also allows stratification of patients at risk for cardiac mortality. Laboratory Tests Baseline laboratory blood tests are rarely helpful in identifying the cause of syncope. Blood tests should be performed when specific disorders, e.g., myocardial infarction, anemia, and secondary autonomic failure, are suspected (Table 27-2). Autonomic Nervous System Testing (Chap. 454) Autonomic testing, including tilt-table testing, can be performed in specialized centers. Autonomic testing is helpful to uncover objective evidence of autonomic failure and also to demonstrate a predisposition to neurally mediated syncope. 
Autonomic testing includes assessments of parasympathetic autonomic nervous system function (e.g., heart rate variability to deep respiration and a Valsalva maneuver), sympathetic cholinergic function (e.g., thermoregulatory sweat response and quantitative sudomotor axon reflex test), and sympathetic adrenergic function (e.g., blood pressure response to a Valsalva maneuver and a tilt-table test with beat-to-beat blood pressure measurement). The hemodynamic abnormalities demonstrated on tilt-table test (Figs. 27-3 and 27-4) may be useful in distinguishing orthostatic hypotension due to autonomic failure from the hypotensive bradycardic response of neurally mediated syncope. Similarly, the tilt-table test may help identify patients with syncope due to immediate or delayed orthostatic hypotension. Carotid sinus massage should be considered in patients with symptoms suggestive of carotid sinus syncope and in patients over age 50 years with recurrent syncope of unknown etiology. This test should only be carried out under continuous ECG and blood pressure monitoring and should be avoided in patients with carotid bruits, plaques, or stenosis. Cardiac Evaluation ECG monitoring is indicated for patients with a high pretest probability of arrhythmia causing syncope. Patients should be monitored in hospital if the likelihood of a life-threatening arrhythmia is high, e.g., patients with severe structural or coronary artery disease, nonsustained ventricular tachycardia, trifascicular heart block, prolonged QT interval, Brugada syndrome ECG pattern, or family history of sudden cardiac death (Table 27-1). Outpatient Holter monitoring is recommended for patients who experience frequent syncopal episodes (one or more per week), whereas loop recorders, which continually record and erase cardiac rhythm, are indicated for patients with suspected arrhythmias with low risk of sudden cardiac death. 
Loop recorders may be external (recommended for evaluation of episodes that occur at a frequency of greater than one per month) or implantable (if syncope occurs less frequently). Echocardiography should be performed in patients with a history of cardiac disease or if abnormalities are found on physical examination or the ECG. Echocardiographic diagnoses that may be responsible for syncope include aortic stenosis, hypertrophic cardiomyopathy, cardiac tumors, aortic dissection, and pericardial tamponade. Echocardiography also has a role in risk stratification based on the left ventricular ejection fraction. Treadmill exercise testing with ECG and blood pressure monitoring should be performed in patients who have experienced syncope during or shortly after exercise. Treadmill testing may help identify exercise-induced arrhythmias (e.g., tachycardia-related AV block) and exercise-induced exaggerated vasodilation. Electrophysiologic studies are indicated in patients with structural heart disease and ECG abnormalities in whom noninvasive investigations have failed to yield a diagnosis. Electrophysiologic studies have low sensitivity and specificity and should only be performed when a high pretest probability exists. Currently, this test is rarely performed to evaluate patients with syncope. Psychiatric Evaluation Screening for psychiatric disorders may be appropriate in patients with recurrent unexplained syncope episodes. Tilt-table testing, with demonstration of symptoms in the absence of hemodynamic change, may be useful in reproducing syncope in patients with suspected psychogenic syncope. PART 2 Cardinal Manifestations and Presentation of Diseases Chapter 28 Dizziness and Vertigo Mark F. Walker, Robert B. Daroff Dizziness is an imprecise symptom used to describe a variety of sensations that include vertigo, light-headedness, faintness, and imbalance. When used to describe a sense of spinning or other motion, dizziness is designated as vertigo.
Vertigo may be physiologic, occurring during or after a sustained head rotation, or it may be pathologic, due to vestibular dysfunction. The term light-headedness is commonly applied to presyncopal sensations due to brain hypoperfusion but also may refer to disequilibrium and imbalance. A challenge to diagnosis is that patients often have difficulty distinguishing among these various symptoms, and the words they choose do not reliably indicate the underlying etiology. There are a number of potential causes of dizziness. Vascular disorders cause presyncopal dizziness as a result of cardiac dysrhythmia, orthostatic hypotension, medication effects, or other causes. Such presyncopal sensations vary in duration; they may increase in severity until loss of consciousness occurs, or they may resolve before loss of consciousness if the cerebral ischemia is corrected. Faintness and syncope, which are discussed in detail in Chap. 27, should always be considered when one is evaluating patients with brief episodes of dizziness or dizziness that occurs with upright posture. Vestibular causes of dizziness (vertigo or imbalance) may be due to peripheral lesions that affect the labyrinths or vestibular nerves or to involvement of the central vestibular pathways. They may be paroxysmal or due to a fixed unilateral or bilateral vestibular deficit. Acute unilateral lesions cause vertigo due to a sudden imbalance in vestibular inputs from the two labyrinths. Bilateral lesions cause imbalance and instability of vision (oscillopsia) when the head moves. Other causes of dizziness include nonvestibular imbalance and gait disorders (e.g., loss of proprioception from sensory neuropathy, parkinsonism) and anxiety. When evaluating patients with dizziness, questions to consider include the following: (1) Is it dangerous (e.g., arrhythmia, transient ischemic attack/stroke)? (2) Is it vestibular? (3) If vestibular, is it peripheral or central? 
A careful history and examination often provide sufficient information to answer these questions and determine whether additional studies or referral to a specialist is necessary. APPROACH TO THE PATIENT: When a patient presents with dizziness, the first step is to delineate more precisely the nature of the symptom. In the case of vestibular disorders, the physical symptoms depend on whether the lesion is unilateral or bilateral, and whether it is acute or chronic and progressive. Vertigo, an illusion of self or environmental motion, implies asymmetry of vestibular inputs from the two labyrinths or in their central pathways that is usually acute. Symmetric bilateral vestibular hypofunction causes imbalance but no vertigo. Because of the ambiguity in patients’ descriptions of their symptoms, diagnosis based simply on symptom characteristics is typically unreliable. The history should focus closely on other features, including whether this is the first attack, the duration of this and any prior episodes, provoking factors, and accompanying symptoms. Dizziness can be divided into episodes that last for seconds, minutes, hours, or days. Common causes of brief dizziness (seconds) include benign paroxysmal positional vertigo (BPPV) and orthostatic hypotension, both of which typically are provoked by changes in head and body position. Attacks of vestibular migraine and Ménière’s disease often last hours. When episodes are of intermediate duration (minutes), transient ischemic attacks of the posterior circulation should be considered, although migraine and a number of other causes are also possible. Symptoms that accompany vertigo may be helpful in distinguishing peripheral vestibular lesions from central causes. Unilateral hearing loss and other aural symptoms (ear pain, pressure, fullness) typically point to a peripheral cause. 
Because the auditory pathways quickly become bilateral upon entering the brainstem, central lesions are unlikely to cause unilateral hearing loss, unless the lesion lies near the root entry zone of the auditory nerve. Symptoms such as double vision, numbness, and limb ataxia suggest a brainstem or cerebellar lesion. Because dizziness and imbalance can be a manifestation of a variety of neurologic disorders, the neurologic examination is important in the evaluation of these patients. Particular focus should be given to assessment of eye movements, vestibular function, and hearing. The range of eye movements and whether they are equal in each eye should be observed. Peripheral eye movement disorders (e.g., cranial neuropathies, eye muscle weakness) are usually disconjugate (different in the two eyes). One should check pursuit (the ability to follow a smoothly moving target) and saccades (the ability to look back and forth accurately between two targets). Poor pursuit or inaccurate (dysmetric) saccades usually indicates central pathology, often involving the cerebellum. Finally, one should look for spontaneous nystagmus, an involuntary back-and-forth movement of the eyes. Nystagmus is most often of the jerk type, in which a slow drift (slow phase) in one direction alternates with a rapid saccadic movement (quick phase or fast phase) in the opposite direction that resets the position of the eyes in the orbits. Except in the case of acute vestibulopathy (e.g., vestibular neuritis), if primary position nystagmus is easily seen in the light, it is probably due to a central cause. Two forms of nystagmus that are characteristic of lesions of the cerebellar pathways are vertical nystagmus with downward fast phases (downbeat nystagmus) and horizontal nystagmus that changes direction with gaze (gaze-evoked nystagmus). By contrast, peripheral vestibular lesions typically cause unidirectional horizontal nystagmus.
Use of Frenzel eyeglasses (self-illuminated goggles with convex lenses that blur the patient’s vision but allow the examiner to see the eyes greatly magnified) can aid in the detection of peripheral vestibular nystagmus, because they reduce the patient’s ability to use visual fixation to suppress nystagmus. Table 28-1 outlines key findings that help distinguish peripheral from central causes of vertigo.
Table 28-1 Features of Peripheral and Central Vertigo
• Nystagmus from an acute peripheral lesion is unidirectional, with fast phases beating away from the ear with the lesion. Nystagmus that changes direction with gaze is due to a central lesion.
• Mixed vertical-torsional nystagmus occurs in BPPV, but pure vertical or pure torsional nystagmus is a central sign.
• Nystagmus from a peripheral lesion may be inhibited by visual fixation, whereas central nystagmus is not suppressed.
• Absence of a head impulse sign in a patient with acute prolonged vertigo should suggest a central cause.
• Unilateral hearing loss suggests peripheral vertigo. Findings such as diplopia, dysarthria, and limb ataxia suggest a central disorder.
The most useful bedside test of peripheral vestibular function is the head impulse test, in which the vestibuloocular reflex (VOR) is assessed with small-amplitude (~20 degrees) rapid head rotations. While the patient fixates on a target, the head is rotated to the left or right. If the VOR is deficient, the rotation is followed by a catch-up saccade in the opposite direction (e.g., a leftward saccade after a rightward rotation). The head impulse test can identify both unilateral (catch-up saccades after rotations toward the weak side) and bilateral vestibular hypofunction (catch-up saccades after rotations in both directions). All patients with episodic dizziness, especially if provoked by positional change, should be tested with the Dix-Hallpike maneuver. The patient begins in a sitting position with the head turned
45 degrees; holding the back of the head, the examiner then lowers the patient into a supine position with the head extended backward by about 20 degrees while watching the eyes. Posterior canal BPPV can be diagnosed confidently if transient upbeating-torsional nystagmus is seen. If no nystagmus is observed after 15–20 s, the patient is raised to the sitting position, and the procedure is repeated with the head turned to the other side. Again, Frenzel goggles may improve the sensitivity of the test. Dynamic visual acuity is a functional test that can be useful in assessing vestibular function. Visual acuity is measured with the head still and when the head is rotated back and forth by the examiner (about 1–2 Hz). A drop in visual acuity during head motion of more than one line on a near card or Snellen chart is abnormal and indicates vestibular dysfunction. The choice of ancillary tests should be guided by the history and examination findings. Audiometry should be performed whenever a vestibular disorder is suspected. Unilateral sensorineural hearing loss supports a peripheral disorder (e.g., vestibular schwannoma). Predominantly low-frequency hearing loss is characteristic of Ménière’s disease. Electronystagmography or videonystagmography includes recordings of spontaneous nystagmus (if present) and measurement of positional nystagmus. Caloric testing assesses the responses of the two horizontal semicircular canals. The test battery often includes recording of saccades and pursuit to assess central ocular motor function. Neuroimaging is important if a central vestibular disorder is suspected. In addition, patients with unexplained unilateral hearing loss or vestibular hypofunction should undergo magnetic resonance imaging (MRI) of the internal auditory canals, including administration of gadolinium, to rule out a schwannoma. Treatment of vestibular symptoms should be driven by the underlying diagnosis. 
Simply treating dizziness with vestibular suppressant medications is often not helpful and may make the symptoms worse and prolong recovery. The diagnostic and specific treatment approaches for the most commonly encountered vestibular disorders are discussed below. An acute unilateral vestibular lesion causes constant vertigo, nausea, vomiting, oscillopsia (motion of the visual scene), and imbalance. These symptoms are due to a sudden asymmetry of inputs from the two labyrinths or in their central connections, simulating a continuous rotation of the head. Unlike BPPV, continuous vertigo persists even when the head remains still. When a patient presents with an acute vestibular syndrome, the most important question is whether the lesion is central (e.g., a cerebellar or brainstem infarct or hemorrhage), which may be life-threatening, or peripheral, affecting the vestibular nerve or labyrinth (vestibular neuritis). Attention should be given to any symptoms or signs that point to central dysfunction (diplopia, weakness or numbness, dysarthria). The pattern of spontaneous nystagmus, if present, may be helpful (Table 28-1). If the head impulse test is normal, an acute peripheral vestibular lesion is unlikely. A central lesion cannot always be excluded with certainty based on symptoms and examination alone; thus, older patients with vascular risk factors who present with an acute vestibular syndrome should be evaluated for the possibility of stroke even when there are no specific findings that indicate a central lesion. Most patients with vestibular neuritis recover spontaneously, but glucocorticoids can improve outcome if administered within 3 days of symptom onset. Antiviral medications are of no proven benefit and are not typically given unless there is evidence to suggest herpes zoster oticus (Ramsay Hunt syndrome). 
Vestibular suppressant medications may reduce acute symptoms but should be avoided after the first several days because they may impede central compensation and recovery. Patients should be encouraged to resume a normal level of activity as soon as possible, and directed vestibular rehabilitation therapy may accelerate improvement. Benign Paroxysmal Positional Vertigo BPPV is a common cause of recurrent vertigo. Episodes are brief (<1 min and typically 15–20 s) and are always provoked by changes in head position relative to gravity, such as lying down, rolling over in bed, rising from a supine position, and extending the head to look upward. The attacks are caused by free-floating otoconia (calcium carbonate crystals) that have been dislodged from the utricular macula and have moved into one of the semicircular canals, usually the posterior canal. When head position changes, gravity causes the otoconia to move within the canal, producing vertigo and nystagmus. With posterior canal BPPV, the nystagmus beats upward and torsionally (the upper poles of the eyes beat toward the affected lower ear). Less commonly, the otoconia enter the horizontal canal, resulting in a horizontal nystagmus when the patient is lying with either ear down. Superior (also called anterior) canal involvement is rare. BPPV is treated with repositioning maneuvers that use gravity to remove the otoconia from the semicircular canal. For posterior canal BPPV, the Epley maneuver (Fig. 28-1) is the most commonly used procedure. For more refractory cases of BPPV, patients can be taught a variant of this maneuver that they can perform alone at home. A demonstration of the Epley maneuver is available online (http://www.dizziness-and-balance.com/disorders/bppv/bppv.html). Vestibular Migraine Vestibular symptoms occur frequently in migraineurs, sometimes as a headache aura but usually independent of headache.
The duration of vertigo may be from minutes to hours, and some patients also experience more prolonged periods of disequilibrium (lasting days to weeks). Motion sensitivity and sensitivity to visual motion (e.g., movies) are common in patients with vestibular migraine. Although data from controlled studies are generally lacking, vestibular migraine typically is treated with medications that are used for prophylaxis of migraine headaches. Antiemetics may be helpful to relieve symptoms at the time of an attack. Ménière’s Disease Attacks of Ménière’s disease consist of vertigo and hearing loss, as well as pain, pressure, and/or fullness in the affected ear. The low-frequency hearing loss and aural symptoms are key features that distinguish Ménière’s disease from other peripheral vestibulopathies and from vestibular migraine. Audiometry at the time of an attack shows a characteristic asymmetric low-frequency hearing loss; hearing commonly improves between attacks, although permanent hearing loss may eventually occur. Ménière’s disease is thought to be due to excess fluid (endolymph) in the inner ear; hence the term endolymphatic hydrops. Patients suspected of having Ménière’s disease should be referred to an otolaryngologist for further evaluation. Diuretics and sodium restriction are the initial treatments. If attacks persist, injections of gentamicin into the middle ear are typically the next line of therapy. Full ablative procedures (vestibular nerve section, labyrinthectomy) are seldom required. Vestibular Schwannoma Vestibular schwannomas (sometimes termed acoustic neuromas) and other tumors at the cerebellopontine angle cause slowly progressive unilateral sensorineural hearing loss and vestibular hypofunction. These patients typically do not have vertigo, because the gradual vestibular deficit is compensated centrally as it develops. The diagnosis often is not made until there is sufficient hearing loss to be noticed.
The examination will show a deficient response to the head impulse test when the head is rotated toward the affected side. As noted above, patients with unexplained unilateral sensorineural hearing loss or vestibular hypofunction require MRI of the internal auditory canals to look for a schwannoma.
Figure 28-1 Modified Epley maneuver for treatment of benign paroxysmal positional vertigo of the right (top panels) and left (bottom panels) posterior semicircular canals. Step 1. With the patient seated, turn the head 45 degrees toward the affected ear. Step 2. Keeping the head turned, lower the patient to the head-hanging position and hold for at least 30 s and until nystagmus disappears. Step 3. Without lifting the head, turn it 90 degrees toward the other side. Hold for another 30 s. Step 4. Rotate the patient onto her side while turning the head another 90 degrees, so that the nose is pointed down 45 degrees. Hold again for 30 s. Step 5. Have the patient sit up on the side of the table. After a brief rest, the maneuver should be repeated to confirm successful treatment. (Figure adapted from http://www.dizziness-and-balance.com/disorders/bppv/movies/Epley-480x640.avi.)
Patients with bilateral loss of vestibular function also typically do not have vertigo, because vestibular function is lost on both sides simultaneously, and there is no asymmetry of vestibular input. Symptoms include loss of balance, particularly in the dark, where vestibular input is most critical, and oscillopsia during head movement, such as while walking or riding in a car. Bilateral vestibular hypofunction may be (1) idiopathic and progressive, (2) part of a neurodegenerative disorder, or (3) iatrogenic, due to medication ototoxicity (most commonly gentamicin or other aminoglycoside antibiotics). Other causes include bilateral vestibular schwannomas (neurofibromatosis type 2), autoimmune disease, superficial siderosis, and meningeal-based infection or tumor.
It also may occur in patients with peripheral polyneuropathy; in these patients, both vestibular loss and impaired proprioception may contribute to poor balance. Finally, unilateral processes such as vestibular neuritis and Ménière’s disease may involve both ears sequentially, resulting in bilateral vestibulopathy. Examination findings include diminished dynamic visual acuity (see above) due to loss of stable vision when the head is moving, abnormal head impulse responses in both directions, and a Romberg sign. Responses to caloric testing are reduced. Patients with bilateral vestibular hypofunction should be referred for vestibular rehabilitation therapy. Vestibular suppressant medications should not be used, as they will increase the imbalance. Evaluation by a neurologist is important not only to confirm the diagnosis but also to consider any other associated neurologic abnormalities that may clarify the etiology. Central lesions causing vertigo typically involve vestibular pathways in the brainstem and/or cerebellum. They may be due to discrete lesions, such as from ischemic or hemorrhagic stroke (Chap. 446), demyelination (Chap. 458), or tumors (Chap. 118), or they may be due to neurodegenerative conditions that include the vestibulocerebellum (Chap. 448). Subacute cerebellar degeneration may be due to immune, including paraneoplastic, processes (Chaps. 122 and 450). Table 28-1 outlines important features of the history and examination that help to identify central vestibular disorders. Acute central vertigo is a medical emergency, due to the possibility of life-threatening stroke or hemorrhage. All patients with suspected central vestibular disorders should undergo brain MRI, and the patient should be referred for full neurologic evaluation. Psychological factors play an important role in chronic dizziness. First, dizziness may be a somatic manifestation of a psychiatric condition such as major depression, anxiety, or panic disorder (Chap. 465e). 
Second, patients may develop anxiety and autonomic symptoms as a consequence or comorbidity of an independent vestibular disorder. One particular form of this has been termed variously phobic postural vertigo, psychophysiologic vertigo, or chronic subjective dizziness. These patients have a chronic feeling (months or longer) of dizziness and disequilibrium, an increased sensitivity to self-motion and visual motion (e.g., movies), and a particular intensification of symptoms when moving through complex visual environments such as supermarkets (visual vertigo). Although there may be a past history of an acute vestibular disorder (e.g., vestibular neuritis), the neurootologic examination and vestibular testing are normal or indicative of a compensated vestibular deficit, indicating that the ongoing subjective dizziness cannot be explained by a primary vestibular disorder. Anxiety disorders are particularly common in patients with chronic dizziness and contribute substantially to the morbidity. Thus, treatment with antianxiety medications (selective serotonin reuptake inhibitors [SSRIs]) and cognitive-behavioral therapy may be helpful. Vestibular rehabilitation therapy is also sometimes beneficial. Vestibular suppressant medications generally should be avoided. This condition should be suspected when the patient states, “My dizziness is so bad, I’m afraid to leave my house” (agoraphobia).

Table 28-2 provides a list of commonly used medications for suppression of vertigo. As noted, these medications should be reserved for short-term control of active vertigo, such as during the first few days of acute vestibular neuritis, or for acute attacks of Ménière’s disease. They are less helpful for chronic dizziness and, as previously stated, may hinder central compensation. An exception is that benzodiazepines may attenuate psychosomatic dizziness and the associated anxiety, although SSRIs are generally preferable in such patients.
TABLE 28-2 Medications for Suppression of Vertigo (partial)a,b
Benzodiazepines: Diazepam, 2.5 mg 1–3 times daily; Clonazepam, 0.25 mg 1–3 times daily
[Agent and initial doses not legible] tapering regimen: … daily days 4–6; 60 mg daily days 7–9; 40 mg daily days 10–12; 20 mg daily days 13–15; 10 mg daily days 16–18, 20, 22
Selective serotonin reuptake inhibitorsh

aAll listed drugs are approved by the U.S. Food and Drug Administration, but most are not approved for the treatment of vertigo. bUsual oral (unless otherwise stated) starting dose in adults; a higher maintenance dose can be reached by a gradual increase. cFor motion sickness only. dFor benign paroxysmal positional vertigo. eFor Ménière’s disease. fFor vestibular migraine. gFor acute vestibular neuritis (started within 3 days of onset). hFor psychosomatic vertigo.

Vestibular rehabilitation therapy promotes central adaptation processes that compensate for vestibular loss and also may help habituate motion sensitivity and other symptoms of psychosomatic dizziness. The general approach is to use a graded series of exercises that progressively challenge gaze stabilization and balance.

Fatigue
Jeffrey M. Gelfand, Vanja C. Douglas

Fatigue is one of the most common symptoms in clinical medicine. It is a prominent manifestation of a number of systemic, neurologic, and psychiatric syndromes, although a precise cause will not be identified in a substantial minority of patients. Fatigue refers to an inherently subjective human experience of physical and mental weariness, sluggishness, and exhaustion. In the context of clinical medicine, fatigue is most typically and practically defined as difficulty initiating or maintaining voluntary mental or physical activity. Nearly everyone who has ever been ill with a self-limited infection has experienced this near-universal symptomatology, and fatigue is usually brought to medical attention only when it is either of unclear cause or the severity is out of proportion with what would be expected for the associated trigger.
Fatigue should be distinguished from muscle weakness, a reduction of neuromuscular power (Chap. 30); most patients complaining of fatigue are not truly weak when direct muscle power is tested. By definition, fatigue is also distinct from somnolence and dyspnea on exertion, although patients may use the word fatigue to describe those two symptoms. The task facing clinicians when a patient presents with fatigue is to identify an underlying cause if one exists and to develop a therapeutic alliance, the goal of which is to spare patients expensive and fruitless diagnostic workups and steer them toward effective therapy.

Variability in the definitions of fatigue and the survey instruments used in different studies makes it difficult to arrive at precise figures about the global burden of fatigue. The point prevalence of fatigue was 6.7% and the lifetime prevalence was 25% in a large National Institute of Mental Health survey of the U.S. general population. In primary care clinics in Europe and the United States, between 10 and 25% of patients surveyed endorsed symptoms of prolonged (present for >1 month) or chronic (present for >6 months) fatigue, but fatigue was the primary reason for seeking medical attention in only a minority of patients. In a community survey of women in India, 12% reported chronic fatigue. By contrast, the prevalence of chronic fatigue syndrome, as defined by the U.S. Centers for Disease Control and Prevention, is low (Chap. 464e).

DIFFERENTIAL DIAGNOSIS
Psychiatric Disease Fatigue is a common somatic manifestation of many major psychiatric syndromes, including depression, anxiety, and somatoform disorders. Psychiatric symptoms are reported in more than three-quarters of patients with unexplained chronic fatigue. Even in patients with systemic or neurologic syndromes in which fatigue is independently recognized as a manifestation of disease, comorbid psychiatric symptoms or disease may still be an important source of interaction.
Neurologic Disease Patients complaining of fatigue often say they feel weak, but upon careful examination, objective muscle weakness is rarely discernible. If found, muscle weakness must then be localized to the central nervous system, peripheral nervous system, neuromuscular junction, or muscle and the appropriate follow-up studies obtained (Chap. 30). Fatigability of muscle power is a cardinal manifestation of some neuromuscular disorders such as myasthenia gravis and can be distinguished from fatigue by finding clinically apparent diminution of the amount of force that a muscle generates upon repeated contraction (Chap. 461). Fatigue is one of the most common and bothersome symptoms reported in multiple sclerosis (MS) (Chap. 458), affecting nearly 90% of patients; fatigue in MS can persist between MS attacks and does not necessarily correlate with magnetic resonance imaging (MRI) disease activity. Fatigue is also increasingly identified as a troublesome feature of many other neurodegenerative diseases, including Parkinson’s disease, central dysautonomias, and amyotrophic lateral sclerosis. Poststroke fatigue is a well-described but poorly understood entity with a widely varying prevalence. Episodic fatigue can be a premonitory symptom of migraine. Fatigue is also a frequent result of traumatic brain injury, often occurring in association with depression and sleep disorders.

Sleep Disorders Obstructive sleep apnea is an important cause of excessive daytime sleepiness in association with fatigue and should be investigated using overnight polysomnography, particularly in those with prominent snoring, obesity, or other predictors of obstructive sleep apnea (Chap. 319). Whether the cumulative sleep deprivation that is common in modern society contributes to clinically apparent fatigue is not known (Chap. 38).
PART 2 Cardinal Manifestations and Presentation of Diseases

Endocrine Disorders Fatigue, sometimes in association with true muscle weakness, can be a heralding symptom of hypothyroidism, particularly in the context of hair loss, dry skin, cold intolerance, constipation, and weight gain. Fatigue in association with heat intolerance, sweating, and palpitations is typical of hyperthyroidism. Adrenal insufficiency can also manifest with unexplained fatigue as a primary or prominent symptom, often in association with anorexia, weight loss, nausea, myalgias, and arthralgias; hyponatremia and hyperkalemia may be present at time of diagnosis. Mild hypercalcemia can cause fatigue, which may be relatively vague, whereas severe hypercalcemia can lead to lethargy, stupor, and coma. Both hypoglycemia and hyperglycemia can cause lethargy, often in association with confusion; chronic diabetes, particularly type 1 diabetes, is also associated with fatigue independent of glucose levels. Fatigue may also accompany Cushing’s disease, hypoaldosteronism, and hypogonadism.

Liver and Kidney Disease Both chronic liver failure and chronic kidney disease can cause fatigue. Over 80% of hemodialysis patients complain of fatigue, which makes this one of the most common patient-reported symptoms in chronic kidney disease.

Obesity Obesity is associated with fatigue and sleepiness independent of the presence of obstructive sleep apnea. Obese patients undergoing bariatric surgery experience improvement in daytime sleepiness sooner than would be expected if the improvement were solely the result of weight loss and resolution of sleep apnea. A number of other factors common in obese patients are likely contributors as well, including depression, physical inactivity, and diabetes.

Malnutrition Although fatigue can be a presenting feature of malnutrition, nutritional status may also be an important comorbidity and contributor to fatigue in other chronic illnesses, including cancer-associated fatigue.
Infection Both acute and chronic infections commonly lead to fatigue as part of the broader infectious syndrome. Evaluation for undiagnosed infection as the cause of unexplained fatigue, and particularly prolonged or chronic fatigue, should be guided by the history, physical examination, and infectious risk factors, with particular attention to risk for tuberculosis, HIV, chronic hepatitis B and C, and endocarditis. Infectious mononucleosis may cause prolonged fatigue that persists for weeks to months following the acute illness, but infection with the Epstein-Barr virus is only very rarely the cause of unexplained chronic fatigue.

Drugs Many medications, drug use, drug withdrawal, and chronic alcohol use can all lead to fatigue. Medications that are more likely to be causative in this context include antidepressants, antipsychotics, anxiolytics, opiates, antispasticity agents, antiseizure agents, and beta blockers.

Cardiovascular and Pulmonary Fatigue is one of the most taxing patient-reported symptoms of congestive heart failure and chronic obstructive pulmonary disease and negatively affects quality of life.

Malignancy Fatigue, particularly in association with unexplained unintended weight loss, can be a sign of occult malignancy, but this is only rarely identified as causative in patients with unexplained chronic fatigue in the absence of other telltale signs or symptoms. Cancer-related fatigue is experienced by 40% of patients at time of diagnosis and greater than 80% of patients later in the disease course.

Hematologic Chronic or progressive anemia may present with fatigue, sometimes in association with exertional tachycardia and breathlessness. Anemia may also contribute to fatigue in chronic illness. Low serum ferritin in the absence of anemia may also cause fatigue that is reversible with iron replacement.
Systemic Inflammatory/Rheumatologic Disorders Fatigue is a prominent complaint in many chronic inflammatory disorders, including systemic lupus erythematosus, polymyalgia rheumatica, rheumatoid arthritis, inflammatory bowel disease, antineutrophil cytoplasmic antibody (ANCA)–associated vasculitis, sarcoidosis, and Sjögren’s syndrome, but is not usually an isolated symptom.

Pregnancy Fatigue is very commonly reported by women during all stages of pregnancy and postpartum.

Disorders of Unclear Cause Chronic fatigue syndrome (Chap. 464e) and fibromyalgia (Chap. 396) incorporate chronic fatigue as part of the syndromic definition when present in association with a number of other inclusion and exclusion criteria, as discussed in detail in their respective chapters. The pathophysiology of each is unknown. Idiopathic chronic fatigue is used to describe the syndrome of unexplained chronic fatigue in the absence of enough additional clinical features to meet the diagnostic criteria for chronic fatigue syndrome.

APPROACH TO THE PATIENT: Fatigue
A detailed history focusing on the quality, pattern, time-course, associated symptoms, and alleviating factors of fatigue is critical in defining the syndrome, determining whether fatigue is the appropriate designation, determining whether the symptoms are acute or chronic, and determining whether fatigue is primarily mental, physical, or both in order to direct further evaluation and treatment. The review of systems should attempt to distinguish fatigue from excessive daytime sleepiness, dyspnea on exertion, exercise intolerance, and muscle weakness. The presence of fever, chills, night sweats, or weight loss should raise suspicion for an occult infection or malignancy. A careful review of prescription, over-the-counter, herbal, and recreational drug and alcohol use is mandatory. Circumstances surrounding the onset of symptoms and potential triggers should be investigated.
The social history is important, with attention paid to job stress and work hours, the social support network, and domestic affairs including a screen for intimate partner violence. Sleep habits and sleep hygiene should be questioned. The impact of fatigue on daily functioning is important to understand the patient’s experience and gauge recovery and the success of treatment.

The physical examination of patients with fatigue is guided by the history and differential diagnosis. A detailed mental status examination should be performed with particular attention to symptoms of depression and anxiety. A formal neurologic examination is required to determine whether objective muscle weakness is present. This is usually a straightforward exercise, although occasionally patients with fatigue have difficulty sustaining effort against resistance and sometimes report that generating full power requires substantial mental effort. On confrontational testing, they are able to generate full power for only a brief period before suddenly giving way to the examiner. This type of weakness is often referred to as breakaway weakness and may or may not be associated with pain. This is contrasted with weakness due to lesions in the motor tracts or lower motor unit, in which the patient’s resistance can be overcome in a smooth and steady fashion and full power can never be generated. Occasionally, a patient may demonstrate fatigable weakness, in which power is full when first tested but becomes weak upon repeat evaluation without interval rest. Fatigable weakness, which usually indicates a problem of neuromuscular transmission, never has the sudden breakaway quality that one occasionally observes in patients with fatigue. If the presence or absence of muscle weakness cannot be determined with the physical examination, electromyography with nerve conduction studies can be a helpful ancillary test.
The general physical examination should screen for signs of cardiopulmonary disease, malignancy, lymphadenopathy, organomegaly, infection, liver failure, kidney disease, malnutrition, endocrine abnormalities, and connective tissue disease. Although the diagnostic yield of the general physical examination may be relatively low in the context of evaluation of unexplained chronic fatigue, elucidating the cause of 2% of cases in one prospective analysis, the yield of a detailed neuropsychiatric and mental status evaluation is likely to be much higher, revealing a potential explanation for fatigue in up to 75–80% of patients in some series. Furthermore, the rite of physical examination demonstrates a thorough and systematic approach to the patient’s complaint and helps build trust and a therapeutic alliance.

Laboratory testing is likely to identify the cause of chronic fatigue in only about 5% of cases. Beyond a few standard screening tests, laboratory evaluation should be guided by the history and physical examination; extensive testing is more likely to lead to false-positive results that require explanation and unnecessary investigation and should be avoided in favor of frequent clinical follow-up. A reasonable approach to screening includes a complete blood count with differential (to screen for anemia, infection, and malignancy), electrolytes (including sodium, potassium, and calcium), glucose, renal function, liver function, and thyroid function. Testing for HIV and adrenal function can also be considered. Published guidelines defining chronic fatigue syndrome also recommend an erythrocyte sedimentation rate (ESR) as part of the evaluation for mimics, but unless the value is very high, such nonspecific testing in the absence of other features is unlikely to clarify the situation. Routine screening with an antinuclear antibody (ANA) test is also unlikely to be informative in isolation and is frequently positive at low titers in otherwise healthy adults.
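The screening approach described above amounts to a short fixed panel plus two tests that "can also be considered." As a purely illustrative sketch (the structure and names below are hypothetical and carry no clinical authority), it can be written down as a lookup table:

```python
# Illustrative sketch only: the text's suggested first-line screening labs
# for chronic fatigue, mapped to what each test screens for.
SCREENING_PANEL = {
    "complete blood count with differential": "anemia, infection, malignancy",
    "electrolytes (sodium, potassium, calcium)": "electrolyte and calcium disorders",
    "glucose": "hypoglycemia and hyperglycemia",
    "renal function": "chronic kidney disease",
    "liver function": "chronic liver disease",
    "thyroid function": "hypothyroidism and hyperthyroidism",
}

# Tests the text says "can also be considered," kept separate from the core panel.
OPTIONAL_TESTS = ["HIV", "adrenal function"]


def order_set(include_optional: bool = False) -> list[str]:
    """Return the core test names, optionally with the discretionary tests."""
    tests = list(SCREENING_PANEL)
    return tests + OPTIONAL_TESTS if include_optional else tests
```

Separating the core panel from the optional additions mirrors the text's point that testing beyond the standard screen should be driven by the history and examination rather than ordered reflexively.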
Additional unfocused studies, such as whole-body imaging scans, are usually not indicated; in addition to their inconvenience, potential risk, and cost, they often reveal unrelated incidental findings that can prolong the workup unnecessarily.

The first priority of treatment is to address the underlying disorder or disorders that account for fatigue, because this can be curative in select contexts and palliative in others. Unfortunately, in many chronic illnesses, fatigue may be refractory to traditional disease-modifying therapies, and it is important in such cases to evaluate for other potential contributors, because the cause may be multifactorial. Antidepressant treatment (Chap. 466) may be helpful for treatment of chronic fatigue when symptoms of depression are present and may be most effective in the context of a multimodal approach. However, antidepressants can also cause fatigue and should be discontinued if they are not clearly effective. Cognitive-behavioral therapy has also been demonstrated to be helpful in the context of chronic fatigue syndrome as well as cancer-associated fatigue. Graded exercise therapy, in which physical exercise, most typically walking, is gradually increased with attention to target heart rates to avoid overexertion, was shown to modestly improve walking times and self-reported fatigue measures in patients in the United Kingdom with chronic fatigue syndrome in the large 2011 randomized controlled PACE trial. Psychostimulants such as amphetamines, modafinil, and armodafinil can help increase alertness and concentration and reduce excessive daytime sleepiness in certain clinical contexts, which may in turn help with symptoms of fatigue in a minority of patients, but they have generally proven to be unhelpful in randomized trials for treating fatigue in posttraumatic brain injury, Parkinson’s disease, and MS. Development of more effective therapy for fatigue is hampered by limited knowledge of the biologic basis of this symptom.
Tentative data suggest that proinflammatory cytokines, such as interleukin 1β and tumor necrosis factor α, might mediate fatigue in some patients; thus, cytokine antagonists represent one possible future approach.

Acute fatigue significant enough to require medical evaluation is more likely to lead to an identifiable medical, neurologic, or psychiatric cause than unexplained chronic fatigue. Evaluation of unexplained chronic fatigue most commonly leads to diagnosis of a psychiatric condition or remains unexplained. Identification of a previously undiagnosed serious or life-threatening culprit etiology is rare on longitudinal follow-up in patients with unexplained chronic fatigue. Complete resolution of unexplained chronic fatigue is uncommon, at least over the short term, but multidisciplinary treatment approaches can lead to symptomatic improvements that can substantially improve quality of life.

Chapter 30
Neurologic Causes of Weakness and Paralysis
Michael J. Aminoff

Normal motor function involves integrated muscle activity that is modulated by the activity of the cerebral cortex, basal ganglia, cerebellum, red nucleus, brainstem reticular formation, lateral vestibular nucleus, and spinal cord. Motor system dysfunction leads to weakness or paralysis, discussed in this chapter, or to ataxia (Chap. 450) or abnormal movements (Chap. 449). Weakness is a reduction in the power that can be exerted by one or more muscles. It must be distinguished from increased fatigability (i.e., the inability to sustain the performance of an activity that should be normal for a person of the same age, sex, and size), limitation in function due to pain or articular stiffness, or impaired motor activity because severe proprioceptive sensory loss prevents adequate feedback information about the direction and power of movements.
It is also distinct from bradykinesia (in which increased time is required for full power to be exerted) and apraxia, a disorder of planning and initiating a skilled or learned movement unrelated to a significant motor or sensory deficit (Chap. 36). Paralysis or the suffix “-plegia” indicates weakness so severe that a muscle cannot be contracted at all, whereas paresis refers to less severe weakness. The prefix “hemi-” refers to one-half of the body, “para-” to both legs, and “quadri-” to all four limbs.

The distribution of weakness helps to localize the underlying lesion. Weakness from involvement of upper motor neurons occurs particularly in the extensors and abductors of the upper limb and the flexors of the lower limb. Lower motor neuron weakness depends on whether involvement is at the level of the anterior horn cells, nerve root, limb plexus, or peripheral nerve—only muscles supplied by the affected structure are weak. Myopathic weakness is generally most marked in proximal muscles. Weakness from impaired neuromuscular transmission has no specific pattern of involvement. Weakness often is accompanied by other neurologic abnormalities that help indicate the site of the responsible lesion (Table 30-1).

Tone is the resistance of a muscle to passive stretch. Increased tone may be of several types. Spasticity is the increase in tone associated with disease of upper motor neurons. It is velocity-dependent, has a sudden release after reaching a maximum (the “clasp-knife” phenomenon), and predominantly affects the antigravity muscles (i.e., upper-limb flexors and lower-limb extensors). Rigidity is hypertonia that is present throughout the range of motion (a “lead pipe” or “plastic” stiffness) and affects flexors and extensors equally; it sometimes has a cogwheel quality that is enhanced by voluntary movement of the contralateral limb (reinforcement). Rigidity occurs with certain extrapyramidal disorders, such as Parkinson’s disease.
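The distribution patterns just described can be summarized as a small lookup table. This is a study-aid sketch of the text's own generalizations (the `pattern_for` helper is hypothetical), not a diagnostic algorithm:

```python
# Study-aid sketch: typical distribution of weakness by localization,
# as generalized in the text. Not a clinical decision tool.
WEAKNESS_PATTERNS = {
    "upper motor neuron":
        "extensors and abductors of the upper limb; flexors of the lower limb",
    "lower motor neuron":
        "only muscles supplied by the affected anterior horn cells, "
        "nerve root, limb plexus, or peripheral nerve",
    "myopathic":
        "generally most marked in proximal muscles",
    "neuromuscular junction":
        "no specific pattern of involvement",
}


def pattern_for(localization: str) -> str:
    """Look up the typical weakness distribution for a localization label."""
    return WEAKNESS_PATTERNS.get(localization.lower(), "unknown localization")
```

As the text emphasizes, these are only typical patterns; accompanying neurologic abnormalities (Table 30-1) carry much of the localizing weight in practice.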
Paratonia (or gegenhalten) is increased tone that varies irregularly in a manner seemingly related to the degree of relaxation, is present throughout the range of motion, and affects flexors and extensors equally; it usually results from disease of the frontal lobes. Weakness with decreased tone (flaccidity) or normal tone occurs with disorders of motor units. A motor unit consists of a single lower motor neuron and all the muscle fibers that it innervates. Muscle bulk generally is not affected by upper motor neuron lesions, although mild disuse atrophy eventually may occur. By contrast, atrophy is often conspicuous when a lower motor neuron lesion is responsible for weakness and also may occur with advanced muscle disease. Muscle stretch (tendon) reflexes are usually increased with upper motor neuron lesions, but may be decreased or absent for a variable period immediately after onset of an acute lesion. Hyperreflexia is usually—but not invariably—accompanied by loss of cutaneous reflexes (such as superficial abdominals; Chap. 437) and, in particular, by an extensor plantar (Babinski) response. The muscle stretch reflexes are depressed with lower motor neuron lesions directly involving specific reflex arcs. They generally are preserved in patients with myopathic weakness except in advanced stages, when they sometimes are attenuated. In disorders of the neuromuscular junction, reflex responses may be affected by preceding voluntary activity of affected muscles; such activity may lead to enhancement of initially depressed reflexes in Lambert-Eaton myasthenic syndrome and, conversely, to depression of initially normal reflexes in myasthenia gravis (Chap. 461). The distinction of neuropathic (lower motor neuron) from myopathic weakness is sometimes difficult clinically, although distal weakness is likely to be neuropathic, and symmetric proximal weakness myopathic. 
Fasciculations (a visible or palpable twitch within a muscle due to the spontaneous discharge of a motor unit) and early atrophy indicate that weakness is neuropathic.

PATHOGENESIS
Upper Motor Neuron Weakness Lesions of the upper motor neurons or their descending axons to the spinal cord (Fig. 30-1) produce weakness through decreased activation of lower motor neurons. In general, distal muscle groups are affected more severely than proximal ones, and axial movements are spared unless the lesion is severe and bilateral. Spasticity is typical but may not be present acutely. Rapid repetitive movements are slowed and coarse, but normal rhythmicity is maintained. With corticobulbar involvement, weakness occurs in the lower face and tongue; extraocular, upper facial, pharyngeal, and jaw muscles are typically spared. Bilateral corticobulbar lesions produce a pseudobulbar palsy: dysarthria, dysphagia, dysphonia, and emotional lability accompany bilateral facial weakness and a brisk jaw jerk. The electromyogram (EMG) (Chap. 442e) shows that with weakness of the upper motor neuron type, motor units have a diminished maximal discharge frequency.

Lower Motor Neuron Weakness This pattern results from disorders of lower motor neurons in the brainstem motor nuclei and the anterior horn of the spinal cord or from dysfunction of the axons of these neurons as they pass to skeletal muscle (Fig. 30-2). Weakness is due to a decrease in the number of muscle fibers that can be activated through a loss of α motor neurons or disruption of their connections to muscle. Loss of γ motor neurons does not cause weakness but decreases tension on the muscle spindles, which decreases muscle tone and attenuates the stretch reflexes. An absent stretch reflex suggests involvement of spindle afferent fibers. When a motor unit becomes diseased, especially in anterior horn cell diseases, it may discharge spontaneously, producing fasciculations.
When α motor neurons or their axons degenerate, the denervated muscle fibers also may discharge spontaneously. These single muscle fiber discharges, or fibrillation potentials, cannot be seen but can be recorded with EMG. Weakness leads to delayed or reduced recruitment of motor units, with fewer than normal activated at a particular discharge frequency.

FIGURE 30-1 The corticospinal and bulbospinal upper motor neuron pathways. Upper motor neurons have their cell bodies in layer V of the primary motor cortex (the precentral gyrus, or Brodmann’s area 4) and in the premotor and supplemental motor cortex (area 6). The upper motor neurons in the primary motor cortex are somatotopically organized (right side of figure). Axons of the upper motor neurons descend through the subcortical white matter and the posterior limb of the internal capsule. Axons of the pyramidal or corticospinal system descend through the brainstem in the cerebral peduncle of the midbrain, the basis pontis, and the medullary pyramids. At the cervicomedullary junction, most corticospinal axons decussate into the contralateral corticospinal tract of the lateral spinal cord, but 10–30% remain ipsilateral in the anterior spinal cord. Corticospinal neurons synapse on premotor interneurons, but some—especially in the cervical enlargement and those connecting with motor neurons to distal limb muscles—make direct monosynaptic connections with lower motor neurons. They innervate most densely the lower motor neurons of hand muscles and are involved in the execution of learned, fine movements. Corticobulbar neurons are similar to corticospinal neurons but innervate brainstem motor nuclei. Bulbospinal upper motor neurons influence strength and tone but are not part of the pyramidal system. The descending ventromedial bulbospinal pathways originate in the tectum of the midbrain (tectospinal pathway), the vestibular nuclei (vestibulospinal pathway), and the reticular formation (reticulospinal pathway). These pathways influence axial and proximal muscles and are involved in the maintenance of posture and integrated movements of the limbs and trunk. The descending ventrolateral bulbospinal pathways, which originate predominantly in the red nucleus (rubrospinal pathway), facilitate distal limb muscles. The bulbospinal system sometimes is referred to as the extrapyramidal upper motor neuron system. In all figures, nerve cell bodies and axon terminals are shown, respectively, as closed circles and forks.

FIGURE 30-2 Lower motor neurons are divided into α and γ types. The larger α motor neurons are more numerous and innervate the extrafusal muscle fibers of the motor unit. Loss of α motor neurons or disruption of their axons produces lower motor neuron weakness. The smaller, less numerous γ motor neurons innervate the intrafusal muscle fibers of the muscle spindle and contribute to normal tone and stretch reflexes. The α motor neuron receives direct excitatory input from corticomotoneurons and primary muscle spindle afferents. The α and γ motor neurons also receive excitatory input from other descending upper motor neuron pathways, segmental sensory inputs, and interneurons. The α motor neurons receive direct inhibition from Renshaw cell interneurons, and other interneurons indirectly inhibit the α and γ motor neurons. A muscle stretch (tendon) reflex requires the function of all the illustrated structures. A tap on a tendon stretches muscle spindles (which are tonically activated by γ motor neurons) and activates the primary spindle afferent neurons. These neurons stimulate the α motor neurons in the spinal cord, producing a brief muscle contraction, which is the familiar tendon reflex.

Neuromuscular Junction Weakness Disorders of the neuromuscular junctions produce weakness of variable degree and distribution.
The number of muscle fibers that are activated varies over time, depending on the state of rest of the neuromuscular junctions. Strength is influenced by preceding activity of the affected muscle. In myasthenia gravis, for example, sustained or repeated contractions of affected muscle decline in strength despite continuing effort (Chap. 461). Thus, fatigable weakness is suggestive of disorders of the neuromuscular junction, which cause functional loss of muscle fibers due to failure of their activation.

Myopathic Weakness Myopathic weakness is produced by a decrease in the number or contractile force of muscle fibers activated within motor units. With muscular dystrophies, inflammatory myopathies, or myopathies with muscle fiber necrosis, the number of muscle fibers is reduced within many motor units. On EMG, the size of each motor unit action potential is decreased, and motor units must be recruited more rapidly than normal to produce the desired power. Some myopathies produce weakness through loss of contractile force of muscle fibers or through relatively selective involvement of type II (fast) fibers. These myopathies may not affect the size of individual motor unit action potentials and are detected by a discrepancy between the electrical activity and force of a muscle.

Psychogenic Weakness Weakness may occur without a recognizable organic basis. It tends to be variable and inconsistent, with a pattern of distribution that cannot be explained on a neuroanatomic basis. On formal testing, antagonists may contract when the patient is supposedly activating the agonist muscle. The severity of weakness is out of keeping with the patient’s daily activities.

Hemiparesis Hemiparesis results from an upper motor neuron lesion above the midcervical spinal cord; most such lesions are above the foramen magnum. The presence of other neurologic deficits helps localize the lesion. Thus, language disorders, for example, point to a cortical lesion.
Homonymous visual field defects reflect either a cortical or a subcortical hemispheric lesion. A “pure motor” hemiparesis of the face, arm, and leg often is due to a small, discrete lesion in the posterior limb of the internal capsule, cerebral peduncle, or upper pons. Some brainstem lesions produce “crossed paralyses,” consisting of ipsilateral cranial nerve signs and contralateral hemiparesis (Chap. 446). The absence of cranial nerve signs or facial weakness suggests that a hemiparesis is due to a lesion in the high cervical spinal cord, especially if associated with the Brown-Séquard syndrome (Chap. 456). Acute or episodic hemiparesis usually results from focal structural lesions, particularly rapidly expanding lesions, or an inflammatory process. Subacute hemiparesis that evolves over days or weeks may relate to subdural hematoma, infectious or inflammatory disorders (e.g., cerebral abscess, fungal granuloma or meningitis, parasitic infection, multiple sclerosis, sarcoidosis), or primary and metastatic neoplasms. AIDS may present with subacute hemiparesis due to toxoplasmosis or primary central nervous system (CNS) lymphoma. Chronic hemiparesis that evolves over months usually is due to a neoplasm or vascular malformation, a chronic subdural hematoma, or a degenerative disease.

PART 2 Cardinal Manifestations and Presentation of Diseases

Investigation of hemiparesis (Fig. 30-3) of acute origin starts with a computed tomography (CT) scan of the brain and laboratory studies. If the CT is normal, or in subacute or chronic cases of hemiparesis, magnetic resonance imaging (MRI) of the brain and/or cervical spine (including the foramen magnum) is performed, depending on the clinical accompaniments.

Paraparesis Acute paraparesis is caused most commonly by an intraspinal lesion, but its spinal origin may not be recognized initially if the legs are flaccid and areflexic.
Usually, however, there is sensory loss in the legs with an upper level on the trunk, a dissociated sensory loss suggestive of a central cord syndrome (Chap. 456), or hyperreflexia in the legs with normal reflexes in the arms. Imaging the spinal cord (Fig. 30-3) may reveal compressive lesions, infarction (proprioception usually is spared), arteriovenous fistulas or other vascular anomalies, or transverse myelitis (Chap. 456). Diseases of the cerebral hemispheres that produce acute paraparesis include anterior cerebral artery ischemia (shoulder shrug also is affected), superior sagittal sinus or cortical venous thrombosis, and acute hydrocephalus. Paraparesis may result from a cauda equina syndrome, for example, after trauma to the low back, a midline disk herniation, or an intraspinal tumor; although the sphincters are often affected, hip flexion often is spared, as is sensation over the anterolateral thighs. Rarely, paraparesis is caused by a rapidly evolving anterior horn cell disease (such as poliovirus or West Nile virus infection), peripheral neuropathy (such as Guillain-Barré syndrome; Chap. 460), or myopathy (Chap. 462e).

Subacute or chronic spastic paraparesis is caused by upper motor neuron disease. When associated with lower-limb sensory loss and sphincter involvement, a chronic spinal cord disorder should be considered (Chap. 456). If hemispheric signs are present, a parasagittal meningioma or chronic hydrocephalus is likely. The absence of spasticity in a long-standing paraparesis suggests a lower motor neuron or myopathic etiology.

FIGURE 30-3 An algorithm for the initial workup of a patient with weakness. CT, computed tomography; EMG, electromyography; LMN, lower motor neuron; MRI, magnetic resonance imaging; NCS, nerve conduction studies; UMN, upper motor neuron. (*Or signs of myopathy. †If no abnormality detected, consider spinal MRI. ‡If no abnormality detected, consider myelogram or brain MRI.)

Investigations typically begin with spinal MRI, but when upper motor neuron signs are associated with drowsiness, confusion, seizures, or other hemispheric signs, brain MRI should also be performed, sometimes as the initial investigation. Electrophysiologic studies are diagnostically helpful when clinical findings suggest an underlying neuromuscular disorder.

Quadriparesis or Generalized Weakness Generalized weakness may be due to disorders of the CNS or the motor unit. Although the terms often are used interchangeably, quadriparesis is commonly used when an upper motor neuron cause is suspected, and generalized weakness is used when a disease of the motor units is likely. Weakness from CNS disorders usually is associated with changes in consciousness or cognition and accompanied by spasticity, hyperreflexia, and sensory disturbances. Most neuromuscular causes of generalized weakness are associated with normal mental function, hypotonia, and hypoactive muscle stretch reflexes. The major causes of intermittent weakness are listed in Table 30-2. A patient with generalized fatigability without objective weakness may have the chronic fatigue syndrome (Chap. 464e).

ACUTE QUADRIPARESIS Quadriparesis with onset over minutes may result from disorders of upper motor neurons (such as from anoxia, hypotension, brainstem or cervical cord ischemia, trauma, and systemic metabolic abnormalities) or muscle (electrolyte disturbances, certain inborn errors of muscle energy metabolism, toxins, and periodic paralyses).
Onset over hours to weeks may, in addition to these disorders, be due to lower motor neuron disorders such as Guillain-Barré syndrome (Chap. 460). In obtunded patients, evaluation begins with a CT scan of the brain. If upper motor neuron signs are present but the patient is alert, the initial test is usually an MRI of the cervical cord. If weakness is lower motor neuron, myopathic, or uncertain in origin, the clinical approach begins with blood studies to determine the level of muscle enzymes and electrolytes and with EMG and nerve conduction studies.

SUBACUTE OR CHRONIC QUADRIPARESIS Quadriparesis due to upper motor neuron disease may develop over weeks to years from chronic myelopathies, multiple sclerosis, brain or spinal tumors, chronic subdural hematomas, and various metabolic, toxic, and infectious disorders. It may also result from lower motor neuron disease, a chronic neuropathy (in which weakness is often most profound distally), or myopathic weakness (typically proximal). When quadriparesis develops acutely in obtunded patients, evaluation begins with a CT scan of the brain. If upper motor neuron signs have developed acutely but the patient is alert, the initial test is usually an MRI of the cervical cord. When onset has been gradual, disorders of the cerebral hemispheres, brainstem, and cervical spinal cord can usually be distinguished clinically, and imaging is directed first at the clinically suspected site of pathology.

TABLE 30-2 CAUSES OF EPISODIC GENERALIZED WEAKNESS
1. Electrolyte disturbances, e.g., hypokalemia, hyperkalemia, hypercalcemia, hypernatremia, hyponatremia, hypophosphatemia, hypermagnesemia
2. Metabolic defects of muscle (impaired carbohydrate or fatty acid utilization; abnormal mitochondrial function)
3. Neuromuscular junction disorders
4. Central nervous system disorders, e.g., transient ischemic attacks of the brainstem
5. Lack of voluntary effort
If weakness is lower motor neuron, myopathic, or uncertain in origin, laboratory studies to determine the levels of muscle enzymes and electrolytes, and EMG and nerve conduction studies help to localize the pathologic process.

Monoparesis Monoparesis usually is due to lower motor neuron disease, with or without associated sensory involvement. Upper motor neuron weakness occasionally presents as a monoparesis of distal and nonantigravity muscles. Myopathic weakness rarely is limited to one limb.

ACUTE MONOPARESIS If weakness is predominantly distal and of upper motor neuron type and is not associated with sensory impairment or pain, focal cortical ischemia is likely (Chap. 446); diagnostic possibilities are similar to those for acute hemiparesis. Sensory loss and pain usually accompany acute lower motor neuron weakness; the weakness commonly localizes to a single nerve root or peripheral nerve, but occasionally reflects plexus involvement. If lower motor neuron weakness is likely, evaluation begins with EMG and nerve conduction studies.

SUBACUTE OR CHRONIC MONOPARESIS Weakness and atrophy that develop over weeks or months are usually of lower motor neuron origin. When associated with sensory symptoms, a peripheral cause (nerve, root, or plexus) is likely; otherwise, anterior horn cell disease should be considered. In either case, an electrodiagnostic study is indicated. If weakness is of the upper motor neuron type, a discrete cortical (precentral gyrus) or cord lesion may be responsible, and appropriate imaging is performed.

Distal Weakness Involvement of two or more limbs distally suggests lower motor neuron or peripheral nerve disease. Acute distal lower-limb weakness results occasionally from an acute toxic polyneuropathy or cauda equina syndrome. Distal symmetric weakness usually develops over weeks, months, or years and, when associated with numbness, is due to peripheral neuropathy (Chap. 459).
Anterior horn cell disease may begin distally but is typically asymmetric and without accompanying numbness (Chap. 452). Rarely, myopathies present with distal weakness (Chap. 462e). Electrodiagnostic studies help localize the disorder (Fig. 30-3).

Proximal Weakness Myopathy often produces symmetric weakness of the pelvic or shoulder girdle muscles (Chap. 462e). Diseases of the neuromuscular junction, such as myasthenia gravis (Chap. 461), may present with symmetric proximal weakness often associated with ptosis, diplopia, or bulbar weakness and fluctuating in severity during the day. In anterior horn cell disease, proximal weakness is usually asymmetric, but it may be symmetric if familial. Numbness does not occur with any of these diseases. The evaluation usually begins with determination of the serum creatine kinase level and electrophysiologic studies.

Weakness in a Restricted Distribution Weakness may not fit any of these patterns, being limited, for example, to the extraocular, hemifacial, bulbar, or respiratory muscles. If it is unilateral, restricted weakness usually is due to lower motor neuron or peripheral nerve disease, such as in a facial palsy. Weakness of part of a limb is commonly due to a peripheral nerve lesion such as an entrapment neuropathy. Relatively symmetric weakness of extraocular or bulbar muscles frequently is due to a myopathy (Chap. 462e) or neuromuscular junction disorder (Chap. 461). Bilateral facial palsy with areflexia suggests Guillain-Barré syndrome (Chap. 460). Worsening of relatively symmetric weakness with fatigue is characteristic of neuromuscular junction disorders. Asymmetric bulbar weakness usually is due to motor neuron disease. Weakness limited to respiratory muscles is uncommon and usually is due to motor neuron disease, myasthenia gravis, or polymyositis/dermatomyositis (Chap. 388).

CHAPTER 30 Neurologic Causes of Weakness and Paralysis

Chapter 31 Numbness, Tingling, and Sensory Loss
Michael J. Aminoff

Normal somatic sensation reflects a continuous monitoring process, little of which reaches consciousness under ordinary conditions. By contrast, disordered sensation, particularly when experienced as painful, is alarming and dominates the patient’s attention. Physicians should be able to recognize abnormal sensations by how they are described, know their type and likely site of origin, and understand their implications. Pain is considered separately in Chap. 18.

Abnormal sensory symptoms can be divided into two categories: positive and negative. The prototypical positive symptom is tingling (pins and needles); other positive sensory phenomena include itch and altered sensations that are described as pricking, bandlike, lightning-like shooting feelings (lancinations), aching, knifelike, twisting, drawing, pulling, tightening, burning, searing, electrical, or raw feelings. Such symptoms are often painful. Positive phenomena usually result from trains of impulses generated at sites of lowered threshold or heightened excitability along a peripheral or central sensory pathway. The nature and severity of the abnormal sensation depend on the number, rate, timing, and distribution of ectopic impulses and the type and function of nervous tissue in which they arise. Because positive phenomena represent excessive activity in sensory pathways, they are not necessarily associated with a sensory deficit (loss) on examination. Negative phenomena represent loss of sensory function and are characterized by diminished or absent feeling that often is experienced as numbness and by abnormal findings on sensory examination. In disorders affecting peripheral sensation, at least one-half the afferent axons innervating a particular site are probably lost or functionless before a sensory deficit can be demonstrated by clinical examination.
If the rate of loss is slow, however, lack of cutaneous feeling may be unnoticed by the patient and difficult to demonstrate on examination, even though few sensory fibers are functioning; if it is rapid, both positive and negative phenomena are usually conspicuous. Subclinical degrees of sensory dysfunction may be revealed by sensory nerve conduction studies or somatosensory evoked potentials (Chap. 442e). Whereas sensory symptoms may be either positive or negative, sensory signs on examination are always a measure of negative phenomena.

Paresthesias and dysesthesias are general terms used to denote positive sensory symptoms. The term paresthesias typically refers to tingling or pins-and-needles sensations but may include a wide variety of other abnormal sensations, except pain; it sometimes implies that the abnormal sensations are perceived spontaneously. The more general term dysesthesias denotes all types of abnormal sensations, including painful ones, regardless of whether a stimulus is evident.

Another set of terms refers to sensory abnormalities found on examination. Hypesthesia or hypoesthesia refers to a reduction of cutaneous sensation to a specific type of testing such as pressure, light touch, and warm or cold stimuli; anesthesia, to a complete absence of skin sensation to the same stimuli plus pinprick; and hypalgesia or analgesia, to reduced or absent pain perception (nociception). Hyperesthesia means pain or increased sensitivity in response to touch. Similarly, allodynia describes the situation in which a nonpainful stimulus, once perceived, is experienced as painful, even excruciating. An example is elicitation of a painful sensation by application of a vibrating tuning fork. Hyperalgesia denotes severe pain in response to a mildly noxious stimulus, and hyperpathia, a broad term, encompasses all the phenomena described by hyperesthesia, allodynia, and hyperalgesia.
With hyperpathia, the threshold for a sensory stimulus is increased and perception is delayed, but once felt, it is unduly painful.

Disorders of deep sensation arising from muscle spindles, tendons, and joints affect proprioception (position sense). Manifestations include imbalance (particularly with eyes closed or in the dark), clumsiness of precision movements, and unsteadiness of gait, which are referred to collectively as sensory ataxia. Other findings on examination usually, but not invariably, include reduced or absent joint position and vibratory sensibility and absent deep tendon reflexes in the affected limbs. The Romberg sign is positive, which means that the patient sways markedly or topples when asked to stand with feet close together and eyes closed. In severe states of deafferentation involving deep sensation, the patient cannot walk or stand unaided or even sit unsupported. Continuous involuntary movements (pseudoathetosis) of the outstretched hands and fingers occur, particularly with eyes closed.

Cutaneous receptors are classified by the type of stimulus that optimally excites them. They consist of naked nerve endings (nociceptors, which respond to tissue-damaging stimuli, and thermoreceptors, which respond to noninjurious thermal stimuli) and encapsulated terminals (several types of mechanoreceptor, activated by physical deformation of the skin). Each type of receptor has its own set of sensitivities to specific stimuli, size and distinctness of receptive fields, and adaptational qualities. Afferent fibers in peripheral nerve trunks traverse the dorsal roots and enter the dorsal horn of the spinal cord (Fig. 31-1).
From there, the polysynaptic projections of the smaller fibers (unmyelinated and small myelinated), which subserve mainly nociception, itch, temperature sensibility, and touch, cross and ascend in the opposite anterior and lateral columns of the spinal cord, through the brainstem, to the ventral posterolateral (VPL) nucleus of the thalamus and ultimately project to the postcentral gyrus of the parietal cortex (Chap. 18). This is the spinothalamic pathway or anterolateral system. The larger fibers, which subserve tactile and position sense and kinesthesia, project rostrally in the posterior and posterolateral columns on the same side of the spinal cord and make their first synapse in the gracile or cuneate nucleus of the lower medulla. Axons of second-order neurons decussate and ascend in the medial lemniscus located medially in the medulla and in the tegmentum of the pons and midbrain and synapse in the VPL nucleus; third-order neurons project to parietal cortex as well as to other cortical areas. This large-fiber system is referred to as the posterior column–medial lemniscal pathway (lemniscal, for short). Although the fiber types and functions that make up the spinothalamic and lemniscal systems are relatively well known, many other fibers, particularly those associated with touch, pressure, and position sense, ascend in a diffusely distributed pattern both ipsilaterally and contralaterally in the anterolateral quadrants of the spinal cord. This explains why a complete lesion of the posterior columns of the spinal cord may be associated with little sensory deficit on examination.

Nerve conduction studies and nerve biopsy are important means of investigating the peripheral nervous system, but they do not evaluate the function or structure of cutaneous receptors and free nerve endings or of unmyelinated or thinly myelinated nerve fibers in the nerve trunks. Skin biopsy can be used to evaluate these structures in the dermis and epidermis.
The main components of the sensory examination are tests of primary sensation (pain, touch, vibration, joint position, and thermal sensation) (Table 31-1). The examiner must depend on patient responses, and this complicates interpretation. Further, examination may be limited in some patients. In a stuporous patient, for example, sensory examination is reduced to observing the briskness of withdrawal in response to a pinch or another noxious stimulus. Comparison of responses on the two sides of the body is essential. In an alert but uncooperative patient, it may not be possible to examine cutaneous sensation, but some idea of proprioceptive function may be gained by noting the patient’s best performance of movements requiring balance and precision.

Primary Sensation The sense of pain usually is tested with a clean pin, which is then discarded. The patient is asked to close the eyes and focus on the pricking or unpleasant quality of the stimulus, not just the pressure or touch sensation elicited. Areas of hypalgesia should be mapped by proceeding radially from the most hypalgesic site. Temperature sensation to both hot and cold is best tested with small containers filled with water of the desired temperature. An alternative way to test cold sensation is to touch a metal object, such as a tuning fork at room temperature, to the skin. For testing warm temperatures, the tuning fork or another metal object may be held under warm water of the desired temperature and then used. The appreciation of both cold and warmth should be tested because different receptors respond to each. Touch usually is tested with a wisp of cotton or a fine camel hair brush, minimizing pressure on the skin. In general, it is better to avoid testing touch on hairy skin because of the profusion of the sensory endings that surround each hair follicle. The patient is tested with the eyes closed and should indicate as soon as the stimulus is perceived, indicating its location.
Joint position testing is a measure of proprioception. With the patient’s eyes closed, joint position is tested in the distal interphalangeal joint of the great toe and fingers. The digit is held by its sides, distal to the joint being tested, and moved passively while more proximal joints are stabilized; the patient indicates the change in position or direction of movement. If errors are made, more proximal joints are tested. A test of proximal joint position sense, primarily at the shoulder, is performed by asking the patient to bring the two index fingers together with arms extended and eyes closed. Normal individuals can do this accurately, with errors of 1 cm or less.

The sense of vibration is tested with an oscillating tuning fork that vibrates at 128 Hz. Vibration is tested over bony points, beginning distally; in the feet, it is tested over the dorsal surface of the distal phalanx of the big toes and at the malleoli of the ankles, and in the hands, it is tested dorsally at the distal phalanx of the fingers. If abnormalities are found, more proximal sites should be examined. Vibratory thresholds at the same site in the patient and the examiner may be compared for control purposes.

TABLE 31-1 Testing Primary Sensation
Sense | Test device | Endings activated | Central pathway
Pain | Pinprick | Cutaneous nociceptors | SpTh, also D
Temperature, heat | Warm metal object | Cutaneous thermoreceptors for hot | SpTh
Temperature, cold | Cold metal object | Cutaneous thermoreceptors for cold | SpTh
Touch | Cotton wisp, fine brush | Cutaneous mechanoreceptors, also naked endings | Lem, also D and SpTh
Vibration | Tuning fork, 128 Hz | Mechanoreceptors, especially pacinian corpuscles | Lem, also D
Joint position | Passive movement of specific joints | Joint capsule and tendon endings, muscle spindles | Lem, also D
Abbreviations: D, diffuse ascending projections in ipsilateral and contralateral anterolateral columns; Lem, posterior column and lemniscal projection, ipsilateral; SpTh, spinothalamic projection, contralateral.

Quantitative Sensory Testing Effective sensory testing devices are commercially available. Quantitative sensory testing is particularly useful for serial evaluation of cutaneous sensation in clinical trials. Threshold testing for touch and vibratory and thermal sensation is the most widely used application.

FIGURE 31-1 The main somatosensory pathways. The spinothalamic tract (pain, thermal sense) and the posterior column–lemniscal system (touch, pressure, joint position) are shown. Offshoots from the ascending anterolateral fasciculus (spinothalamic tract) to nuclei in the medulla, pons, and mesencephalon and nuclear terminations of the tract are indicated. (From AH Ropper, MA Samuels: Adams and Victor’s Principles of Neurology, 9th ed. New York, McGraw-Hill, 2009.)

In patients with sensory complaints, testing should begin in the center of the affected region and proceed radially until sensation is perceived as normal. The distribution of any abnormality is defined and compared to root and peripheral nerve territories (Figs. 31-2 and 31-3). Some patients present with sensory symptoms that do not fit an anatomic localization and are accompanied by either no abnormalities or gross inconsistencies on examination. The examiner should consider whether the sensory symptoms are a disguised request for help with psychologic or situational problems. Sensory examination of a patient who has no neurologic complaints can be brief and consist of pinprick, touch, and vibration testing in the hands and feet plus evaluation of stance and gait, including the Romberg maneuver (Chap. 438). Evaluation of stance and gait also tests the integrity of motor and cerebellar systems.

Cortical Sensation The most commonly used tests of cortical function are two-point discrimination, touch localization, and bilateral simultaneous stimulation and tests for graphesthesia and stereognosis. Abnormalities of these sensory tests, in the presence of normal primary sensation in an alert cooperative patient, signify a lesion of the parietal cortex or thalamocortical projections. If primary sensation is altered, these cortical discriminative functions usually will be abnormal also. Comparisons should always be made between analogous sites on the two sides of the body because the deficit with a specific parietal lesion is likely to be unilateral.

Two-point discrimination is tested with special calipers, the points of which may be set from 2 mm to several centimeters apart and then applied simultaneously to the test site. On the fingertips, a normal individual can distinguish about a 3-mm separation of points.

Touch localization is performed by light pressure for an instant with the examiner’s fingertip or a wisp of cotton wool; the patient, whose eyes are closed, is required to identify the site of touch. Bilateral simultaneous stimulation at analogous sites (e.g., the dorsum of both hands) can be carried out to determine whether the perception of touch is extinguished consistently on one side (extinction or neglect). Graphesthesia refers to the capacity to recognize, with eyes closed, letters or numbers drawn by the examiner’s fingertip on the palm of the hand. Once again, interside comparison is of prime importance. Inability to recognize numbers or letters is termed agraphesthesia. Stereognosis refers to the ability to identify common objects by palpation, recognizing their shape, texture, and size. Common standard objects such as keys, paper clips, and coins are best used. Patients with normal stereognosis should be able to distinguish a dime from a penny and a nickel from a quarter without looking. Patients should feel the object with only one hand at a time.
If they are unable to identify it in one hand, it should be placed in the other for comparison. Individuals who are unable to identify common objects and coins in one hand but can do so in the other are said to have astereognosis of the abnormal hand.

Sensory symptoms and signs can result from lesions at many different levels of the nervous system from the parietal cortex to the peripheral sensory receptor. Noting their distribution and nature is the most important way to localize their source. Their extent, configuration, symmetry, quality, and severity are the key observations. Dysesthesias without sensory findings by examination may be difficult to interpret. To illustrate, tingling dysesthesias in an acral distribution (hands and feet) can be systemic in origin, e.g., secondary to hyperventilation, or induced by a medication such as acetazolamide.

FIGURE 31-2 The cutaneous fields of peripheral nerves. (Reproduced by permission from W Haymaker, B Woodhall: Peripheral Nerve Injuries, 2nd ed. Philadelphia, Saunders, 1953.)

FIGURE 31-3 Distribution of the sensory spinal roots on the surface of the body (dermatomes). (From D Sinclair: Mechanisms of Cutaneous Sensation. Oxford, UK, Oxford University Press, 1981; with permission from Dr. David Sinclair.)

Distal dysesthesias can also be an early event in an evolving polyneuropathy or may herald a myelopathy, such as from vitamin B12 deficiency. Sometimes distal dysesthesias have no definable basis. In contrast, dysesthesias that correspond in distribution to that of a particular peripheral nerve structure denote a lesion at that site. For instance, dysesthesias restricted to the fifth digit and the adjacent one-half of the fourth finger on one hand reliably point to disorder of the ulnar nerve, most commonly at the elbow.

Nerve and Root In focal nerve trunk lesions, sensory abnormalities are readily mapped and generally have discrete boundaries (Figs. 31-2 and 31-3). Root (“radicular”) lesions frequently are accompanied by deep, aching pain along the course of the related nerve trunk. With compression of a fifth lumbar (L5) or first sacral (S1) root, as from a ruptured intervertebral disk, sciatica (radicular pain relating to the sciatic nerve trunk) is a common manifestation (Chap. 22). With a lesion affecting a single root, sensory deficits may be minimal or absent because adjacent root territories overlap extensively.
Isolated mononeuropathies may cause symptoms beyond the territory supplied by the affected nerve, but abnormalities on examination typically are confined to appropriate anatomic boundaries. In multiple mononeuropathies, symptoms and signs occur in discrete territories supplied by different individual nerves and, as more nerves are affected, may simulate a polyneuropathy if deficits become confluent.

With polyneuropathies, sensory deficits are generally graded, distal, and symmetric in distribution (Chap. 459). Dysesthesias, followed by numbness, begin in the toes and ascend symmetrically. When dysesthesias reach the knees, they usually also have appeared in the fingertips. The process is nerve length–dependent, and the deficit is often described as “stocking-glove” in type. Involvement of both hands and feet also occurs with lesions of the upper cervical cord or the brainstem, but an upper level of the sensory disturbance may then be found on the trunk and other evidence of a central lesion may be present, such as sphincter involvement or signs of an upper motor neuron lesion (Chap. 30). Although most polyneuropathies are pansensory and affect all modalities of sensation, selective sensory dysfunction according to nerve fiber size may occur. Small-fiber polyneuropathies are characterized by burning, painful dysesthesias with reduced pinprick and thermal sensation but with sparing of proprioception, motor function, and deep tendon reflexes. Touch is involved variably; when it is spared, the sensory pattern is referred to as exhibiting sensory dissociation. Sensory dissociation may occur also with spinal cord lesions as well as small-fiber neuropathies. Large-fiber polyneuropathies are characterized by vibration and position sense deficits, imbalance, absent tendon reflexes, and variable motor dysfunction but preservation of most cutaneous sensation. Dysesthesias, if present at all, tend to be tingling or bandlike in quality.
Sensory neuronopathy (or ganglionopathy) is characterized by widespread but asymmetric sensory loss occurring in a non-length-dependent manner so that it may occur proximally or distally and in the arms, legs, or both. Pain and numbness progress to sensory ataxia and impairment of all sensory modalities with time. This condition is usually paraneoplastic or idiopathic in origin (Chaps. 122 and 459) or related to an autoimmune disease, particularly Sjögren’s syndrome. Spinal Cord (See also Chap. 456) If the spinal cord is transected, all sensation is lost below the level of transection. Bladder and bowel function also are lost, as is motor function. Lateral hemisection of the spinal cord produces the Brown-Séquard syndrome, with absent pain and temperature sensation contralaterally and loss of proprioceptive sensation and power ipsilaterally below the lesion (see Figs. 31-1 and 456-1). Numbness or paresthesias in both feet may arise from a spinal cord lesion; this is especially likely when the upper level of the sensory loss extends to the trunk. When all extremities are affected, the lesion is probably in the cervical region or brainstem unless a peripheral neuropathy is responsible. The presence of upper motor neuron signs (Chap. 30) supports a central lesion; a hyperesthetic band on the trunk may suggest the level of involvement. A dissociated sensory loss can reflect spinothalamic tract involvement in the spinal cord, especially if the deficit is unilateral and has an upper level on the torso. Bilateral spinothalamic tract involvement occurs with lesions affecting the center of the spinal cord, such as in syringomyelia. There is a dissociated sensory loss with impairment of pinprick and temperature appreciation but relative preservation of light touch, position sense, and vibration appreciation. 
Dysfunction of the posterior columns in the spinal cord or of the posterior root entry zone may lead to a bandlike sensation around the trunk or a feeling of tight pressure in one or more limbs. Flexion of the neck sometimes leads to an electric shock–like sensation that radiates down the back and into the legs (Lhermitte’s sign) in patients with a cervical lesion affecting the posterior columns, such as from multiple sclerosis, cervical spondylosis, or recent irradiation to the cervical region. Brainstem Crossed patterns of sensory disturbance, in which one side of the face and the opposite side of the body are affected, localize to the lateral medulla. Here a small lesion may damage both the ipsilateral descending trigeminal tract and the ascending spinothalamic fibers subserving the opposite arm, leg, and hemitorso (see “Lateral medullary syndrome” in Fig. 446-10). A lesion in the tegmentum of the pons and midbrain, where the lemniscal and spinothalamic tracts merge, causes pansensory loss contralaterally. Thalamus Hemisensory disturbance with tingling numbness from head to foot is often thalamic in origin but also can arise from the anterior parietal region. If abrupt in onset, the lesion is likely to be due to a small stroke (lacunar infarction), particularly if localized to the thalamus. Occasionally, with lesions affecting the VPL nucleus or adjacent white matter, a syndrome of thalamic pain, also called Déjerine-Roussy syndrome, may ensue. The persistent, unrelenting unilateral pain often is described in dramatic terms. Cortex With lesions of the parietal lobe involving either the cortex or the subjacent white matter, the most prominent symptoms are contralateral hemineglect, hemi-inattention, and a tendency not to use the affected hand and arm. On cortical sensory testing (e.g., two-point discrimination, graphesthesia), abnormalities are often found but primary sensation is usually intact. 
Anterior parietal infarction may present as a pseudothalamic syndrome with contralateral loss of primary sensation from head to toe. Dysesthesias or a sense of numbness and, rarely, a painful state may also occur. Focal Sensory Seizures These seizures generally are due to lesions in the area of the postcentral or precentral gyrus. The principal symptom of focal sensory seizures is tingling, but additional, more complex sensations may occur, such as a rushing feeling, a sense of warmth, or a sense of movement without detectable motion. Symptoms typically are unilateral; commonly begin in the arm or hand, face, or foot; and often spread in a manner that reflects the cortical representation of different bodily parts, as in a Jacksonian march. Their duration is variable; seizures may be transient, lasting only for seconds, or persist for an hour or more. Focal motor features may supervene, often becoming generalized with loss of consciousness and tonic-clonic jerking. Arthur Asbury authored or co-authored this chapter in earlier editions of this book. PART 2 Cardinal Manifestations and Presentation of Diseases PREVALENCE, MORBIDITY, AND MORTALITY Gait and balance problems are common in the elderly and contribute to the risk of falls and injury. Gait disorders have been described in 15% of individuals older than 65. By age 80 one person in four will use a mechanical aid to assist with ambulation. Among those 85 and older, the prevalence of gait abnormality approaches 40%. In epidemiologic studies, gait disorders are consistently identified as a major risk factor for falls and injury. A substantial number of older persons report insecure balance and experience falls and fear of falling. Prospective studies indicate that 30% of those older than 65 fall each year. The proportion is even higher in frail elderly and nursing home patients. Each year, 8% of individuals older than 75 suffer a serious fall-related injury. 
Hip fractures result in hospitalization, can lead to nursing home admission, and are associated with an increased mortality risk in the subsequent year. For each person who is physically disabled, there are others whose functional independence is limited by anxiety and fear of falling. Nearly one in five elderly individuals voluntarily restricts his or her activity because of fear of falling. With loss of ambulation, the quality of life diminishes, and rates of morbidity and mortality increase. An upright bipedal gait depends on the successful integration of postural control and locomotion. These functions are widely distributed in the central nervous system. The biomechanics of bipedal walking are complex, and the performance is easily compromised by a neurologic deficit at any level. Command and control centers in the brainstem, cerebellum, and forebrain modify the action of spinal pattern generators to promote stepping. While a form of “fictive locomotion” can be elicited from quadrupedal animals after spinal transection, this capacity is limited in primates. Step generation in primates is dependent on locomotor centers in the pontine tegmentum, midbrain, and subthalamic region. Locomotor synergies are executed through the reticular formation and descending pathways in the ventromedial spinal cord. Cerebral control provides a goal and purpose for walking and is involved in avoidance of obstacles and adaptation of locomotor programs to context and terrain. Postural control requires the maintenance of the center of mass over the base of support through the gait cycle. Unconscious postural adjustments maintain standing balance: long-latency responses are measurable in the leg muscles, beginning 110 milliseconds after a perturbation. Forward motion of the center of mass provides propulsive force for stepping, but failure to maintain the center of mass within stability limits results in falls. 
The anatomic substrate for dynamic balance has not been well defined, but the vestibular nucleus and midline cerebellum contribute to balance control in animals. Patients with damage to these structures have impaired balance while standing and walking. Standing balance depends on good-quality sensory information about the position of the body center with respect to the environment, support surface, and gravitational forces. Sensory information for postural control is primarily generated by the visual system, the vestibular system, and proprioceptive receptors in the muscle spindles and joints. A healthy redundancy of sensory afferent information is generally available, but loss of two of the three pathways is sufficient to compromise standing balance. Balance disorders in older individuals sometimes result from multiple insults in the peripheral sensory systems (e.g., visual loss, vestibular deficit, peripheral neuropathy) that critically degrade the quality of afferent information needed for balance stability. Older patients with cognitive impairment from neurodegenerative diseases appear to be particularly prone to falls and injury. There is a growing body of literature on the use of attentional resources to manage gait and balance. Walking is generally considered to be unconscious and automatic, but the ability to walk while attending to a cognitive task (dual-task walking) may be compromised in frail elderly individuals with a history of falls. Older patients with deficits in executive function may have particular difficulty in managing the attentional resources needed for dynamic balance when distracted. Disorders of gait may be attributed to frailty, fatigue, arthritis, and orthopedic deformity, but neurologic causes are disabling and important to address. The heterogeneity of gait disorders observed in clinical practice reflects the large network of neural systems involved in the task. Walking is vulnerable to neurologic disease at every level. 
Gait disorders have been classified descriptively on the basis of abnormal physiology and biomechanics. One problem with this approach is that many failing gaits look fundamentally similar. This overlap reflects common patterns of adaptation to threatened balance stability and declining performance. The gait disorder observed clinically must be viewed as the product of a neurologic deficit and a functional adaptation. Unique features of the failing gait are often overwhelmed by the adaptive response. Some common patterns of abnormal gait are summarized next. Gait disorders can also be classified by etiology (Table 32-1). TABLE 32-1 (column headings: Etiology, No. of Cases, Percent. Source: Reproduced with permission from J Masdeu, L Sudarsky, L Wolfson: Gait Disorders of Aging. Lippincott Raven, 1997.) The term cautious gait is used to describe the patient who walks with an abbreviated stride and lowered center of mass, as if walking on a slippery surface. This disorder is both common and nonspecific. It is, in essence, an adaptation to a perceived postural threat. There may be an associated fear of falling. This disorder can be observed in more than one-third of older patients with gait impairment. Physical therapy often improves walking to the degree that follow-up observation may reveal a more specific underlying disorder. Spastic gait is characterized by stiffness in the legs, an imbalance of muscle tone, and a tendency to circumduct and scuff the feet. The disorder reflects compromise of corticospinal command and overactivity of spinal reflexes. The patient may walk on the toes. In extreme instances, the legs cross due to increased tone in the adductors. Upper motor neuron signs are present on physical examination. Shoes often reflect an uneven pattern of wear across the outside. The disorder may be cerebral or spinal in origin. Myelopathy from cervical spondylosis is a common cause of spastic or spastic-ataxic gait in the elderly. 
Demyelinating disease and trauma are the leading causes of myelopathy in younger patients. In chronic progressive myelopathy of unknown cause, a workup with laboratory and imaging tests may establish a diagnosis. A positive family history should raise suspicion of hereditary spastic paraplegia (Chap. 452); genetic testing is now available for some of the common mutations responsible for this disorder. Tropical spastic paraparesis related to the retrovirus human T-cell lymphotropic virus 1 (HTLV-1) is endemic in parts of the Caribbean and South America. A structural lesion, such as a tumor or a spinal vascular malformation, should be excluded with appropriate testing. Spinal cord disorders are discussed in detail in Chap. 456. With cerebral spasticity, asymmetry is common, the upper extremities are usually involved, and dysarthria is often an associated feature. Common causes include vascular disease (stroke), multiple sclerosis, and perinatal injury to the nervous system (cerebral palsy). Other stiff-legged gaits include dystonia (Chap. 449) and stiff-person syndrome (Chap. 122). Dystonia is a disorder characterized by sustained muscle contractions resulting in repetitive twisting movements and abnormal posture. It often has a genetic basis. Dystonic spasms can produce plantar flexion and inversion of the feet, sometimes with torsion of the trunk. In autoimmune stiff-person syndrome, exaggerated lordosis of the lumbar spine and overactivation of antagonist muscles restrict trunk and lower-limb movement and result in a wooden or fixed posture. Parkinson’s disease (Chap. 449) is common, affecting 1% of the population >55 years of age. The stooped posture and shuffling gait are characteristic and distinctive features. Patients sometimes accelerate (festinate) with walking, display retropulsion, or exhibit a tendency to turn en bloc. 
A National Institutes of Health workshop defined freezing of gait as “brief, episodic absence of forward progression of the feet, despite the intention to walk.” Gait freezing occurs in 26% of Parkinson’s patients by the end of 5 years and develops in most such patients eventually. Postural instability and falling occur as the disease progresses; some falls are precipitated by freezing of gait. Freezing of gait is even more common in some Parkinson’s-related neurodegenerative disorders, such as progressive supranuclear palsy, multiple-system atrophy, and corticobasal degeneration. Patients with these disorders frequently present with axial stiffness, postural instability, and a shuffling, freezing gait while lacking the characteristic pill-rolling tremor of Parkinson’s disease. Falls within the first year suggest the possibility of progressive supranuclear palsy. Hyperkinetic movement disorders also produce characteristic and recognizable disturbances in gait. In Huntington’s disease (Chap. 449), the unpredictable occurrence of choreic movements gives the gait a dancing quality. Tardive dyskinesia is the cause of many odd, stereotypic gait disorders seen in patients chronically exposed to antipsychotics and other drugs that block the D2 dopamine receptor. Frontal gait disorder, sometimes known as gait apraxia, is common in the elderly and has a variety of causes. The term is used to describe a shuffling, freezing gait with imbalance and other signs of higher cerebral dysfunction. Typical features include a wide base of support, a short stride, shuffling along the floor, and difficulty with starts and turns. Many patients exhibit a difficulty with gait initiation that is descriptively characterized as the “slipping clutch” syndrome or gait ignition failure. The term lower-body parkinsonism is also used to describe such patients. 
Strength is generally preserved, and patients are able to make stepping movements when they do not have to stand and maintain their balance at the same time. This disorder is best considered a higher-level motor control disorder, as opposed to an apraxia (Chap. 36). The most common cause of frontal gait disorder is vascular disease, particularly subcortical small-vessel disease. Lesions are frequently found in the deep frontal white matter and centrum ovale. Gait disorder may be the salient feature in hypertensive patients with ischemic lesions of the deep-hemisphere white matter (Binswanger’s disease). The clinical syndrome includes mental changes (variable in degree), dysarthria, pseudobulbar affect (emotional disinhibition), increased tone, and hyperreflexia in the lower limbs. Communicating hydrocephalus in adults also presents with a gait disorder of this type. Other features of the diagnostic triad (mental changes, incontinence) may be absent in the initial stages. MRI demonstrates ventricular enlargement, an enlarged flow void about the aqueduct, and a variable degree of periventricular white-matter change. A lumbar puncture or dynamic test is necessary to confirm hydrocephalus. Disorders of the cerebellum have a dramatic impact on gait and balance. Cerebellar gait ataxia is characterized by a wide base of support, lateral instability of the trunk, erratic foot placement, and decompensation of balance when attempting to walk on a narrow base. Difficulty maintaining balance when turning is often an early feature. Patients are unable to walk tandem heel to toe and display truncal sway in narrow-based or tandem stance. They show considerable variation in their tendency to fall in daily life. Causes of cerebellar ataxia in older patients include stroke, trauma, tumor, and neurodegenerative disease such as multiple-system atrophy (Chaps. 449 and 454) and various forms of hereditary cerebellar degeneration (Chap. 450). 
A short expansion at the site of the fragile X mutation (fragile X pre-mutation) has been associated with gait ataxia in older men. Alcoholic cerebellar degeneration can be screened by history and often confirmed by MRI. In patients with ataxia, MRI demonstrates the extent and topography of cerebellar atrophy. As reviewed earlier in this chapter, balance depends on high-quality afferent information from the visual and the vestibular systems and proprioception. When this information is lost or degraded, balance during locomotion is impaired and instability results. The sensory ataxia of tabetic neurosyphilis is a classic example. The contemporary equivalent is the patient with neuropathy affecting large fibers. Vitamin B12 deficiency is a treatable cause of large-fiber sensory loss in the spinal cord and peripheral nervous system. Joint position and vibration sense are diminished in the lower limbs. The stance in such patients is destabilized by eye closure; they often look down at their feet when walking and do poorly in the dark. Table 32-2 compares sensory ataxia with cerebellar ataxia and frontal gait disorder. Some frail older patients exhibit a syndrome of imbalance from the combined effect of multiple sensory deficits. Such patients have disturbances in proprioception, vision, and vestibular sense that impair postural support. Patients with neuromuscular disease often have an abnormal gait, occasionally as a presenting feature. With distal weakness (peripheral neuropathy), the step height is increased to compensate for footdrop, and the sole of the foot may slap on the floor during weight acceptance. Neuropathy may be associated with a degree of sensory imbalance, as described earlier. 
TABLE 32-2 Features of Cerebellar Ataxia, Sensory Ataxia, and Frontal Gait Disorders
Feature | Cerebellar Ataxia | Sensory Ataxia | Frontal Gait Disorder
Base of support | Wide-based | Narrow base | Wide-based
Stride | Irregular, lurching | Regular with path deviation | Short, shuffling
Romberg test | +/− | Unsteady, falls | +/−
Turns | Unsteady | +/− | Hesitant, multistep
Patients with myopathy or muscular dystrophy typically exhibit proximal weakness. Weakness of the hip girdle may result in some degree of excess pelvic sway during locomotion. Alcohol intoxication is the most common cause of acute walking difficulty. Chronic toxicity from medications and metabolic disturbances can impair motor function and gait. Mental status changes may be found, and examination may reveal asterixis or myoclonus. Static equilibrium is disturbed, and such patients are easily thrown off balance. Disequilibrium is particularly evident in patients with chronic renal disease and those with hepatic failure, in whom asterixis may impair postural support. Sedative drugs, especially neuroleptics and long-acting benzodiazepines, affect postural control and increase the risk for falls. These disorders are especially important to recognize because they are often treatable. Psychogenic disorders are common in neurologic practice, and the presentation often involves gait. Some patients with extreme anxiety or phobia walk with exaggerated caution with abduction of the arms, as if walking on ice. This inappropriately overcautious gait differs in degree from the gait of the patient who is insecure and making adjustments for imbalance. Depressed patients exhibit primarily slowness, a manifestation of psychomotor retardation, and lack of purpose in their stride. Hysterical gait disorders are among the most spectacular encountered. Odd gyrations of posture with wastage of muscular energy (astasia–abasia), extreme slow motion, and dramatic fluctuations over time may be observed in patients with somatoform disorders and conversion reactions. 
APPROACH TO THE PATIENT: Slowly Progressive Disorder of Gait When reviewing the history, it is helpful to inquire about the onset and progression of disability. Initial awareness of an unsteady gait often follows a fall. Stepwise evolution or sudden progression suggests vascular disease. Gait disorder may be associated with urinary urgency and incontinence, particularly in patients with cervical spine disease or hydrocephalus. It is always important to review the use of alcohol and medications that affect gait and balance. Information on localization derived from the neurologic examination can be helpful in narrowing the list of possible diagnoses. Gait observation provides an immediate sense of the patient’s degree of disability. Arthritic and antalgic gaits are recognized by observation, though neurologic and orthopedic problems may coexist. Characteristic patterns of abnormality are sometimes seen, though, as stated previously, failing gaits often look fundamentally similar. Cadence (steps per minute), velocity, and stride length can be recorded by timing a patient over a fixed distance. Watching the patient rise from a chair provides a good functional assessment of balance. Brain imaging studies may be informative in patients with an undiagnosed disorder of gait. MRI is sensitive for cerebral lesions of vascular or demyelinating disease and is a good screening test for occult hydrocephalus. Patients with recurrent falls are at risk for subdural hematoma. As mentioned earlier, many elderly patients with gait and balance difficulty have white matter abnormalities in the periventricular region and centrum semiovale. While these lesions may be an incidental finding, a substantial burden of white matter disease will ultimately impact cerebral control of locomotion. 
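The quantitative gait measures mentioned above reduce to simple arithmetic over a timed walk of fixed distance. The following sketch illustrates the calculation; the function name, the units, and the convention that one stride equals two steps are illustrative assumptions, not part of the text.

```python
# Illustrative sketch (not from the text): computing cadence (steps/min),
# velocity, and stride length from a timed walk over a fixed distance.
def gait_metrics(distance_m: float, time_s: float, step_count: int):
    """Return (cadence in steps/min, velocity in m/s, stride length in m)."""
    cadence = step_count * 60.0 / time_s           # steps per minute
    velocity = distance_m / time_s                 # meters per second
    stride_length = distance_m / (step_count / 2)  # one stride = two steps (assumption)
    return cadence, velocity, stride_length

# Example: a patient covers 10 m in 12.5 s, taking 20 steps.
cadence, velocity, stride = gait_metrics(10.0, 12.5, 20)
print(cadence, velocity, stride)  # 96.0 steps/min, 0.8 m/s, 1.0 m per stride
```

For reference, self-selected walking velocity in healthy older adults is typically above 1 m/s, so values well below that, or a markedly shortened stride, support the bedside impression of a failing gait.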
DEFINITION, ETIOLOGY, AND MANIFESTATIONS Balance is the ability to maintain equilibrium—a state in which opposing physical forces cancel one another out. In physiology, this term is taken to mean the ability to control the center of mass with respect to gravity and the support surface. In reality, people are not consciously aware of their center of mass, but everyone (particularly gymnasts, figure skaters, and platform divers) moves so as to manage it. Disorders of balance present as difficulty maintaining posture while standing and walking and as a subjective sense of disequilibrium, which is a form of dizziness. The cerebellum and vestibular system organize antigravity responses needed to maintain an upright posture. These responses are physiologically complex, and the anatomic representation they entail is not well understood. Failure, resulting in disequilibrium, can occur at several levels: cerebellar, vestibular, somatosensory, and higher-level. Patients with cerebellar ataxia do not generally complain of dizziness, though balance is visibly impaired. Neurologic examination reveals a variety of cerebellar signs. Postural compensation may prevent falls early on, but falls are inevitable with disease progression. The progression of neurodegenerative ataxia is often measured by the number of years to loss of stable ambulation. Vestibular disorders (Chap. 28) have symptoms and signs that fall into three categories: (1) vertigo (the subjective inappropriate perception or illusion of movement); (2) nystagmus (involuntary eye movements); and (3) impaired standing balance. Not every patient has all manifestations. Patients with vestibular deficits related to ototoxic drugs may lack vertigo or obvious nystagmus, but their balance is impaired on standing and walking, and they cannot navigate in the dark. Laboratory testing is available to investigate vestibular deficits. Somatosensory deficits also produce imbalance and falls. 
There is often a subjective sense of insecure balance and fear of falling. Postural control is compromised by eye closure (Romberg’s sign); these patients also have difficulty navigating in the dark. A dramatic example is provided by the patient with autoimmune subacute sensory neuropathy, which is sometimes a paraneoplastic disorder (Chap. 122). Compensatory strategies enable such patients to walk in the virtual absence of proprioception, but the task requires active visual monitoring. Patients with higher-level disorders of equilibrium have difficulty maintaining balance in daily life and may present with falls. Their awareness of balance impairment may be reduced. Patients taking sedating medications are in this category. In prospective studies, dementia and sedating medications substantially increase the risk for falls. Falls are common in the elderly; 30% of people older than 65 who are living in the community fall each year. Modest changes in balance function have been described in fit older individuals as a result of normal aging. Subtle deficits in sensory systems, attention, and motor reaction time contribute to the risk, and environmental hazards abound. Many falls by older adults are episodes of tripping or slipping, often designated mechanical falls. A fall is not a neurologic problem per se, but there are events for which neurologic evaluation is appropriate. It is important to distinguish falls associated with loss of consciousness (syncope, seizure), which require appropriate evaluation and intervention (Chaps. 27 and 445). In most prospective studies, a small subset of individuals experience a large number of fall events. These individuals with recurrent falls often have gait and balance issues that need to be addressed. Fall Patterns: The Event Description The history of a fall is often problematic or incomplete, and the underlying mechanism or cause may be difficult to establish in retrospect. 
The patient and family may have limited information about what triggered the fall. Injuries can complicate the physical examination. While there is no standard nosology of falls, some common clinical patterns may emerge and provide a clue. Drop Attacks and Collapsing Falls Drop attacks are sudden collapsing falls without loss of consciousness. Patients who collapse from lack of postural tone present a diagnostic challenge. Patients may report that their legs just “gave out” underneath them; their families may describe these patients as “collapsing in a heap.” Orthostatic hypotension may be a factor in some such falls, and this possibility should be thoroughly evaluated. Rarely, a colloid cyst of the third ventricle can present with intermittent obstruction of the foramen of Monro, with a consequent drop attack. While collapsing falls are more common among older patients with vascular risk factors, they should not be confused with vertebrobasilar ischemic attacks. Toppling Falls Some patients maintain tone in antigravity muscles but fall over like a tree trunk, as if postural defenses had disengaged. There may be a consistent direction to such falls. The patient with cerebellar pathology may lean and topple over toward the side of the lesion. Patients with lesions of the vestibular system or its central pathways may experience lateral pulsion and toppling falls. Patients with progressive supranuclear palsy often fall over backward. Falls of this nature occur in patients with advanced Parkinson’s disease once postural instability has developed. Falls Due to Gait Freezing Another fall pattern in Parkinson’s disease and related disorders is the fall due to freezing of gait. The feet stick to the floor and the center of mass keeps moving, resulting in a disequilibrium from which the patient has difficulty recovering. This sequence of events can result in a forward fall. Gait freezing can also occur as the patient attempts to turn and change direction. 
Similarly, patients with Parkinson’s disease and festinating gait may find their feet unable to keep up and may thus fall forward. Falls Related to Sensory Loss Patients with somatosensory, visual, or vestibular deficits are prone to falls. These patients have particular difficulty dealing with poor illumination or walking on uneven ground. They often report subjective imbalance, apprehension, and fear of falling. Deficits in joint position and vibration sense are apparent on physical examination. These patients may be especially responsive to a rehabilitation-based intervention. Weakness and Frailty Patients who lack strength in antigravity muscles have difficulty rising from a chair, tire easily when walking, and have difficulty maintaining their balance after a perturbation. These patients are often unable to get up after a fall and may have to remain on the floor for a prolonged period until help arrives. Deconditioning of this sort is often treatable. Resistance strength training can increase muscle mass and leg strength, even for people in their eighties and nineties. The most productive approach is to identify the high-risk patient prospectively, before there is a serious injury. Patients at particular risk include hospitalized patients with mental status changes, nursing home residents, patients with dementia, and those taking medications that compromise attention and alertness. Patients with Parkinson’s disease and other gait disorders are also at increased risk. Table 32-3 summarizes a meta-analysis of prospective studies establishing the principal risk factors for falls. It is often possible to address and mitigate some of the major risk factors. Medication overuse may be the most important remediable risk factor for falls. (Table 32-3 abbreviations: OR, odds ratio from retrospective studies; RR, relative risk from prospective studies. Source: Reproduced with permission from J Masdeu, L Sudarsky, L Wolfson: Gait Disorders of Aging. Lippincott Raven, 1997.) 
Efforts should be made to define the etiology of the gait disorder and the mechanism underlying the falls by a given patient. Orthostatic changes in blood pressure and pulse should be recorded. Rising from a chair and walking should be evaluated for safety. Specific treatment may be possible once a diagnosis is established. Therapeutic intervention is often recommended for older patients at substantial risk for falls, even if no neurologic disease is identified. A home visit to look for environmental hazards can be helpful. A variety of modifications may be recommended to improve safety, including improved lighting and the installation of grab bars and nonslip surfaces. Rehabilitative interventions aim to improve muscle strength and balance stability and to make the patient more resistant to injury. High-intensity resistance strength training with weights and machines is useful to improve muscle mass, even in frail older patients. Improvements realized in posture and gait should translate to reduced risk of falls and injury. Sensory balance training is another approach to improving balance stability. Measurable gains can be made in a few weeks of training, and benefits can be maintained over 6 months by a 10- to 20-min home exercise program. This strategy is particularly successful in patients with vestibular and somatosensory balance disorders. A Tai Chi exercise program has been demonstrated to reduce the risk of falls and injury in patients with Parkinson’s disease. CHAPTER 33e Video Library of Gait Disorders Gail Kang, Nicholas B. Galifianakis, Michael D. Geschwind Problems with gait and balance are major causes of falls, accidents, and resulting disability, especially in later life, and are often harbingers of neurologic disease. Early diagnosis is essential, especially for treatable conditions, because it may permit the institution of prophylactic measures to prevent dangerous falls and also to reverse or ameliorate the underlying cause. 
In this video, examples of gait disorders due to Parkinson’s disease, other extrapyramidal disorders, and ataxias, as well as other common gait disorders, are presented.

Confusion and Delirium
S. Andrew Josephson, Bruce L. Miller

Confusion, a mental and behavioral state of reduced comprehension, coherence, and capacity to reason, is one of the most common problems encountered in medicine, accounting for a large number of emergency department visits, hospital admissions, and inpatient consultations. Delirium, a term used to describe an acute confusional state, remains a major cause of morbidity and mortality, costing over $150 billion yearly in health care costs in the United States alone. Despite increased efforts targeting awareness of this condition, delirium often goes unrecognized in the face of evidence that it is usually the cognitive manifestation of serious underlying medical or neurologic illness. A multitude of terms are used to describe patients with delirium, including encephalopathy, acute brain failure, acute confusional state, and postoperative or intensive care unit (ICU) psychosis. Delirium has many clinical manifestations, but is defined as a relatively acute decline in cognition that fluctuates over hours or days. The hallmark of delirium is a deficit of attention, although all cognitive domains—including memory, executive function, visuospatial tasks, and language—are variably involved. Associated symptoms that may be present in some cases include altered sleep-wake cycles, perceptual disturbances such as hallucinations or delusions, affect changes, and autonomic findings that include heart rate and blood pressure instability. Delirium is a clinical diagnosis that is made only at the bedside. Two subtypes have been described—hyperactive and hypoactive—based on differential psychomotor features.
The cognitive syndrome associated with severe alcohol withdrawal (i.e., “delirium tremens”) remains the classic example of the hyperactive subtype, featuring prominent hallucinations, agitation, and hyperarousal, often accompanied by life-threatening autonomic instability. In striking contrast is the hypoactive subtype, exemplified by benzodiazepine intoxication, in which patients are withdrawn and quiet, with prominent apathy and psychomotor slowing. This dichotomy between subtypes of delirium is a useful construct, but patients often fall somewhere along a spectrum between the hyperactive and hypoactive extremes, sometimes fluctuating from one to the other. Therefore, clinicians must recognize this broad range of presentations of delirium to identify all patients with this potentially reversible cognitive disturbance. Hyperactive patients are often easily recognized by their characteristic severe agitation, tremor, hallucinations, and autonomic instability. Patients who are quietly hypoactive are more often overlooked on the medical wards and in the ICU. The reversibility of delirium is emphasized because many etiologies, such as systemic infection and medication effects, can be treated easily. The long-term cognitive effects of delirium remain largely unknown. Some episodes of delirium continue for weeks, months, or even years. The persistence of delirium in some patients and its high recurrence rate may be due to inadequate initial treatment of the underlying etiology. In other instances, delirium appears to cause permanent neuronal damage and cognitive decline. Even if an episode of delirium completely resolves, there may be lingering effects of the disorder; a patient’s recall of events after delirium varies widely, ranging from complete amnesia to repeated re-experiencing of the frightening period of confusion, similar to what is seen in patients with posttraumatic stress disorder. 
PART 2 Cardinal Manifestations and Presentation of Diseases

An effective primary prevention strategy for delirium begins with identification of patients at high risk for this disorder, including those preparing for elective surgery or being admitted to the hospital. Although no single validated scoring system has been widely accepted as a screen for asymptomatic patients, there are multiple well-established risk factors for delirium. The two most consistently identified risks are older age and baseline cognitive dysfunction. Individuals who are over age 65 or exhibit low scores on standardized tests of cognition develop delirium upon hospitalization at a rate approaching 50%. Whether age and baseline cognitive dysfunction are truly independent risk factors is uncertain. Other predisposing factors include sensory deprivation, such as preexisting hearing and visual impairment, as well as indices for poor overall health, including baseline immobility, malnutrition, and underlying medical or neurologic illness. In-hospital risks for delirium include the use of bladder catheterization, physical restraints, sleep and sensory deprivation, and the addition of three or more new medications. Avoiding such risks remains a key component of delirium prevention as well as treatment. Surgical and anesthetic risk factors for the development of postoperative delirium include specific procedures such as those involving cardiopulmonary bypass, inadequate or excessive treatment of pain in the immediate postoperative period, and perhaps specific agents such as inhalational anesthetics. The relationship between delirium and dementia (Chap. 448) is complicated by significant overlap between the two conditions, and it is not always simple to distinguish between them. Dementia and preexisting cognitive dysfunction serve as major risk factors for delirium, and at least two-thirds of cases of delirium occur in patients with coexisting underlying dementia.
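As a rough illustration of how the predisposing and in-hospital risk factors just described might be screened from an electronic record, consider the following minimal sketch. The function name, field names, and the two-factor cutoff are our own assumptions for illustration, not a validated scoring system (the text notes that none has been widely accepted).

```python
def high_delirium_risk(age: int,
                       baseline_cognitive_dysfunction: bool,
                       sensory_impairment: bool = False,
                       immobile_or_malnourished: bool = False,
                       new_medications_added: int = 0) -> bool:
    """Flag patients who may warrant delirium-prevention measures.

    The individual factors follow the text (older age, baseline
    cognitive dysfunction, sensory deprivation, poor overall health,
    three or more new medications); the >=2 cutoff is illustrative only.
    """
    factors = [
        age >= 65,                       # older age
        baseline_cognitive_dysfunction,  # low cognition scores / dementia
        sensory_impairment,              # preexisting hearing or visual loss
        immobile_or_malnourished,        # indices of poor overall health
        new_medications_added >= 3,      # "three or more new medications"
    ]
    return sum(factors) >= 2  # assumed threshold, for illustration
```

In practice such a flag would only prompt the preventive interventions discussed later (reorientation, sleep-wake hygiene, avoiding restraints and offending drugs), not replace clinical judgment.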
A form of dementia with parkinsonism, termed dementia with Lewy bodies, is characterized by a fluctuating course, prominent visual hallucinations, parkinsonism, and an attentional deficit that clinically resembles hyperactive delirium; patients with this condition are particularly vulnerable to delirium. Delirium in the elderly often reflects an insult to a brain made vulnerable by an underlying neurodegenerative condition. Therefore, the development of delirium sometimes heralds the onset of a previously unrecognized brain disorder. Delirium is common, but its reported incidence has varied widely with the criteria used to define this disorder. Estimated rates of delirium among hospitalized patients range from 18 to 64%, with higher rates reported for elderly patients and patients undergoing hip surgery. Older patients in the ICU have especially high rates of delirium that approach 75%. The condition is not recognized in up to one-third of delirious inpatients, and the diagnosis is especially problematic in the ICU environment, where cognitive dysfunction is often difficult to appreciate in the setting of serious systemic illness and sedation. Delirium in the ICU should be viewed as an important manifestation of organ dysfunction not unlike liver, kidney, or heart failure. Outside the acute hospital setting, delirium occurs in nearly one-quarter of patients in nursing homes and in 50 to 80% of those at the end of life. These estimates emphasize the remarkably high frequency of this cognitive syndrome in older patients, a population expected to grow in the upcoming decades. Until recently, an episode of delirium was viewed as a transient condition that carried a benign prognosis. It is now recognized as a disorder with a substantial morbidity rate and increased mortality rate and often represents the first manifestation of a serious underlying illness.
Recent estimates of in-hospital mortality rates among delirious patients have ranged from 25 to 33%, a rate similar to that of patients with sepsis. Patients with an in-hospital episode of delirium have a fivefold higher mortality rate in the months after their illness compared with age-matched nondelirious hospitalized patients. Delirious hospitalized patients have a longer length of stay, are more likely to be discharged to a nursing home, and are more likely to experience subsequent episodes of delirium and cognitive decline; as a result, this condition has enormous economic implications. The pathogenesis and anatomy of delirium are incompletely understood. The attentional deficit that serves as the neuropsychological hallmark of delirium has a diffuse localization within the brainstem, thalamus, prefrontal cortex, and parietal lobes. Rarely, focal lesions such as ischemic strokes have led to delirium in otherwise healthy persons; right parietal and medial dorsal thalamic lesions have been reported most commonly, pointing to the importance of these areas to delirium pathogenesis. In most cases, delirium results from widespread disturbances in cortical and subcortical regions rather than a focal neuroanatomic cause. Electroencephalogram (EEG) data in persons with delirium usually show symmetric slowing, a nonspecific finding that supports diffuse cerebral dysfunction. Multiple neurotransmitter abnormalities, proinflammatory factors, and specific genes likely play a role in the pathogenesis of delirium. Deficiency of acetylcholine may play a key role, and medications with anticholinergic properties also can precipitate delirium. Dementia patients are susceptible to episodes of delirium, and those with Alzheimer’s pathology and dementia with Lewy bodies or Parkinson’s disease dementia are known to have a chronic cholinergic deficiency state due to degeneration of acetylcholine-producing neurons in the basal forebrain. 
Other neurotransmitters are also likely to be involved in this diffuse cerebral disorder. For example, increases in dopamine can also lead to delirium. Patients with Parkinson’s disease treated with dopaminergic medications can develop a delirium-like state that features visual hallucinations, fluctuations, and confusion. Not all individuals exposed to the same insult will develop signs of delirium. A low dose of an anticholinergic medication may have no cognitive effects on a healthy young adult but produce a florid delirium in an elderly person with known underlying dementia, although even healthy young persons develop delirium with very high doses of anticholinergic medications. This concept of delirium developing as the result of an insult in predisposed individuals is currently the most widely accepted pathogenic construct. Therefore, if a previously healthy individual with no known history of cognitive illness develops delirium in the setting of a relatively minor insult such as elective surgery or hospitalization, an unrecognized underlying neurologic illness such as a neurodegenerative disease, multiple previous strokes, or another diffuse cerebral cause should be considered. In this context, delirium can be viewed as a “stress test for the brain” whereby exposure to known inciting factors such as systemic infection and offending drugs can unmask a decreased cerebral reserve and herald a serious underlying and potentially treatable illness.

APPROACH TO THE PATIENT:
Because the diagnosis of delirium is clinical and is made at the bedside, a careful history and physical examination are necessary in evaluating patients with possible confusional states. Screening tools can aid physicians and nurses in identifying patients with delirium, including the Confusion Assessment Method (CAM) (Table 34-1); the Organic Brain Syndrome Scale; the Delirium Rating Scale; and, in the ICU, the ICU version of the CAM and the Delirium Detection Score.
Using the well-validated CAM, a diagnosis of delirium is made if there is (1) an acute onset and fluctuating course and (2) inattention, accompanied by either (3) disorganized thinking or (4) an altered level of consciousness (Table 34-1). These scales may not identify the full spectrum of patients with delirium, and all patients who are acutely confused should be presumed delirious regardless of their presentation, given the wide variety of possible clinical features. A course that fluctuates over hours or days and may worsen at night (termed sundowning) is typical but not essential for the diagnosis. Observation of the patient usually will reveal an altered level of consciousness or a deficit of attention. Other features that are sometimes present include alteration of sleep-wake cycles, thought disturbances such as hallucinations or delusions, autonomic instability, and changes in affect.

It may be difficult to elicit an accurate history in delirious patients who have altered levels of consciousness or impaired attention. Information from a collateral source such as a spouse or another family member is therefore invaluable.

Table 34-1 The Confusion Assessment Method (CAM)
The diagnosis of delirium requires the presence of features 1 and 2 and of either feature 3 or 4.
Feature 1. Acute onset and fluctuating course. This feature is satisfied by positive responses to the following questions: Is there evidence of an acute change in mental status from the patient’s baseline? Did the (abnormal) behavior fluctuate during the day, that is, tend to come and go, or did it increase and decrease in severity?
Feature 2. Inattention. This feature is satisfied by a positive response to the following question: Did the patient have difficulty focusing attention, for example, being easily distractible, or have difficulty keeping track of what was being said?
Feature 3. Disorganized thinking. This feature is satisfied by a positive response to the following question: Was the patient’s thinking disorganized or incoherent, such as rambling or irrelevant conversation, unclear or illogical flow of ideas, or unpredictable switching from subject to subject?
Feature 4. Altered level of consciousness. This feature is satisfied by any answer other than “alert” to the following question: Overall, how would you rate the patient’s level of consciousness: alert (normal), vigilant (hyperalert), lethargic (drowsy, easily aroused), stupor (difficult to arouse), or coma (unarousable)?
Note: Information is usually obtained from a reliable reporter, such as a family member, caregiver, or nurse. Source: Modified from SK Inouye et al: Clarifying confusion: The Confusion Assessment Method. A new method for detection of delirium. Ann Intern Med 113:941, 1990.

The three most important pieces of history are the patient’s baseline cognitive function, the time course of the present illness, and current medications. Premorbid cognitive function can be assessed through the collateral source or, if needed, via a review of outpatient records. Delirium by definition represents a change that is relatively acute, usually over hours to days, from a cognitive baseline. As a result, an acute confusional state is nearly impossible to diagnose without some knowledge of baseline cognitive function. Without this information, many patients with dementia or depression may be mistakenly identified as delirious during a single initial evaluation. Patients with a more hypoactive, apathetic presentation with psychomotor slowing may be identified as being different from baseline only through conversations with family members. A number of validated instruments have been shown to diagnose cognitive dysfunction accurately using a collateral source, including the modified Blessed Dementia Rating Scale and the Clinical Dementia Rating (CDR).
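The CAM rule described above is a simple boolean combination of the four features, which can be sketched as follows. This is a minimal illustration only: the feature flags would come from the bedside questions in Table 34-1, and the function name is our own.

```python
def cam_delirium(acute_onset_and_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """CAM diagnostic rule: delirium requires features 1 AND 2,
    plus either feature 3 OR feature 4."""
    return (acute_onset_and_fluctuating
            and inattention
            and (disorganized_thinking or altered_consciousness))
```

For example, an inattentive patient with acute, fluctuating confusion and rambling, incoherent speech screens positive even with a normal ("alert") level of consciousness, whereas inattention alone does not satisfy the rule.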
Baseline cognitive impairment is common in patients with delirium. Even when no such history of cognitive impairment is elicited, there should still be a high suspicion for a previously unrecognized underlying neurologic disorder. Establishing the time course of cognitive change is important not only to make a diagnosis of delirium but also to correlate the onset of the illness with potentially treatable etiologies such as recent medication changes or symptoms of systemic infection. Medications remain a common cause of delirium, especially compounds with anticholinergic or sedative properties. It is estimated that nearly one-third of all cases of delirium are secondary to medications, especially in the elderly. Medication histories should include all prescription as well as over-the-counter and herbal substances taken by the patient and any recent changes in dosing or formulation, including substitution of generics for brand-name medications. Other important elements of the history include screening for symptoms of organ failure or systemic infection, which often contributes to delirium in the elderly. A history of illicit drug use, alcoholism, or toxin exposure is common in younger delirious patients. Finally, asking the patient and collateral source about other symptoms that may accompany delirium, such as depression, may help identify potential therapeutic targets. The general physical examination in a delirious patient should include careful screening for signs of infection such as fever, tachypnea, pulmonary consolidation, heart murmur, and stiff neck. The patient’s fluid status should be assessed; both dehydration and fluid overload with resultant hypoxemia have been associated with delirium, and each is usually easily rectified. The appearance of the skin can be helpful, showing jaundice in hepatic encephalopathy, cyanosis in hypoxemia, or needle tracks in patients using intravenous drugs. 
The neurologic examination requires a careful assessment of mental status. Patients with delirium often present with a fluctuating course; therefore, the diagnosis can be missed when one relies on a single time point of evaluation. Some but not all patients exhibit the characteristic pattern of sundowning, a worsening of their condition in the evening. In these cases, assessment only during morning rounds may be falsely reassuring. An altered level of consciousness ranging from hyperarousal to lethargy to coma is present in most patients with delirium and can be assessed easily at the bedside. In a patient with a relatively normal level of consciousness, a screen for an attentional deficit is in order, because this deficit is the classic neuropsychological hallmark of delirium. Attention can be assessed while taking a history from the patient. Tangential speech, a fragmentary flow of ideas, or inability to follow complex commands often signifies an attentional problem. There are formal neuropsychological tests to assess attention, but a simple bedside test of digit span forward is quick and fairly sensitive. In this task, patients are asked to repeat successively longer random strings of digits beginning with two digits in a row, said to the patient at 1-second intervals. Healthy adults can repeat a string of five to seven digits before faltering; a digit span of four or less usually indicates an attentional deficit unless hearing or language barriers are present, and many patients with delirium have digit spans of three or fewer digits. More formal neuropsychological testing can be helpful in assessing a delirious patient, but it is usually too cumbersome and time-consuming in the inpatient setting. 
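The forward digit span procedure described above can be sketched in code. This is illustrative only: `recall_fn` stands in for the patient's spoken response, and the interpretation thresholds follow the text (healthy adults manage five to seven digits; four or fewer usually indicates an attentional deficit absent hearing or language barriers).

```python
import random

def digit_span_forward(recall_fn, start_len=2, max_len=8, seed=None):
    """Present successively longer random digit strings and return the
    longest length recalled correctly before the first failure."""
    rng = random.Random(seed)
    best = 0
    for length in range(start_len, max_len + 1):
        digits = [rng.randrange(10) for _ in range(length)]
        if recall_fn(digits) != digits:  # first failure ends the test
            break
        best = length
    return best

def suggests_attentional_deficit(span: int) -> bool:
    """Per the text: a span of four or less usually indicates an
    attentional deficit (absent hearing or language barriers)."""
    return span <= 4
```

A subject who reliably recalls up to five digits but falters at six would score a span of 5, within the healthy adult range; many delirious patients score three or fewer.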
A Mini-Mental State Examination (MMSE) provides information regarding orientation, language, and visuospatial skills; however, performance of many tasks on the MMSE, including the spelling of “world” backward and serial subtraction of digits, will be impaired by delirious patients’ attentional deficits, rendering the test unreliable. The remainder of the screening neurologic examination should focus on identifying new focal neurologic deficits. Focal strokes or mass lesions in isolation are rarely the cause of delirium, but patients with underlying extensive cerebrovascular disease or neurodegenerative conditions may not be able to cognitively tolerate even relatively small new insults. Patients should be screened for other signs of neurodegenerative conditions such as parkinsonism, which is seen not only in idiopathic Parkinson’s disease but also in other dementing conditions such as Alzheimer’s disease, dementia with Lewy bodies, and progressive supranuclear palsy. The presence of multifocal myoclonus or asterixis on the motor examination is nonspecific but usually indicates a metabolic or toxic etiology of the delirium. Some etiologies can be easily discerned through a careful history and physical examination, whereas others require confirmation with laboratory studies, imaging, or other ancillary tests. A large, diverse group of insults can lead to delirium, and the cause in many patients is often multifactorial. Common etiologies are listed in Table 34-2. Prescribed, over-the-counter, and herbal medications all can precipitate delirium. Drugs with anticholinergic properties, narcotics, and benzodiazepines are particularly common offenders, but nearly any compound can lead to cognitive dysfunction in a predisposed patient. 
Table 34-2 Common Etiologies of Delirium
- Prescription medications: especially those with anticholinergic properties, narcotics, and benzodiazepines
- Drugs of abuse: alcohol intoxication and alcohol withdrawal, opiates, ecstasy, LSD, GHB, PCP, ketamine, cocaine, “bath salts,” marijuana and its synthetic forms
- Poisons: inhalants, carbon monoxide, ethylene glycol, pesticides
- Metabolic conditions
  - Electrolyte disturbances: hypoglycemia, hyperglycemia, hyponatremia, hypernatremia, hypercalcemia, hypocalcemia, hypomagnesemia
  - Hypothermia and hyperthermia
  - Pulmonary failure: hypoxemia and hypercarbia
  - Liver failure/hepatic encephalopathy
  - Renal failure/uremia
  - Cardiac failure
  - Vitamin deficiencies: B12, thiamine, folate, niacin
  - Dehydration and malnutrition
  - Anemia
- Systemic infections: urinary tract infections, pneumonia, skin and soft tissue infections, sepsis
- CNS infections: meningitis, encephalitis, brain abscess
- Endocrine conditions
  - Hyperthyroidism, hypothyroidism
  - Hyperparathyroidism
  - Adrenal insufficiency
- Cerebrovascular disorders
  - Global hypoperfusion states
  - Hypertensive encephalopathy
  - Focal ischemic strokes and hemorrhages (rare): especially nondominant
- Seizure-related disorders
  - Nonconvulsive status epilepticus
  - Intermittent seizures with prolonged postictal states
- Neoplastic disorders
  - Diffuse metastases to the brain
  - Gliomatosis cerebri
  - Carcinomatous meningitis
  - CNS lymphoma
- Hospitalization
- Terminal end-of-life delirium
Abbreviations: CNS, central nervous system; GHB, γ-hydroxybutyrate; LSD, lysergic acid diethylamide; PCP, phencyclidine.

Whereas an elderly patient with baseline dementia may become delirious upon exposure to a relatively low dose of a medication, less susceptible individuals may become delirious only with very high doses of the same medication. This observation emphasizes the importance of correlating the timing of recent medication changes, including dose and formulation, with the onset of cognitive dysfunction.
In younger patients, illicit drugs and toxins are common causes of delirium. In addition to more classic drugs of abuse, the recent rise in availability of methylenedioxymethamphetamine (MDMA, ecstasy), γ-hydroxybutyrate (GHB), “bath salts,” synthetic cannabis, and the phencyclidine (PCP)-like agent ketamine has led to an increase in delirious young persons presenting to acute care settings (Chap. 469e). Many common prescription drugs such as oral narcotics and benzodiazepines are often abused and readily available on the street. Alcohol abuse leading to high serum levels causes confusion, but more commonly, it is withdrawal from alcohol that leads to a hyperactive delirium. Alcohol and benzodiazepine withdrawal should be considered in all cases of delirium because even patients who drink only a few servings of alcohol every day can experience relatively severe withdrawal symptoms upon hospitalization. Metabolic abnormalities such as electrolyte disturbances of sodium, calcium, magnesium, or glucose can cause delirium, and mild derangements can lead to substantial cognitive disturbances in susceptible individuals. Other common metabolic etiologies include liver and renal failure, hypercarbia and hypoxemia, vitamin deficiencies of thiamine and B12, autoimmune disorders including central nervous system (CNS) vasculitis, and endocrinopathies such as thyroid and adrenal disorders. Systemic infections often cause delirium, especially in the elderly. A common scenario involves the development of an acute cognitive decline in the setting of a urinary tract infection in a patient with baseline dementia. Pneumonia, skin infections such as cellulitis, and frank sepsis also lead to delirium. This so-called septic encephalopathy, often seen in the ICU, is probably due to the release of proinflammatory cytokines and their diffuse cerebral effects.
CNS infections such as meningitis, encephalitis, and abscess are less common etiologies of delirium; however, in light of the high mortality rates associated with these conditions when they are not treated quickly, clinicians must always maintain a high index of suspicion. In some susceptible individuals, exposure to the unfamiliar environment of a hospital itself can lead to delirium. This etiology usually occurs as part of a multifactorial delirium and should be considered a diagnosis of exclusion after all other causes have been thoroughly investigated. Many primary prevention and treatment strategies for delirium involve relatively simple methods to address the aspects of the inpatient setting that are most confusing. Cerebrovascular etiologies of delirium are usually due to global hypoperfusion in the setting of systemic hypotension from heart failure, septic shock, dehydration, or anemia. Focal strokes in the right parietal lobe and medial dorsal thalamus rarely can lead to a delirious state. A more common scenario involves a new focal stroke or hemorrhage causing confusion in a patient who has decreased cerebral reserve. In these individuals, it is sometimes difficult to distinguish between cognitive dysfunction resulting from the new neurovascular insult itself and delirium due to the infectious, metabolic, and pharmacologic complications that can accompany hospitalization after stroke. Because a fluctuating course often is seen in delirium, intermittent seizures may be overlooked when one is considering potential etiologies. Both nonconvulsive status epilepticus and recurrent focal or generalized seizures followed by postictal confusion can cause delirium; EEG remains essential for this diagnosis. Seizure activity spreading from an electrical focus in a mass or infarct can explain global cognitive dysfunction caused by relatively small lesions. It is very common for patients to experience delirium at the end of life in palliative care settings. 
This condition, sometimes described as terminal restlessness, must be identified and treated aggressively because it is an important cause of patient and family discomfort at the end of life. It should be remembered that these patients also may be suffering from more common etiologies of delirium such as systemic infection.

A cost-effective approach to the diagnostic evaluation of delirium allows the history and physical examination to guide further tests. No established algorithm for workup will fit all delirious patients due to the staggering number of potential etiologies, but one stepwise approach is detailed in Table 34-3. If a clear precipitant is identified, such as an offending medication, further testing may not be required. If, however, no likely etiology is uncovered with initial evaluation, an aggressive search for an underlying cause should be initiated.

Table 34-3 Stepwise Evaluation of a Patient with Delirium
- History with special attention to medications (including over-the-counter and herbals)
- General physical examination and neurologic examination
- Complete blood count
- Electrolyte panel including calcium, magnesium, phosphorus
- Liver function tests, including albumin
- Renal function tests
- Electrocardiogram
- Arterial blood gas
- Serum and/or urine toxicology screen (perform earlier in young persons)
- Brain imaging with MRI with diffusion and gadolinium (preferred) or CT
- Suspected CNS infection: lumbar puncture after brain imaging
- Suspected seizure-related etiology: electroencephalogram (EEG) (if high suspicion, should be performed immediately)
Second-tier further evaluation
- Vitamin levels: B12, folate, thiamine
- Endocrinologic laboratories: thyroid-stimulating hormone (TSH) and free T4; cortisol
- Serum ammonia
- Sedimentation rate
- Autoimmune serologies: antinuclear antibodies (ANA), complement levels; p-ANCA, c-ANCA; consider paraneoplastic serologies
- Infectious serologies: rapid plasma reagin (RPR); fungal and viral serologies if high suspicion; HIV antibody
- Lumbar puncture (if not already performed)
- Brain MRI with and without gadolinium (if not already performed)
Abbreviations: c-ANCA, cytoplasmic antineutrophil cytoplasmic antibody; CNS, central nervous system; CT, computed tomography; MRI, magnetic resonance imaging; p-ANCA, perinuclear antineutrophil cytoplasmic antibody.

Basic screening labs, including a complete blood count, electrolyte panel, and tests of liver and renal function, should be obtained in all patients with delirium. In elderly patients, screening for systemic infection, including chest radiography, urinalysis and culture, and possibly blood cultures, is important. In younger individuals, serum and urine drug and toxicology screening may be appropriate early in the workup. Additional laboratory tests addressing other autoimmune, endocrinologic, metabolic, and infectious etiologies should be reserved for patients in whom the diagnosis remains unclear after initial testing. Multiple studies have demonstrated that brain imaging in patients with delirium is often unhelpful. If, however, the initial workup is unrevealing, most clinicians quickly move toward imaging of the brain to exclude structural causes. A noncontrast computed tomography (CT) scan can identify large masses and hemorrhages but is otherwise unlikely to help determine an etiology of delirium. The ability of magnetic resonance imaging (MRI) to identify most acute ischemic strokes as well as to provide neuroanatomic detail that gives clues to possible infectious, inflammatory, neurodegenerative, and neoplastic conditions makes it the test of choice.
Because MRI techniques are limited by availability, speed of imaging, patient cooperation, and contraindications, many clinicians begin with CT scanning and proceed to MRI if the etiology of delirium remains elusive. Lumbar puncture (LP) must be obtained immediately after appropriate neuroimaging in all patients in whom CNS infection is suspected. Spinal fluid examination can also be useful in identifying inflammatory and neoplastic conditions. As a result, LP should be considered in any delirious patient with a negative workup. EEG does not have a routine role in the workup of delirium, but it remains invaluable if seizure-related etiologies are considered.

Management of delirium begins with treatment of the underlying inciting factor (e.g., patients with systemic infections should be given appropriate antibiotics, and underlying electrolyte disturbances judiciously corrected). These treatments often lead to prompt resolution of delirium. Blindly targeting the symptoms of delirium pharmacologically only serves to prolong the time patients remain in the confused state and may mask important diagnostic information. Relatively simple methods of supportive care can be highly effective in treating patients with delirium. Reorientation by the nursing staff and family combined with visible clocks, calendars, and outside-facing windows can reduce confusion. Sensory isolation should be prevented by providing glasses and hearing aids to patients who need them. Sundowning can be addressed to a large extent through vigilance to appropriate sleep-wake cycles. During the day, a well-lit room should be accompanied by activities or exercises to prevent napping. At night, a quiet, dark environment with limited interruptions by staff can assure proper rest. These sleep-wake cycle interventions are especially important in the ICU setting as the usual constant 24-h activity commonly provokes delirium.
Attempting to mimic the home environment as much as possible also has been shown to help treat and even prevent delirium. Visits from friends and family throughout the day minimize the anxiety associated with the constant flow of new faces of staff and physicians. Allowing hospitalized patients to have access to home bedding, clothing, and nightstand objects makes the hospital environment less foreign and therefore less confusing. Simple standard nursing practices such as maintaining proper nutrition and volume status as well as managing incontinence and skin breakdown also help alleviate discomfort and resulting confusion. In some instances, patients pose a threat to their own safety or to the safety of staff members, and acute management is required. Bed alarms and personal sitters are more effective and much less disorienting than physical restraints. Chemical restraints should likewise be avoided; when absolutely necessary, very-low-dose typical or atypical antipsychotic medications administered on an as-needed basis can be effective. The recent association of antipsychotic use in the elderly with increased mortality rates underscores the importance of using these medications judiciously and only as a last resort. Benzodiazepines often worsen confusion through their sedative properties. Although many clinicians still use benzodiazepines to treat acute confusion, their use should be limited to cases in which delirium is caused by alcohol or benzodiazepine withdrawal. In light of the high morbidity associated with delirium and the tremendously increased health care costs that accompany it, development of an effective strategy to prevent delirium in hospitalized patients is extremely important. Successful identification of high-risk patients is the first step, followed by initiation of appropriate interventions.
Simple standardized protocols used to manage risk factors for delirium, including sleep-wake cycle reversal, immobility, visual impairment, hearing impairment, sleep deprivation, and dehydration, have been shown to be effective. Recent trials in the ICU have focused both on identifying sedatives, such as dexmedetomidine, that are less likely to lead to delirium in critically ill patients and on developing protocols for daily awakenings, in which infusions of sedative medications are interrupted and the patient is reoriented by the staff. All hospitals and health care systems should work toward decreasing the incidence of delirium.

Chapter 35 Dementia
William W. Seeley, Bruce L. Miller

Dementia, a syndrome with many causes, affects >5 million people in the United States and results in a total annual health care cost between $157 billion and $215 billion. Dementia is defined as an acquired deterioration in cognitive abilities that impairs the successful performance of activities of daily living. Episodic memory, the ability to recall events specific in time and place, is the cognitive function most commonly lost; 10% of persons age >70 years and 20–40% of individuals age >85 years have clinically identifiable memory loss. In addition to memory, dementia may erode other mental faculties, including language, visuospatial, praxis, calculation, judgment, and problem-solving abilities. Neuropsychiatric and social deficits also arise in many dementia syndromes, manifesting as depression, apathy, anxiety, hallucinations, delusions, agitation, insomnia, sleep disturbances, compulsions, or disinhibition. The clinical course may be slowly progressive, as in Alzheimer’s disease (AD); static, as in anoxic encephalopathy; or may fluctuate from day to day or minute to minute, as in dementia with Lewy bodies.
Most patients with AD, the most prevalent form of dementia, begin with episodic memory impairment, although in other dementias, such as frontotemporal dementia, memory loss is not typically a presenting feature. Focal cerebral disorders are discussed in Chap. 36 and illustrated in a video library in Chap. 37e; the pathogenesis of AD and related disorders is discussed in Chap. 448. Dementia syndromes result from the disruption of specific large-scale neuronal networks; the location and severity of synaptic and neuronal loss combine to produce the clinical features (Chap. 36). Behavior, mood, and attention are modulated by ascending noradrenergic, serotonergic, and dopaminergic pathways, whereas cholinergic signaling is critical for attention and memory functions. The dementias differ in the relative neurotransmitter deficit profiles; accordingly, accurate diagnosis guides effective pharmacologic therapy. AD begins in the entorhinal region of the medial temporal lobe, spreads to the hippocampus, and then moves to lateral and posterior temporal and parietal neocortex, eventually causing a more widespread degeneration. Vascular dementia is associated with focal damage in a variable patchwork of cortical and subcortical regions or white matter tracts that disconnect nodes within distributed networks. In keeping with its anatomy, AD typically presents with episodic memory loss accompanied later by aphasia or navigational problems. In contrast, dementias that begin in frontal or subcortical regions, such as frontotemporal dementia (FTD) or Huntington’s disease (HD), are less likely to begin with memory problems and more likely to present with difficulties with judgment, mood, executive control, movement, and behavior. Lesions of frontal-striatal1 pathways produce specific and predictable effects on behavior. The dorsolateral prefrontal cortex has connections with a central band of the caudate nucleus. 
Lesions of either the caudate or dorsolateral prefrontal cortex, or their connecting white matter pathways, may result in executive dysfunction, manifesting as poor organization and planning, decreased cognitive flexibility, and impaired working memory. The lateral orbital frontal cortex connects with the ventromedial caudate, and lesions of this system cause impulsiveness, distractibility, and disinhibition. The anterior cingulate cortex and adjacent medial prefrontal cortex project to the nucleus accumbens, and interruption of this system produces apathy, poverty of speech, emotional blunting, or even akinetic mutism. All corticostriatal systems also include topographically organized projections through the globus pallidus and thalamus, and damage to these nodes can likewise reproduce the clinical syndrome of cortical or striatal injury.

1The striatum comprises the caudate/putamen.

The single strongest risk factor for dementia is increasing age. The prevalence of disabling memory loss increases with each decade over age 50 and is usually associated with the microscopic changes of AD at autopsy. Yet some centenarians have intact memory function and no evidence of clinically significant dementia. Whether dementia is an inevitable consequence of normal human aging remains controversial. The many causes of dementia are listed in Table 35-1. The frequency of each condition depends on the age group under study, access of the group to medical care, country of origin, and perhaps racial or ethnic background.
AD is the most common cause of dementia in Western countries, accounting for more than half of all patients. Vascular disease is considered the second most frequent cause of dementia and is particularly common in elderly patients or populations with limited access to medical care, where vascular risk factors are undertreated. Often, vascular brain injury is mixed with neurodegenerative disorders, making it difficult, even for the neuropathologist, to estimate the contribution of cerebrovascular disease to the cognitive disorder in an individual patient. Dementias associated with Parkinson’s disease (PD) (Chap. 449) are common and may develop years after onset of a parkinsonian disorder, as seen with PD-related dementia (PDD), or can occur concurrently with or preceding the motor syndrome, as in dementia with Lewy bodies (DLB). In patients under the age of 65, FTD rivals AD as the most common cause of dementia. Chronic intoxications, including those resulting from alcohol and prescription drugs, are an important and often treatable cause of dementia. Other disorders listed in Table 35-1 are uncommon but important because many are reversible. The classification of dementing illnesses into reversible and irreversible disorders is a useful approach to differential diagnosis.

[Table 35-1, Causes of Dementia (fragment; Most Common Causes and Less Common Causes). Recoverable entries among the less common causes include: thiamine (B1) deficiency (Wernicke’s encephalopathy); vitamin B12 deficiency (subacute combined degeneration); progressive multifocal leukoencephalopathy; tuberculous, fungal, and protozoal infections; Whipple’s disease; drug, medication, and narcotic poisoning; heavy metal intoxication; organic toxins; multiple sclerosis; adult Down’s syndrome; ALS-parkinsonism-dementia complex of Guam; prion diseases (Creutzfeldt-Jakob and Gerstmann-Sträussler-Scheinker); sarcoidosis; vasculitis; CADASIL; acute intermittent porphyria; and metabolic disorders (e.g., Wilson’s and Leigh’s diseases, leukodystrophies, lipid storage diseases, mitochondrial mutations).]
When effective treatments for the neurodegenerative conditions emerge, this dichotomy will become obsolete. In a study of 1000 persons attending a memory disorders clinic, 19% had a potentially reversible cause of the cognitive impairment and 23% had a potentially reversible concomitant condition that may have contributed to the patient’s impairment. The three most common potentially reversible diagnoses were depression, normal pressure hydrocephalus (NPH), and alcohol dependence; medication side effects are also common and should be considered in every patient (Table 35-1). Subtle cumulative decline in episodic memory is a common part of aging. This frustrating experience, often the source of jokes and humor, is referred to as benign forgetfulness of the elderly. Benign means that it is not so progressive or serious that it impairs reasonably successful and productive daily functioning, although the distinction between benign and more significant memory loss can be difficult to make. At age 85, the average person is able to learn and recall approximately one-half of the items (e.g., words on a list) that he or she could at age 18. A measurable cognitive problem that does not seriously disrupt daily activities is often referred to as mild cognitive impairment (MCI). Factors that predict progression from MCI to an AD dementia include a prominent memory deficit, family history of dementia, presence of an apolipoprotein ε4 (Apo ε4) allele, small hippocampal volumes, an AD-like signature of cortical atrophy, low cerebrospinal fluid Aβ, and elevated tau or evidence of brain amyloid deposition on positron emission tomography (PET) imaging. The major degenerative dementias include AD, DLB, FTD and related disorders, HD, and prion diseases, including Creutzfeldt-Jakob disease (CJD). 
These disorders are all associated with the abnormal aggregation of a specific protein: Aβ42 and tau in AD; α-synuclein in DLB; tau, TAR DNA-binding protein of 43 kDa (TDP-43), or fused in sarcoma (FUS) in FTD; huntingtin in HD; and misfolded prion protein (PrPSc) in CJD (Table 35-2).

APPROACH TO THE PATIENT: Dementia

Three major issues should be kept at the forefront: (1) What is the best fit for a clinical diagnosis? (2) What component of the dementia syndrome is treatable or reversible? (3) Can the physician help to alleviate the burden on caregivers? A broad overview of the approach to dementia is shown in Table 35-3. The major degenerative dementias can usually be distinguished by the initial symptoms; neuropsychological, neuropsychiatric, and neurologic findings; and neuroimaging features (Table 35-4). The history should concentrate on the onset, duration, and tempo of progression. An acute or subacute onset of confusion may be due to delirium (Chap. 34) and should trigger the search for intoxication, infection, or metabolic derangement. An elderly person with slowly progressive memory loss over several years is likely to suffer from AD. Nearly 75% of patients with AD begin with memory symptoms, but other early symptoms include difficulty with managing money, driving, shopping, following instructions, finding words, or navigating. Personality change, disinhibition, and weight gain or compulsive eating suggest FTD, not AD.
FTD is also suggested by prominent apathy, compulsivity, loss of empathy for others, or progressive loss of speech fluency or single-word comprehension and by a relative sparing of memory and visuospatial abilities. The diagnosis of DLB is suggested by early visual hallucinations; parkinsonism; proneness to delirium or sensitivity to psychoactive medications; rapid eye movement (REM) behavior disorder (RBD; the loss of skeletal muscle paralysis during dreaming); or Capgras syndrome, the delusion that a familiar person has been replaced by an impostor. A history of stroke with irregular stepwise progression suggests vascular dementia. Vascular dementia is also commonly seen in the setting of hypertension, atrial fibrillation, peripheral vascular disease, and diabetes. In patients suffering from cerebrovascular disease, it can be difficult to determine whether the dementia is due to AD, vascular disease, or a mixture of the two because many of the risk factors for vascular dementia, including diabetes, high cholesterol, elevated homocysteine, and low exercise, are also putative risk factors for AD. Moreover, many patients with a major vascular contribution to their dementia lack a history of stepwise decline. Rapid progression with motor rigidity and myoclonus suggests CJD (Chap. 453e). Seizures may indicate strokes or neoplasm but also occur in AD, particularly early-age-of-onset AD. Gait disturbance is common in vascular dementia, PD/DLB, or NPH. A history of high-risk sexual behaviors or intravenous drug use
should trigger a search for central nervous system (CNS) infection, especially HIV or syphilis. A history of recurrent head trauma could indicate chronic subdural hematoma, chronic traumatic encephalopathy (a progressive dementia best characterized in contact sport athletes such as boxers and American football players), intracranial hypotension, or NPH. Subacute onset of severe amnesia and psychosis with mesial temporal T2/fluid-attenuated inversion recovery (FLAIR) hyperintensities on magnetic resonance imaging (MRI) should raise concern for paraneoplastic limbic encephalitis, especially in a long-term smoker or other patients at risk for cancer. Related autoimmune conditions, such as voltage-gated potassium channel (VGKC)- or N-methyl-d-aspartate (NMDA)-receptor antibody-mediated encephalopathy, can present with a similar tempo and imaging signature, with or without characteristic motor manifestations such as myokymia and faciobrachial dystonic seizures (anti-VGKC). Alcohol abuse creates risk for malnutrition and thiamine deficiency. Veganism, bowel irradiation, an autoimmune diathesis, a remote history of gastric surgery, and chronic antihistamine therapy for dyspepsia or gastroesophageal reflux predispose to B12 deficiency. Certain occupations, such as working in a battery or chemical factory, might indicate heavy metal intoxication. Careful review of medication intake, especially for sedatives and analgesics, may raise the issue of chronic drug intoxication. An autosomal dominant family history is found in HD and in familial forms of AD, FTD, DLB, or prion disorders.
A history of mood disorders, the recent death of a loved one, or depressive signs, such as insomnia or weight loss, raise the possibility of depression-related cognitive impairments. A thorough general and neurologic examination is essential to document dementia, to look for other signs of nervous system involvement, and to search for clues suggesting a systemic disease that might be responsible for the cognitive disorder. Typical AD spares motor systems until later in the course. In contrast, FTD patients often develop axial rigidity, supranuclear gaze palsy, or a motor neuron disease reminiscent of amyotrophic lateral sclerosis (ALS). In DLB, the initial symptoms may include the new onset of a parkinsonian syndrome (resting tremor, cogwheel rigidity, bradykinesia, festinating gait), but DLB often starts with visual hallucinations or dementia. Symptoms referable to the lower brainstem (RBD, gastrointestinal or autonomic problems) may arise years or even decades before parkinsonism or dementia. Corticobasal syndrome (CBS) features asymmetric akinesia and rigidity, dystonia, myoclonus, alien limb phenomena, pyramidal signs, and prefrontal deficits such as nonfluent aphasia with or without motor speech impairment, executive dysfunction, apraxia, or a behavioral disorder. Progressive supranuclear palsy (PSP) is associated with unexplained falls, axial rigidity, dysphagia, and vertical gaze deficits. CJD is suggested by the presence of diffuse rigidity, an akinetic-mute state, and prominent, often startle-sensitive myoclonus. Hemiparesis or other focal neurologic deficits suggest vascular dementia or brain tumor. Dementia with a myelopathy and peripheral neuropathy suggests vitamin B12 deficiency. Peripheral neuropathy could also indicate another vitamin deficiency, heavy metal intoxication, thyroid dysfunction, Lyme disease, or vasculitis. Dry, cool skin, hair loss, and bradycardia suggest hypothyroidism. 
Fluctuating confusion associated with repetitive stereotyped movements may indicate ongoing limbic, temporal, or frontal seizures. In the elderly, hearing impairment or visual loss may produce confusion and disorientation misinterpreted as dementia. Profound bilateral sensorineural hearing loss in a younger patient with short stature or myopathy, however, should raise concern for a mitochondrial disorder. Brief screening tools such as the Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA), and Cognistat can be used to capture dementia and follow progression. None of these tests is highly sensitive to early-stage dementia or discriminates between dementia syndromes. The MMSE is a 30-point test of cognitive function, with each correct answer being scored as 1 point. It includes tests in the areas of orientation (e.g., identify season/date/month/year/floor/hospital/town/state/country); registration (e.g., name and restate three objects); recall (e.g., remember the same three objects 5 minutes later); and language (e.g., name pencil and watch; repeat “No ifs, ands, or buts”; follow a three-step command; obey a written command; and write a sentence and copy a design). In most patients with MCI and some with clinically apparent AD, bedside screening tests may be normal, and a more challenging and comprehensive set of neuropsychological tests will be required. When the etiology for the dementia syndrome remains in doubt, a specially tailored evaluation should be performed that includes tasks of working and episodic memory, executive function, language, and visuospatial and perceptual abilities. In AD, the early deficits involve episodic memory, category generation (“name as many animals as you can in 1 minute”), and visuoconstructive ability.
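The MMSE’s purely additive scoring can be illustrated with a brief sketch. This is an illustration only, not a clinical instrument; the domain grouping and per-domain maxima below follow the commonly cited 30-point breakdown (the text above does not enumerate the attention/calculation items), and the function and variable names are hypothetical:

```python
# Minimal sketch of MMSE-style additive scoring (illustrative only, not a
# clinical instrument). Domain maxima follow the commonly cited 30-point
# breakdown; item content is abbreviated in the comments.
MMSE_DOMAIN_MAX = {
    "orientation": 10,           # season/date/month/year; floor/hospital/town/state/country
    "registration": 3,           # name and restate three objects
    "attention_calculation": 5,  # e.g., serial sevens or spelling "world" backward
    "recall": 3,                 # recall the same three objects after a delay
    "language_and_copying": 9,   # naming, repetition, 3-step command, writing, design copy
}

def mmse_total(domain_scores: dict) -> int:
    """Sum per-domain scores (1 point per correct item), bounded by each domain's maximum."""
    total = 0
    for domain, maximum in MMSE_DOMAIN_MAX.items():
        score = domain_scores.get(domain, 0)
        if not 0 <= score <= maximum:
            raise ValueError(f"{domain} score must be between 0 and {maximum}")
        total += score
    return total

# A hypothetical patient who misses two orientation items and two recall items:
scores = {"orientation": 8, "registration": 3, "attention_calculation": 5,
          "recall": 1, "language_and_copying": 9}
print(mmse_total(scores))  # 26 of a possible 30
```

The bounded per-domain tally also makes explicit why the MMSE cannot discriminate between dementia syndromes: only the total is interpreted, not the pattern of domain losses.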
Usually deficits in verbal or visual episodic memory are the first neuropsychological abnormalities detected, and tasks that require the patient to recall a long list of words or a series of pictures after a predetermined delay will demonstrate deficits in most patients. In FTD, the earliest deficits on cognitive testing involve executive control or language (speech or naming) function, but some patients lack either finding despite profound social-emotional deficits. PDD or DLB patients have more severe deficits in visuospatial function but do better on episodic memory tasks than patients with AD. Patients with vascular dementia often demonstrate a mixture of executive control and visuospatial deficits, with prominent psychomotor slowing. In delirium, the most prominent deficits involve attention, working memory, and executive function, making the assessment of other cognitive domains challenging and often uninformative. A functional assessment should also be performed to help the physician determine the day-to-day impact of the disorder on the patient’s memory, community affairs, hobbies, judgment, dressing, and eating. Knowledge of the patient’s functional abilities will help the clinician and the family to organize a therapeutic approach. Neuropsychiatric assessment is important for diagnosis, prognosis, and treatment. In the early stages of AD, mild depressive features, social withdrawal, and irritability or anxiety are the most prominent psychiatric changes, but patients often maintain core social graces into the middle or late stages, when delusions, agitation, and sleep disturbance may emerge. In FTD, dramatic personality change with apathy, overeating, compulsions, disinhibition, euphoria, and loss of empathy are early and common. DLB is associated with visual hallucinations, delusions related to person or place identity, RBD, and excessive daytime sleepiness. Dramatic fluctuations occur not only in cognition but also in arousal. 
Vascular dementia can present with psychiatric symptoms such as depression, anxiety, delusions, disinhibition, or apathy. The choice of laboratory tests in the evaluation of dementia is complex and should be tailored to the individual patient. The physician must take measures to avoid missing a reversible or treatable cause, yet no single treatable etiology is common; thus, a screen must use multiple tests, each of which has a low yield. Cost/benefit ratios are difficult to assess, and many laboratory screening algorithms for dementia discourage multiple tests. Nevertheless, even a test with only a 1–2% positive rate is worth undertaking if the alternative is missing a treatable cause of dementia. Table 35-3 lists most screening tests for dementia. The American Academy of Neurology recommends the routine measurement of a complete blood count, electrolytes, renal and thyroid function, a vitamin B12 level, and a neuroimaging study (computed tomography [CT] or MRI). Neuroimaging studies, especially MRI, help to rule out primary and metastatic neoplasms, locate areas of infarction or inflammation, detect subdural hematomas, and suggest NPH or diffuse white matter disease. They also help to establish a regional pattern of atrophy. Support for the diagnosis of AD includes hippocampal atrophy in addition to posterior-predominant cortical atrophy (Fig. 35-1). Focal frontal, insular, and/or anterior temporal atrophy suggests FTD (Chap. 448). DLB often features less prominent atrophy, with greater involvement of amygdala than hippocampus. In CJD, magnetic resonance (MR) diffusion-weighted imaging reveals restricted diffusion within the cortical ribbon and basal ganglia in most patients. Extensive white matter abnormalities correlate with a vascular etiology (Fig. 35-2). Communicating hydrocephalus with vertex effacement (crowding of dorsal convexity gyri/sulci), gaping Sylvian fissures despite minimal cortical atrophy, and additional features shown in Fig. 35-3 suggest NPH. 
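The screening rationale described above (many individually low-yield tests, each still worth doing) can be made concrete with a small calculation. This sketch assumes, as a simplification, that test results are independent; the 1–2% rates are the figures quoted in the text, the six-test battery mirrors the AAN routine recommendations, and the function name is illustrative:

```python
# Sketch of the dementia screening rationale: each individual test has a low
# positive rate, but a battery of tests has a meaningful combined yield.
# Simplifying assumption: test results are independent.
def battery_yield(per_test_rate: float, n_tests: int) -> float:
    """Probability that at least one of n independent tests is positive."""
    return 1.0 - (1.0 - per_test_rate) ** n_tests

# Six routine screens (CBC, electrolytes, renal function, thyroid function,
# vitamin B12, neuroimaging), each with a 1-2% chance of revealing a
# treatable cause:
print(round(battery_yield(0.01, 6), 3))  # 0.059
print(round(battery_yield(0.02, 6), 3))  # 0.114
```

Even under these modest assumptions, the battery as a whole has a 6–11% chance of uncovering something treatable, which is the argument the text makes against dismissing individually low-yield tests.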
Single-photon emission computed tomography (SPECT) and PET scanning show temporal-parietal hypoperfusion or hypometabolism in AD and frontotemporal deficits in FTD, but these changes often reflect atrophy and can therefore be detected with MRI alone in many patients. Recently, amyloid imaging has shown promise for the diagnosis of AD, and Pittsburgh Compound-B (PiB) (not available outside of research settings) and 18F-AV-45 (florbetapir; approved by the U.S. Food and Drug Administration in 2012) are reliable radioligands for detecting brain amyloid associated with amyloid angiopathy or neuritic plaques of AD (Fig. 35-4). Because these abnormalities can be seen in cognitively normal older persons, however (~25% of individuals at age 65), amyloid imaging may also detect preclinical or incidental AD in patients lacking an AD-like dementia syndrome. Currently, the main clinical value of amyloid imaging is to exclude AD as the likely cause of dementia in patients who have negative scans. Once disease-modifying therapies become available, use of these biomarkers may help to identify treatment candidates before irreversible brain injury has occurred. In the meantime, the significance of detecting brain amyloid in an asymptomatic elder remains a topic of vigorous investigation. Similarly, MRI perfusion and structural/functional connectivity methods are being explored as potential treatment-monitoring strategies.

FIGURE 35-1 Alzheimer’s disease (AD). Axial T1-weighted magnetic resonance images of a healthy 71-year-old (A) and a 64-year-old with AD (C). Note the reduction in medial temporal lobe volume in the patient with AD. Fluorodeoxyglucose positron emission tomography scans of the same individuals (B and D) demonstrate reduced glucose metabolism in the posterior temporoparietal regions bilaterally in AD, a typical finding in this condition. HC, healthy control. (Images courtesy of Gil Rabinovici, University of California, San Francisco, and William Jagust, University of California, Berkeley.)

FIGURE 35-2 Diffuse white matter disease. Axial fluid-attenuated inversion recovery (FLAIR) magnetic resonance image through the lateral ventricles reveals multiple areas of hyperintensity (arrows) involving the periventricular white matter as well as the corona radiata and striatum. Although seen in some individuals with normal cognition, this appearance is more pronounced in patients with dementia of a vascular etiology.

FIGURE 35-3 Normal-pressure hydrocephalus. A. Sagittal T1-weighted magnetic resonance image (MRI) demonstrates dilation of the lateral ventricle and stretching of the corpus callosum (arrows), depression of the floor of the third ventricle (single arrowhead), and enlargement of the aqueduct (double arrowheads). Note the diffuse dilation of the lateral, third, and fourth ventricles with a patent aqueduct, typical of communicating hydrocephalus. B. Axial T2-weighted MRIs demonstrate dilation of the lateral ventricles. This patient underwent successful ventriculoperitoneal shunting.

Lumbar puncture need not be done routinely in the evaluation of dementia, but it is indicated when CNS infection or inflammation is a credible diagnostic possibility. Cerebrospinal fluid (CSF) levels of Aβ42 and tau proteins show differing patterns with the various dementias, and the presence of low Aβ42 and mildly elevated CSF tau is highly suggestive of AD. The routine use of lumbar puncture in the diagnosis of dementia is debated, but the sensitivity and specificity of AD diagnostic measures are not yet high enough to warrant routine use.
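The CSF pattern described above (low Aβ42 together with elevated tau suggesting AD) amounts to a simple two-threshold decision rule. A minimal sketch follows; the cutoff constants are hypothetical placeholders, since real cutoffs are assay- and laboratory-specific, and the function name is illustrative:

```python
# Illustrative sketch of the CSF biomarker pattern described in the text:
# low Abeta42 with elevated tau suggests AD. The cutoff values here are
# hypothetical placeholders; real cutoffs are assay- and lab-specific.
ABETA42_LOW_CUTOFF_PG_ML = 500   # hypothetical; below this counts as "low"
TAU_HIGH_CUTOFF_PG_ML = 350      # hypothetical; above this counts as "elevated"

def csf_pattern_suggests_ad(abeta42_pg_ml: float, tau_pg_ml: float) -> bool:
    """Return True when the low-Abeta42 / elevated-tau pattern is present."""
    return (abeta42_pg_ml < ABETA42_LOW_CUTOFF_PG_ML
            and tau_pg_ml > TAU_HIGH_CUTOFF_PG_ML)

print(csf_pattern_suggests_ad(400, 420))  # True: both criteria met
print(csf_pattern_suggests_ad(650, 420))  # False: Abeta42 not low
```

Requiring both criteria rather than either one is what gives the pattern its suggestive value; either abnormality alone is less specific.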
Formal psychometric testing helps to document the severity of cognitive disturbance, suggest psychogenic causes, and provide a more formal method for following the disease course. Electroencephalogram (EEG) is not routinely used but can help to suggest CJD (repetitive bursts of diffuse high-amplitude sharp waves, or “periodic complexes”) or an underlying nonconvulsive seizure disorder (epileptiform discharges). Brain biopsy (including meninges) is not advised except to diagnose vasculitis, potentially treatable neoplasms, or unusual infections when the diagnosis is uncertain. Systemic disorders with CNS manifestations, such as sarcoidosis, can usually be confirmed through biopsy of lymph node or solid organ rather than brain. MR angiography should be considered when cerebral vasculitis or cerebral venous thrombosis is a possible cause of the dementia. The major goals of dementia management are to treat reversible causes and to provide comfort and support to the patient and caregivers. Treatment of underlying causes includes thyroid replacement for hypothyroidism; vitamin therapy for thiamine or B12 deficiency or for elevated serum homocysteine; antimicrobials for opportunistic infections or antiretrovirals for HIV; ventricular shunting for NPH; or appropriate surgical, radiation, and/or chemotherapeutic treatment for CNS neoplasms. Removal of cognition-impairing drugs or medications is frequently useful. If the patient’s cognitive complaints stem from a psychiatric disorder, vigorous treatment of this condition should seek to eliminate the cognitive complaint or confirm that it persists despite adequate resolution of the mood or anxiety symptoms. Patients with degenerative diseases may also be depressed or anxious, and those aspects of their condition often respond to therapy. Antidepressants, such as selective serotonin reuptake inhibitors (SSRIs) or serotonin-norepinephrine reuptake inhibitors (SNRIs) (Chap. 
465e), which feature anxiolytic properties but few cognitive side effects, provide the mainstay of treatment when necessary. Anticonvulsants are used to control seizures. Levetiracetam may be particularly useful, but there have as yet been no randomized trials for treatment of AD-associated seizures. Agitation, hallucinations, delusions, and confusion are difficult to treat. These behavioral problems represent major causes for nursing home placement and institutionalization. Before treating these behaviors with medications, the clinician should aggressively seek out modifiable environmental or metabolic factors. Hunger, lack of exercise, toothache, constipation, urinary tract or respiratory infection, electrolyte imbalance, and drug toxicity all represent easily correctable causes that can be remedied without psychoactive drugs. Drugs such as phenothiazines and benzodiazepines may ameliorate the behavior problems but have untoward side effects such as sedation, rigidity, dyskinesia, and occasionally paradoxical disinhibition (benzodiazepines). Despite their unfavorable side effect profile, second-generation antipsychotics such as quetiapine (starting dose, 12.5–25 mg daily) can be used for patients with agitation, aggression, and psychosis, although the risk profile for these compounds is significant. When patients do not respond to treatment, it is usually a mistake to advance to higher doses or to use anticholinergic drugs or sedatives (such as barbiturates or benzodiazepines).

FIGURE 35-4 Positron emission tomography (PET) images obtained with the amyloid-imaging agent Pittsburgh Compound-B ([11C]PIB) in a normal control (left); three different patients with mild cognitive impairment (MCI; center); and a patient with mild Alzheimer’s disease (AD; right). Some MCI patients have control-like levels of amyloid, some have AD-like levels of amyloid, and some have intermediate levels. (Images courtesy of William Klunk and Chester Mathis, University of Pittsburgh.)
It is important to recognize and treat depression; treatment can begin with a low dose of an SSRI (e.g., escitalopram, starting dose 5 mg daily, target dose 5–10 mg daily) while monitoring for efficacy and toxicity. Sometimes apathy, visual hallucinations, depression, and other psychiatric symptoms respond to the cholinesterase inhibitors, especially in DLB, obviating the need for other more toxic therapies. Cholinesterase inhibitors are being used to treat AD (donepezil, rivastigmine, galantamine) and PDD (rivastigmine). Recent work has focused on developing antibodies against Aβ42 as a treatment for AD. Although the initial randomized controlled trials failed, there was some evidence for efficacy in the mildest patient groups. Therefore, researchers have begun to focus on patients with very mild disease and asymptomatic individuals at risk for AD, such as those who carry autosomal dominantly inherited genetic mutations or healthy elders with CSF or amyloid imaging biomarker evidence supporting presymptomatic AD. Memantine proves useful when treating some patients with moderate to severe AD; its major benefit relates to decreasing caregiver burden, most likely by decreasing resistance to dressing and grooming support. In moderate to severe AD, the combination of memantine and a cholinesterase inhibitor delayed nursing home placement in several studies, although other studies have not supported the efficacy of adding memantine to the regimen. A proactive strategy has been shown to reduce the occurrence of delirium in hospitalized patients. This strategy includes frequent orientation, cognitive activities, sleep-enhancement measures, vision and hearing aids, and correction of dehydration. Nondrug behavior therapy has an important place in dementia management. The primary goals are to make the patient’s life comfortable, uncomplicated, and safe. Preparing lists, schedules, calendars, and labels can be helpful in the early stages. 
It is also useful to stress familiar routines, walks, and simple physical exercises. For many demented patients, memory for events is worse than their ability to carry out routine activities, and they may still be able to take part in activities such as walking, bowling, dancing, singing, bingo, and golf. Demented patients often object to losing control over familiar tasks such as driving, cooking, and handling finances. Attempts to help or take over may be greeted with complaints, depression, or anger. Hostile responses on the part of the caregiver are counterproductive and sometimes even harmful. Reassurance, distraction, and calm positive statements are more productive in this setting. Eventually, tasks such as finances and driving must be assumed by others, and the patient will conform and adjust. Safety is an important issue that includes not only driving but controlling the kitchen, bathroom, and sleeping area environments, as well as stairways. These areas need to be monitored, supervised, and made as safe as possible. A move to a retirement complex, assisted-living center, or nursing home can initially increase confusion and agitation. Repeated reassurance, reorientation, and careful introduction to the new personnel will help to smooth the process. Providing activities that are known to be enjoyable to the patient can be of considerable benefit. The clinician must pay special attention to frustration and depression among family members and caregivers. Caregiver guilt and burnout are common. Family members often feel overwhelmed and helpless and may vent their frustrations on the patient, each other, and health care providers. Caregivers should be encouraged to take advantage of day-care facilities and respite services. Education and counseling about dementia are important. Local and national support groups, such as the Alzheimer’s Association (www.alz.org), can provide considerable help. 
PART 2 Cardinal Manifestations and Presentation of Diseases

Chapter 36 Aphasia, Memory Loss, and Other Focal Cerebral Disorders

M.-Marsel Mesulam

The cerebral cortex of the human brain contains approximately 20 billion neurons spread over an area of 2.5 m2. The primary sensory and motor areas constitute 10% of the cerebral cortex. The rest is subsumed by modality-selective, heteromodal, paralimbic, and limbic areas collectively known as the association cortex (Fig. 36-1). The association cortex mediates the integrative processes that subserve cognition, emotion, and comportment. A systematic testing of these mental functions is necessary for the effective clinical assessment of the association cortex and its diseases. According to current thinking, there are no centers for “hearing words,” “perceiving space,” or “storing memories.”

FIGURE 36-1 Lateral (top) and medial (bottom) views of the cerebral hemispheres. The numbers refer to the Brodmann cytoarchitectonic designations. Area 17 corresponds to the primary visual cortex, 41–42 to the primary auditory cortex, 1–3 to the primary somatosensory cortex, and 4 to the primary motor cortex. The rest of the cerebral cortex contains association areas. AG, angular gyrus; B, Broca’s area; CC, corpus callosum; CG, cingulate gyrus; DLPFC, dorsolateral prefrontal cortex; FEF, frontal eye fields (premotor cortex); FG, fusiform gyrus; IPL, inferior parietal lobule; ITG, inferior temporal gyrus; LG, lingual gyrus; MPFC, medial prefrontal cortex; MTG, middle temporal gyrus; OFC, orbitofrontal cortex; PHG, parahippocampal gyrus; PPC, posterior parietal cortex; PSC, peristriate cortex; SC, striate cortex; SMG, supramarginal gyrus; SPL, superior parietal lobule; STG, superior temporal gyrus; STS, superior temporal sulcus; TP, temporopolar cortex; W, Wernicke’s area.

Cognitive and behavioral functions (domains) are coordinated by intersecting large-scale neural networks that contain interconnected cortical and subcortical components.
Five anatomically defined large-scale networks are most relevant to clinical practice: (1) a perisylvian network for language, (2) a parietofrontal network for spatial orientation, (3) an occipitotemporal network for face and object recognition, (4) a limbic network for retentive memory, and (5) a prefrontal network for the executive control of cognition and comportment. The areas that are critical for language make up a distributed network located along the perisylvian region of the left hemisphere. One hub, located in the inferior frontal gyrus, is known as Broca’s area. Damage to this region impairs phonology, fluency, and the grammatical structure of sentences. The location of a second hub, known as Wernicke’s area, is less clearly settled but is traditionally thought to include the posterior parts of the temporal lobe. Cerebrovascular accidents that damage this area interfere with the ability to understand spoken or written sentences as well as the ability to express thoughts through meaningful words and statements. These two hubs are interconnected with each other and with surrounding parts of the frontal, parietal, and temporal lobes. Damage to this network gives rise to language impairments known as aphasia. Aphasia should be diagnosed only when there are deficits in the formal aspects of language, such as word finding, word choice, comprehension, spelling, or grammar. Dysarthria and mutism do not by themselves lead to a diagnosis of aphasia. In approximately 90% of right-handers and 60% of left-handers, aphasia occurs only after lesions of the left hemisphere. The clinical examination of language should include the assessment of naming, spontaneous speech, comprehension, repetition, reading, and writing. A deficit of naming (anomia) is the single most common finding in aphasic patients. 
When asked to name a common object, the patient may fail to come up with the appropriate word, may provide a circumlocutious description of the object (“the thing for writing”), or may come up with the wrong word (paraphasia). If the patient offers an incorrect but related word (“pen” for “pencil”), the naming error is known as a semantic paraphasia; if the word approximates the correct answer but is phonetically inaccurate (“plentil” for “pencil”), it is known as a phonemic paraphasia. In most anomias, the patient cannot retrieve the appropriate name when shown an object but can point to the appropriate object when the name is provided by the examiner. This is known as a one-way (or retrieval-based) naming deficit. A two-way (comprehension-based) naming deficit exists if the patient can neither provide nor recognize the correct name. Spontaneous speech is described as “fluent” if it maintains appropriate output volume, phrase length, and melody or as “nonfluent” if it is sparse and halting and average utterance length is below four words. The examiner also should note the integrity of grammar as manifested by word order (syntax), tenses, suffixes, prefixes, plurals, and possessives. Comprehension can be tested by assessing the patient’s ability to follow conversation, asking yes-no questions (“Can a dog fly?” “Does it snow in summer?”), asking the patient to point to appropriate objects (“Where is the source of illumination in this room?”), or asking for verbal definitions of single words. Repetition is assessed by asking the patient to repeat single words, short sentences, or strings of words such as “No ifs, ands, or buts.” The testing of repetition with tongue twisters such as “hippopotamus” and “Irish constabulary” provides a better assessment of dysarthria and palilalia than of aphasia. It is important to make sure that the number of words does not exceed the patient’s attention span.
Otherwise, the failure of repetition becomes a reflection of the narrowed attention span (working memory) rather than an indication of an aphasic deficit. Reading should be assessed for deficits in reading aloud as well as comprehension. Alexia describes an inability to either read aloud or comprehend single words and simple sentences; agraphia (or dysgraphia) is used to describe an acquired deficit in spelling. Aphasias can arise acutely in cerebrovascular accidents (CVAs) or gradually in neurodegenerative diseases. The syndromes listed in Table 36-1 are most applicable to the former group, where gray matter and white matter at the lesion site are abruptly and jointly destroyed. Progressive neurodegenerative diseases can have cellular, laminar, and regional specificity, giving rise to a different set of aphasias that will be described separately. The syndromes outlined below are idealizations and rarely occur in pure form. Wernicke’s Aphasia Comprehension is impaired for spoken and written words and sentences. Language output is fluent but is highly paraphasic and circumlocutious. Paraphasic errors may lead to strings of neologisms, which lead to “jargon aphasia.” Speech contains few substantive nouns. The output is therefore voluminous but uninformative. For example, a patient attempts to describe how his wife accidentally threw away something important, perhaps his dentures: “We don’t need it anymore, she says. And with it when that was downstairs was my teeth-tick … a … den … dentith … my dentist. And they happened to be in that bag … see? …Where my two … two little pieces of dentist that I use … that I … all gone. If she throws the whole thing away … visit some friends of hers and she can’t throw them away.” Gestures and pantomime do not improve communication. The patient may not realize that his or her language is incomprehensible and may appear angry and impatient when the examiner fails to decipher the meaning of a severely paraphasic statement. 
In some patients this type of aphasia can be associated with severe agitation and paranoia. The ability to follow commands aimed at axial musculature may be preserved. The dissociation between the failure to understand simple questions (“What is your name?”) and the preserved ability to rapidly close the eyes, sit up, or roll over when asked to do so is characteristic of Wernicke’s aphasia and helps differentiate it from deafness, psychiatric disease, or malingering. Patients with Wernicke’s aphasia cannot express their thoughts in meaning-appropriate words and cannot decode the meaning of words in any modality of input. This aphasia therefore has expressive as well as receptive components. Repetition, naming, reading, and writing also are impaired. The lesion site most commonly associated with Wernicke’s aphasia is the posterior portion of the language network. An embolus to the inferior division of the middle cerebral artery, to the posterior temporal or angular branches in particular, is the most common etiology (Chap. 446). Intracerebral hemorrhage, head trauma, and neoplasm are other causes of Wernicke’s aphasia. A coexisting right hemianopia or superior quadrantanopia is common, and mild right nasolabial flattening may be found, but otherwise, the examination is often unrevealing. The paraphasic, neologistic speech in an agitated patient with an otherwise unremarkable neurologic examination may lead to the suspicion of a primary psychiatric disorder such as schizophrenia or mania, but the other components characteristic of acquired aphasia and the absence of prior psychiatric disease usually settle the issue. Prognosis for recovery of language function is guarded. Broca’s Aphasia Speech is nonfluent, labored, interrupted by many word-finding pauses, and usually dysarthric. It is impoverished in function words but enriched in meaning-appropriate nouns.
Abnormal word order and the inappropriate deployment of bound morphemes (word endings used to denote tenses, possessives, or plurals) lead to a characteristic agrammatism. Speech is telegraphic and pithy but quite informative. In the following passage, a patient with Broca’s aphasia describes his medical history: “I see … the dotor, dotor sent me … Bosson. Go to hospital. Dotor … kept me beside. Two, tee days, doctor send me home.” Output may be reduced to a grunt or single word (“yes” or “no”), which is emitted with different intonations in an attempt to express approval or disapproval. In addition to fluency, naming and repetition are impaired. Comprehension of spoken language is intact except for syntactically difficult sentences with a passive voice structure or embedded clauses, indicating that Broca’s aphasia is not just an “expressive” or “motor” disorder and that it also may involve a comprehension deficit in decoding syntax. Patients with Broca’s aphasia can be tearful, easily frustrated, and profoundly depressed. Insight into their condition is preserved, in contrast to Wernicke’s aphasia. Even when spontaneous speech is severely dysarthric, the patient may be able to display a relatively normal articulation of words when singing. This dissociation has been used to develop specific therapeutic approaches (melodic intonation therapy) for Broca’s aphasia. Additional neurologic deficits include right facial weakness, hemiparesis or hemiplegia, and a buccofacial apraxia characterized by an inability to carry out motor commands involving oropharyngeal and facial musculature (e.g., patients are unable to demonstrate how to blow out a match or suck through a straw). The cause is most often infarction of Broca’s area (the inferior frontal convolution; “B” in Fig. 36-1) and surrounding anterior perisylvian and insular cortex due to occlusion of the superior division of the middle cerebral artery (Chap. 446). 
Mass lesions, including tumor, intracerebral hemorrhage, and abscess, also may be responsible. When the cause of Broca’s aphasia is stroke, recovery of language function generally peaks within 2 to 6 months, after which time further progress is limited. Speech therapy is more successful than in Wernicke’s aphasia. Conduction Aphasia Speech output is fluent but contains many phonemic paraphasias, comprehension of spoken language is intact, and repetition is severely impaired. Naming elicits phonemic paraphasias, and spelling is impaired. Reading aloud is impaired, but reading comprehension is preserved. The lesion sites spare the functionality of Broca’s and Wernicke’s areas but may induce a disconnection between the two. Occasionally, a transient Wernicke’s aphasia may rapidly resolve into a conduction aphasia. The paraphasic output in conduction aphasia interferes with the ability to express meaning, but this deficit is not nearly as severe as the one displayed by patients with Wernicke’s aphasia. Associated neurologic signs in conduction aphasia vary according to the primary lesion site. Transcortical Aphasias: Fluent and Nonfluent Clinical features of fluent (posterior) transcortical aphasia are similar to those of Wernicke’s aphasia, but repetition is intact. The lesion site disconnects the intact core of the language network from other temporoparietal association areas. Associated neurologic findings may include hemianopia. Cerebrovascular lesions (e.g., infarctions in the posterior watershed zone) and neoplasms that involve the temporoparietal cortex posterior to Wernicke’s area are common causes. The features of nonfluent (anterior) transcortical aphasia are similar to those of Broca’s aphasia, but repetition is intact and agrammatism is less pronounced. The neurologic examination may be otherwise intact, but a right hemiparesis also can exist.
The lesion site disconnects the intact language network from prefrontal areas of the brain and usually involves the anterior watershed zone between anterior and middle cerebral artery territories or the supplementary motor cortex in the territory of the anterior cerebral artery. Global and Isolation Aphasias Global aphasia represents the combined dysfunction of Broca’s and Wernicke’s areas and usually results from strokes that involve the entire middle cerebral artery distribution in the left hemisphere. Speech output is nonfluent, and comprehension of language is severely impaired. Related signs include right hemiplegia, hemisensory loss, and homonymous hemianopia. Isolation aphasia represents a combination of the two transcortical aphasias. Comprehension is severely impaired, and there is no purposeful speech output. The patient may parrot fragments of heard conversations (echolalia), indicating that the neural mechanisms for repetition are at least partially intact. This condition represents the pathologic function of the language network when it is isolated from other regions of the brain. Broca’s and Wernicke’s areas tend to be spared, but there is damage to the surrounding frontal, parietal, and temporal cortex. Lesions are patchy and can be associated with anoxia, carbon monoxide poisoning, or complete watershed zone infarctions. Anomic Aphasia This form of aphasia may be considered the “minimal dysfunction” syndrome of the language network. Articulation, comprehension, and repetition are intact, but confrontation naming, word finding, and spelling are impaired. Word-finding pauses are uncommon, so language output is fluent but paraphasic, circumlocutious, and uninformative. The lesion sites can be anywhere within the left hemisphere language network, including the middle and inferior temporal gyri. Anomic aphasia is the single most common language disturbance seen in head trauma, metabolic encephalopathy, and Alzheimer’s disease.
Pure Word Deafness The most common causes are either bilateral or left-sided middle cerebral artery (MCA) strokes affecting the superior temporal gyrus. The net effect of the underlying lesion is to interrupt the flow of information from the auditory association cortex to the language network. Patients have no difficulty understanding written language and can express themselves well in spoken or written language. They have no difficulty interpreting and reacting to environmental sounds since primary auditory cortex and auditory association areas of the right hemisphere are spared. Because auditory information cannot be conveyed to the language network, however, it cannot be decoded into neural word representations, and the patient reacts to speech as if it were in an alien tongue that cannot be deciphered. Patients cannot repeat spoken language but have no difficulty naming objects. In time, patients with pure word deafness teach themselves lipreading and may appear to have improved. There may be no additional neurologic findings, but agitated paranoid reactions are common in the acute stages. Cerebrovascular lesions are the most common cause. Pure Alexia Without Agraphia This is the visual equivalent of pure word deafness. The lesions (usually a combination of damage to the left occipital cortex and to a posterior sector of the corpus callosum—the splenium) interrupt the flow of visual input into the language network. There is usually a right hemianopia, but the core language network remains unaffected. The patient can understand and produce spoken language, name objects in the left visual hemifield, repeat, and write. However, the patient acts as if illiterate when asked to read even the simplest sentence because the visual information from the written words (presented to the intact left visual hemifield) cannot reach the language network. 
Objects in the left hemifield may be named accurately because they activate nonvisual associations in the right hemisphere, which in turn can access the language network through transcallosal pathways anterior to the splenium. Patients with this syndrome also may lose the ability to name colors, although they can match colors. This is known as a color anomia. The most common etiology of pure alexia is a vascular lesion in the territory of the posterior cerebral artery or an infiltrating neoplasm in the left occipital cortex that involves the optic radiations as well as the crossing fibers of the splenium. Because the posterior cerebral artery also supplies medial temporal components of the limbic system, a patient with pure alexia also may experience an amnesia, but this is usually transient because the limbic lesion is unilateral. Apraxia and Aphemia Apraxia designates a complex motor deficit that cannot be attributed to pyramidal, extrapyramidal, cerebellar, or sensory dysfunction and that does not arise from the patient’s failure to understand the nature of the task. Apraxia of speech is used to designate articulatory abnormalities in the duration, fluidity, and stress of syllables that make up words. Intoning the words may improve articulation. It can arise with CVAs in the posterior part of Broca’s area or in the course of frontotemporal lobar degeneration (FTLD) with tauopathy. Aphemia is a severe form of acute speech apraxia that presents with severely impaired fluency (often mutism). Recovery is the rule and involves an intermediate stage of hoarse whispering. Writing, reading, and comprehension are intact, and so this is not a true aphasic syndrome. CVAs in parts of Broca’s area or subcortical lesions that undercut its connections with other parts of the brain may be present. Occasionally, the lesion site is on the medial aspects of the frontal lobes and may involve the supplementary motor cortex of the left hemisphere. 
Ideomotor apraxia is diagnosed when commands to perform a specific motor act (“cough,” “blow out a match”) or pantomime the use of a common tool (a comb, hammer, straw, or toothbrush) in the absence of the real object cannot be followed. The patient’s ability to comprehend the command is ascertained by demonstrating multiple movements and establishing that the correct one can be recognized. Some patients with this type of apraxia can imitate the appropriate movement (when it is demonstrated by the examiner) and show no impairment when handed the real object, indicating that the sensorimotor mechanisms necessary for the movement are intact. Some forms of ideomotor apraxia represent a disconnection of the language network from pyramidal motor systems so that commands to execute complex movements are understood but cannot be conveyed to the appropriate motor areas. Buccofacial apraxia involves apraxic deficits in movements of the face and mouth. Limb apraxia encompasses apraxic deficits in movements of the arms and legs. Ideomotor apraxia almost always is caused by lesions in the left hemisphere and is commonly associated with aphasic syndromes, especially Broca’s aphasia and conduction aphasia. Because the handling of real objects is not impaired, ideomotor apraxia by itself causes no major limitation of daily living activities. Patients with lesions of the anterior corpus callosum can display ideomotor apraxia confined to the left side of the body, a sign known as sympathetic dyspraxia. A severe form of sympathetic dyspraxia, known as the alien hand syndrome, is characterized by additional features of motor disinhibition on the left hand. Ideational apraxia refers to a deficit in the sequencing of goal-directed movements in patients who have no difficulty executing the individual components of the sequence. 
For example, when the patient is asked to pick up a pen and write, the sequence of uncapping the pen, placing the cap at the opposite end, turning the point toward the writing surface, and writing may be disrupted, and the patient may be seen trying to write with the wrong end of the pen or even with the removed cap. These motor sequencing problems usually are seen in the context of confusional states and dementias rather than focal lesions associated with aphasic conditions. Limb-kinetic apraxia involves clumsiness in the use of tools or objects that cannot be attributed to sensory, pyramidal, extrapyramidal, or cerebellar dysfunction. This condition can emerge in the context of focal premotor cortex lesions or corticobasal degeneration. Gerstmann’s Syndrome The combination of acalculia (impairment of simple arithmetic), dysgraphia (impaired writing), finger anomia (an inability to name individual fingers such as the index and thumb), and right-left confusion (an inability to tell whether a hand, foot, or arm of the patient or examiner is on the right or left side of the body) is known as Gerstmann’s syndrome. In making this diagnosis, it is important to establish that the finger and left-right naming deficits are not part of a more generalized anomia and that the patient is not otherwise aphasic. When Gerstmann’s syndrome arises acutely and in isolation, it is commonly associated with damage to the inferior parietal lobule (especially the angular gyrus) in the left hemisphere. Pragmatics and Prosody Pragmatics refers to aspects of language that communicate attitude, affect, and the figurative rather than literal aspects of a message (e.g., “green thumb” does not refer to the actual color of the finger). One component of pragmatics, prosody, refers to variations of melodic stress and intonation that influence attitude and the inferential aspect of verbal messages.
For example, the two statements “He is clever.” and “He is clever?” contain an identical word choice and syntax but convey vastly different messages because of differences in the intonation with which the statements are uttered. Damage to right hemisphere regions corresponding to Broca’s area impairs the ability to introduce meaning-appropriate prosody into spoken language. The patient produces grammatically correct language with accurate word choice, but the statements are uttered in a monotone that interferes with the ability to convey the intended stress and affect. Patients with this type of aprosodia give the mistaken impression of being depressed or indifferent. Other aspects of pragmatics, especially the ability to infer the figurative aspect of a message, become impaired by damage to the right hemisphere or frontal lobes. Subcortical Aphasia Damage to subcortical components of the language network (e.g., the striatum and thalamus of the left hemisphere) also can lead to aphasia. The resulting syndromes contain combinations of deficits in the various aspects of language but rarely fit the specific patterns described in Table 36-1. In a patient with a CVA, an anomic aphasia accompanied by dysarthria or a fluent aphasia with hemiparesis should raise the suspicion of a subcortical lesion site. Progressive Aphasias Aphasias caused by major cerebrovascular accidents start suddenly and display maximal deficits at the onset. These are the “classic” aphasias described above. Aphasias caused by neurodegenerative diseases have an insidious onset and relentless progression. The neuropathology can be selective not only for gray matter but also for specific layers and cell types. The clinico-anatomic patterns are therefore different from those described in Table 36-1. 
CLINICAL PRESENTATION AND DIAGNOSIS OF PRIMARY PROGRESSIVE APHASIA (PPA) Several neurodegenerative syndromes, such as typical Alzheimer-type (amnestic) and frontal-type (behavioral) dementias, can also undermine language as the disease progresses. In these cases, the aphasia is an ancillary component of the overall syndrome. When a neurodegenerative language disorder arises in relative isolation and becomes the primary concern that brings the patient to medical attention, a diagnosis of PPA is made. LANGUAGE IN PPA The impairments of language in PPA have slightly different patterns from those seen in CVA-caused aphasias. Three major subtypes of PPA can be recognized. The agrammatic variant is characterized by consistently low fluency and impaired grammar but intact word comprehension. It most closely resembles Broca’s aphasia or anterior transcortical aphasia but usually lacks the right hemiparesis or dysarthria and has more profound impairments of grammar. Peak sites of neuronal loss (gray matter atrophy) include the left inferior frontal gyrus where Broca’s area is located. The neuropathology is usually an FTLD with tauopathy but can also be an atypical form of Alzheimer’s disease (AD) pathology. The semantic variant is characterized by preserved fluency and syntax but poor single-word comprehension and profound two-way naming impairments. This kind of aphasia is not seen with CVAs. It differs from Wernicke’s aphasia or posterior transcortical aphasia because speech is usually informative, repetition is intact, and comprehension of conversation is relatively preserved, as long as the meaning is not too dependent on words that the patient fails to understand. Peak atrophy sites are located in the left anterior temporal lobe, indicating that this part of the brain plays a critical role in the comprehension of words, especially words that denote concrete objects.
The neuropathology is frequently an FTLD with abnormal precipitates of the 43-kDa transactive response DNA-binding protein TDP-43. The logopenic variant is characterized by preserved syntax and comprehension but frequent and severe word-finding pauses, anomia, circumlocutions, and simplifications during spontaneous speech. Peak atrophy sites are located in the temporoparietal junction and posterior temporal lobe, partially overlapping with the traditional location of Wernicke’s area. However, the comprehension impairment of Wernicke’s aphasia is absent, perhaps because the underlying white matter, frequently damaged by cerebrovascular accidents, remains relatively intact in PPA. In contrast to Broca’s aphasia or agrammatic PPA, the interruption of fluency is variable so that speech may appear entirely normal if the patient is allowed to engage in small talk. Logopenic PPA resembles the anomic aphasia of Table 36-1 but usually has longer and more frequent word-finding pauses. Patients may also have poor phrase and word repetition, in which case the aphasia resembles the conduction aphasia in Table 36-1. Of all PPA subtypes, this is the one most commonly associated with the pathology of AD, but FTLD can also be the cause. In addition to these three major subtypes, PPA can also present in the form of pure word deafness or Gerstmann’s syndrome.

Adaptive spatial orientation is subserved by a large-scale network containing three major cortical components. The cingulate cortex provides access to a motivational mapping of the extrapersonal space, the posterior parietal cortex to a sensorimotor representation of salient extrapersonal events, and the frontal eye fields to motor strategies for attentional behaviors (Fig. 36-2). Subcortical components of this network include the striatum and the thalamus.
Damage to this network can undermine the distribution of attention within the extrapersonal space, giving rise to hemispatial neglect, simultanagnosia, and object-finding failures. The integration of egocentric (self-centered) with allocentric (object-centered) coordinates can also be disrupted, giving rise to impairments in route finding, the ability to avoid obstacles, and the ability to dress. Contralesional hemispatial neglect represents one outcome of damage to the cortical or subcortical components of this network. The traditional view that hemispatial neglect always denotes a parietal lobe lesion is inaccurate. According to one model of spatial cognition, the right hemisphere directs attention within the entire extrapersonal space, whereas the left hemisphere directs attention mostly within the contralateral right hemispace. Consequently, left hemisphere lesions do not give rise to much contralesional neglect because the global attentional mechanisms of the right hemisphere can compensate for the loss of the contralaterally directed attentional functions of the left hemisphere. Right hemisphere lesions, however, give rise to severe contralesional left hemispatial neglect because the unaffected left hemisphere does not contain ipsilateral attentional mechanisms. This model is consistent with clinical experience, which shows that contralesional neglect is more common, more severe, and longer lasting after damage to the right hemisphere than after damage to the left hemisphere. Severe neglect for the right hemispace is rare, even in left-handers with left hemisphere lesions.

FIGURE 36-2 Functional magnetic resonance imaging of language and spatial attention in neurologically intact subjects. The red and black areas show regions of task-related significant activation. (Top) The subjects were asked to determine if two words were synonymous. This language task led to the simultaneous activation of the two epicenters of the language network, Broca’s area (B) and Wernicke’s area (W). The activations are exclusively in the left hemisphere. (Bottom) The subjects were asked to shift spatial attention to a peripheral target. This task led to the simultaneous activation of the three epicenters of the attentional network: the posterior parietal cortex (P), the frontal eye fields (F), and the cingulate gyrus (CG). The activations are predominantly in the right hemisphere. (Courtesy of Darren Gitelman, MD; with permission.)

Clinical Examination Patients with severe neglect may fail to dress, shave, or groom the left side of the body; fail to eat food placed on the left side of the tray; and fail to read the left half of sentences. When asked to copy a simple line drawing, the patient fails to copy detail on the left, and when the patient is asked to write, there is a tendency to leave an unusually wide margin on the left. Two bedside tests that are useful in assessing neglect are simultaneous bilateral stimulation and visual target cancellation. In the former, the examiner provides either unilateral or simultaneous bilateral stimulation in the visual, auditory, and tactile modalities. After right hemisphere injury, patients who have no difficulty detecting unilateral stimuli on either side experience the bilaterally presented stimulus as coming only from the right. This phenomenon is known as extinction and is a manifestation of the sensory-representational aspect of hemispatial neglect. In the target detection task, targets (e.g., A’s) are interspersed with foils (e.g., other letters of the alphabet) on a 21.5- to 28.0-cm (8.5- to 11-in.) sheet of paper, and the patient is asked to circle all the targets. A failure to detect targets on the left is a manifestation of the exploratory (motor) deficit in hemispatial neglect (Fig. 36-3A).
Hemianopia is not by itself sufficient to cause the target detection failure because the patient is free to turn the head and eyes to the left. Target detection failures therefore reflect a distortion of spatial attention, not just of sensory input. Some patients with neglect also may deny the existence of hemiparesis and may even deny ownership of the paralyzed limb, a condition known as anosognosia.

CHAPTER 36 Aphasia, Memory Loss, and Other Focal Cerebral Disorders

FIGURE 36-3 A. A 47-year-old man with a large frontoparietal lesion in the right hemisphere was asked to circle all the A's. Only targets on the right are circled. This is a manifestation of left hemispatial neglect. B. A 70-year-old woman with a 2-year history of degenerative dementia was able to circle most of the small targets but ignored the larger ones. This is a manifestation of simultanagnosia.

BÁLINT'S SYNDROME, SIMULTANAGNOSIA, DRESSING APRAXIA, CONSTRUCTION APRAXIA, AND ROUTE FINDING

Bilateral involvement of the network for spatial attention, especially its parietal components, leads to a state of severe spatial disorientation known as Bálint's syndrome. Bálint's syndrome involves deficits in the orderly visuomotor scanning of the environment (oculomotor apraxia), accurate manual reaching toward visual targets (optic ataxia), and the ability to integrate visual information in the center of gaze with more peripheral information (simultanagnosia). A patient with simultanagnosia "misses the forest for the trees." For example, a patient who is shown a table lamp and asked to name the object may look at its circular base and call it an ashtray. Some patients with simultanagnosia report that objects they look at may vanish suddenly, probably indicating an inability to look back at the original point of gaze after brief saccadic displacements. Movement and distracting stimuli greatly exacerbate the difficulties of visual perception.
Simultanagnosia can occur without the other two components of Bálint’s syndrome. A modification of the letter cancellation task described above can be used for the bedside diagnosis of simultanagnosia. In this modification, some of the targets (e.g., A’s) are made to be much larger than the others (7.5 to 10 cm vs 2.5 cm [3 to 4 in. vs 1 in.] in height), and all targets are embedded among foils. Patients with simultanagnosia display a counterintuitive but characteristic tendency to miss the larger targets (Fig. 36-3B). This occurs because the information needed for the identification of the larger targets cannot be confined to the immediate line of gaze and requires the integration of visual information across multiple fixation points. The greater difficulty in the detection of the larger targets also indicates that poor acuity is not responsible for the impairment of visual function and that the problem is central rather than peripheral. The test shown in Fig. 36-3B is not by itself sufficient to diagnose simultanagnosia because some patients with a frontal network syndrome may omit the large letters, perhaps because they lack the mental flexibility needed to realize that the two types of targets are symbolically identical despite being superficially different. Bilateral parietal lesions can impair the integration of egocentric with allocentric spatial coordinates. One manifestation is dressing apraxia. A patient with this condition is unable to align the body axis with the axis of the garment and can be seen struggling as he or she holds a coat from its bottom or extends his or her arm into a fold of the garment rather than into its sleeve. Lesions that involve the posterior parietal cortex also lead to severe difficulties in copying simple line drawings. This is known as a construction apraxia and is much more severe if the lesion is in the right hemisphere. 
In some patients with right hemisphere lesions, the drawing difficulties are confined to the left side of the figure and represent a manifestation of hemispatial neglect; in others, there is a more universal deficit in reproducing contours and three-dimensional perspective. Impairments of route finding can be included in this group of disorders, which reflect an inability to orient the self with respect to external objects and landmarks.

Causes of Spatial Disorientation

Cerebrovascular lesions and neoplasms in the right hemisphere are common causes of hemispatial neglect. Depending on the site of the lesion, a patient with neglect also may have hemiparesis, hemihypesthesia, and hemianopia on the left, but these are not invariant findings. The majority of these patients display considerable improvement of hemispatial neglect, usually within the first several weeks. Bálint's syndrome, dressing apraxia, and route finding impairments are more likely to result from bilateral dorsal parietal lesions; common settings for acute onset include watershed infarction between the middle and posterior cerebral artery territories, hypoglycemia, and sagittal sinus thrombosis. A progressive form of spatial disorientation, known as the posterior cortical atrophy syndrome, most commonly represents a variant of AD with unusual concentrations of neurofibrillary degeneration in the parieto-occipital cortex and the superior colliculus. The patient displays a progressive hemispatial neglect or Bálint's syndrome, usually accompanied by dressing and construction apraxia. The corticobasal syndrome, which can be caused by AD or FTLD pathology, can also lead to a progressive left hemineglect syndrome. Both syndromes can impair route finding.

A patient with prosopagnosia cannot recognize familiar faces, including, sometimes, the reflection of his or her own face in the mirror.
This is not a perceptual deficit because prosopagnosic patients easily can tell whether two faces are identical. Furthermore, a prosopagnosic patient who cannot recognize a familiar face by visual inspection alone can use auditory cues to reach appropriate recognition if allowed to listen to the person’s voice. The deficit in prosopagnosia is therefore modality-specific and reflects the existence of a lesion that prevents the activation of otherwise intact multimodal templates by relevant visual input. Prosopagnosic patients characteristically have no difficulty with the generic identification of a face as a face or a car as a car, but may not recognize the identity of an individual face or the make of an individual car. This reflects a visual recognition deficit for proprietary features that characterize individual members of an object class. When recognition problems become more generalized and extend to the generic identification of common objects, the condition is known as visual object agnosia. A patient with anomia cannot name the object but can describe its use. In contrast, a patient with visual agnosia is unable either to name a visually presented object or to describe its use. Face and object recognition disorders also can result from the simultanagnosia of Bálint’s syndrome, in which case they are known as apperceptive agnosias as opposed to the associative agnosias that result from inferior temporal lobe lesions. The characteristic lesions in prosopagnosia and visual object agnosia of acute onset consist of bilateral infarctions in the territory of the posterior cerebral arteries. Associated deficits can include visual field defects (especially superior quadrantanopias) and a centrally based color blindness known as achromatopsia. Rarely, the responsible lesion is unilateral. In such cases, prosopagnosia is associated with lesions in the right hemisphere, and object agnosia with lesions in the left. 
Degenerative diseases of anterior and inferior temporal cortex can cause progressive associative prosopagnosia and object agnosia. The combination of progressive associative agnosia and a fluent aphasia is known as semantic dementia. Patients with semantic dementia fail to recognize faces and objects and cannot understand the meaning of words denoting objects. This needs to be differentiated from the semantic type of PPA, in which there is severe impairment in understanding words that denote objects and in naming faces and objects but a relative preservation of face and object recognition.

Limbic and paralimbic areas (such as the hippocampus, amygdala, and entorhinal cortex), the anterior and medial nuclei of the thalamus, the medial and basal parts of the striatum, and the hypothalamus collectively constitute a distributed network known as the limbic system. The behavioral affiliations of this network include the coordination of emotion, motivation, autonomic tone, and endocrine function. An additional area of specialization for the limbic network and the one that is of most relevance to clinical practice is that of declarative (explicit) memory for recent episodes and experiences. A disturbance in this function is known as an amnestic state. In the absence of deficits in motivation, attention, language, or visuospatial function, the clinical diagnosis of a persistent global amnestic state is always associated with bilateral damage to the limbic network, usually within the hippocampo-entorhinal complex or the thalamus. Damage to the limbic network does not necessarily destroy memories but interferes with their conscious recall in coherent form. The individual fragments of information remain preserved despite the limbic lesions and can sustain what is known as implicit memory.
For example, patients with amnestic states can acquire new motor or perceptual skills even though they may have no conscious knowledge of the experiences that led to the acquisition of these skills. The memory disturbance in the amnestic state is multimodal and includes retrograde and anterograde components. The retrograde amnesia involves an inability to recall experiences that occurred before the onset of the amnestic state. Relatively recent events are more vulnerable to retrograde amnesia than are more remote and more extensively consolidated events. A patient who comes to the emergency room complaining that he or she cannot remember his or her identity but can remember the events of the previous day almost certainly does not have a neurologic cause of memory disturbance. The second and most important component of the amnestic state is the anterograde amnesia, which indicates an inability to store, retain, and recall new knowledge. Patients with amnestic states cannot remember what they ate a few hours ago or the details of an important event they may have experienced in the recent past. In the acute stages, there also may be a tendency to fill in memory gaps with inaccurate, fabricated, and often implausible information. This is known as confabulation. Patients with the amnestic syndrome forget that they forget and tend to deny the existence of a memory problem when questioned. Confabulation is more common in cases where the underlying lesion also interferes with parts of the frontal network, as in the case of the Wernicke-Korsakoff syndrome or traumatic head injury. A patient with an amnestic state is almost always disoriented, especially to time, and has little knowledge of current news. The anterograde component of an amnestic state can be tested with a list of four to five words read aloud by the examiner up to five times or until the patient can immediately repeat the entire list without an intervening delay.
The next phase of the recall occurs after a period of 5 to 10 min during which the patient is engaged in other tasks. Amnestic patients fail this phase of the task and may even forget that they were given a list of words to remember. Accurate recognition of the words by multiple choice in a patient who cannot recall them indicates a less severe memory disturbance that affects mostly the retrieval stage of memory. The retrograde component of an amnestic state can be assessed with questions related to autobiographical or historic events. The anterograde component of amnestic states is usually much more prominent than the retrograde component. In rare instances, occasionally associated with temporal lobe epilepsy or herpes simplex encephalitis, the retrograde component may dominate. Confusional states caused by toxic-metabolic encephalopathies and some types of frontal lobe damage lead to secondary memory impairments, especially at the stages of encoding and retrieval, even in the absence of limbic lesions. This sort of memory impairment can be differentiated from the amnestic state by the presence of additional impairments in the attention-related tasks described below in the section on the frontal lobes.

CAUSES, INCLUDING ALZHEIMER'S DISEASE

Neurologic diseases that give rise to an amnestic state include tumors (of the sphenoid wing, posterior corpus callosum, thalamus, or medial temporal lobe), infarctions (in the territories of the anterior or posterior cerebral arteries), head trauma, herpes simplex encephalitis, Wernicke-Korsakoff encephalopathy, paraneoplastic limbic encephalitis, and degenerative dementias such as AD and Pick's disease. The one common denominator of all these diseases is the presence of bilateral lesions within one or more components in the limbic network. Occasionally, unilateral left-sided hippocampal lesions can give rise to an amnestic state, but the memory disorder tends to be transient.
Depending on the nature and distribution of the underlying neurologic disease, the patient also may have visual field deficits, eye movement limitations, or cerebellar findings. AD and its prodromal state of mild cognitive impairment (MCI) are the most common causes of progressive memory impairments. The predilection of the entorhinal cortex and hippocampus for early neurofibrillary degeneration by typical AD pathology is responsible for the initially selective impairment of episodic memory. In time, additional impairments in language, attention, and visuospatial skills emerge as the neurofibrillary degeneration spreads to additional neocortical areas.

Transient global amnesia is a distinctive syndrome usually seen in late middle age. Patients become acutely disoriented and repeatedly ask who they are, where they are, and what they are doing. The spell is characterized by anterograde amnesia (inability to retain new information) and a retrograde amnesia for relatively recent events that occurred before the onset. The syndrome usually resolves within 24 to 48 h and is followed by the filling in of the period affected by the retrograde amnesia, although there is persistent loss of memory for the events that occurred during the ictus. Recurrences are noted in approximately 20% of patients. Migraine, temporal lobe seizures, and perfusion abnormalities in the posterior cerebral territory have been postulated as causes of transient global amnesia. The absence of associated neurologic findings occasionally may lead to the incorrect diagnosis of a psychiatric disorder.

The frontal lobes can be subdivided into motor-premotor, dorsolateral prefrontal, medial prefrontal, and orbitofrontal components. The terms frontal lobe syndrome and prefrontal cortex refer only to the last three of these four components. These are the parts of the cerebral cortex that show the greatest phylogenetic expansion in primates, especially in humans.
The dorsolateral prefrontal, medial prefrontal, and orbitofrontal areas, along with the subcortical structures with which they are interconnected (i.e., the head of the caudate and the dorsomedial nucleus of the thalamus), collectively make up a large-scale network that coordinates exceedingly complex aspects of human cognition and behavior. The prefrontal network plays an important role in behaviors that require multitasking and the integration of thought with emotion. Cognitive operations impaired by prefrontal cortex lesions often are referred to as "executive functions." The most common clinical manifestations of damage to the prefrontal network take the form of two relatively distinct syndromes. In the frontal abulic syndrome, the patient shows a loss of initiative, creativity, and curiosity and displays a pervasive emotional blandness, apathy, and lack of empathy. In the frontal disinhibition syndrome, the patient becomes socially disinhibited and shows severe impairments of judgment, insight, foresight, and the ability to mind rules of conduct. The dissociation between intact intellectual function and a total lack of even rudimentary common sense is striking. Despite the preservation of all essential memory functions, the patient cannot learn from experience and continues to display inappropriate behaviors without appearing to feel emotional pain, guilt, or regret when those behaviors repeatedly lead to disastrous consequences. The impairments may emerge only in real-life situations when behavior is under minimal external control and may not be apparent within the structured environment of the medical office. Testing judgment by asking patients what they would do if they detected a fire in a theater or found a stamped and addressed envelope on the road is not very informative because patients who answer these questions wisely in the office may still act very foolishly in real-life settings.
The physician must therefore be prepared to make a diagnosis of frontal lobe disease based on historic information alone even when the mental state is quite intact in the office examination. The emergence of developmentally primitive reflexes, also known as frontal release signs, such as grasping (elicited by stroking the palm) and sucking (elicited by stroking the lips), is seen primarily in patients with large structural lesions that extend into the premotor components of the frontal lobes or in the context of metabolic encephalopathies. The vast majority of patients with prefrontal lesions and frontal lobe behavioral syndromes do not display these reflexes. Damage to the frontal lobe disrupts a variety of attention-related functions, including working memory (the transient online holding and manipulation of information), concentration span, the scanning and retrieval of stored information, the inhibition of immediate but inappropriate responses, and mental flexibility. Digit span (which should be seven forward and five reverse) is decreased, reflecting poor working memory; the recitation of the months of the year in reverse order (which should take less than 15 s) is slowed as another indication of poor working memory; and the number of words starting with the letter a, f, or s that can be generated in 1 min (normally ≥12 per letter) is diminished even in nonaphasic patients, indicating an impairment in the ability to search and retrieve information from long-term stores. In "go–no go" tasks (where the instruction is to raise the finger upon hearing one tap but keep it still upon hearing two taps), the patient shows a characteristic inability to inhibit the response to the "no go" stimulus. Mental flexibility (tested by the ability to shift from one criterion to another in sorting or matching tasks) is impoverished; distractibility by irrelevant stimuli is increased; and there is a pronounced tendency for impersistence and perseveration.
The ability for abstracting similarities and interpreting proverbs is also undermined. The attentional deficits disrupt the orderly registration and retrieval of new information and lead to secondary memory deficits. The distinction of the underlying neural mechanisms is illustrated by the observation that severely amnestic patients who cannot remember events that occurred a few minutes ago may have intact if not superior working memory capacity as shown in tests of digit span.

CAUSES: TRAUMA, NEOPLASM, AND FRONTOTEMPORAL DEMENTIA

The abulic syndrome tends to be associated with damage in dorsolateral or dorsomedial prefrontal cortex, and the disinhibition syndrome with damage in orbitofrontal or ventromedial cortex. These syndromes tend to arise almost exclusively after bilateral lesions. Unilateral lesions confined to the prefrontal cortex may remain silent until the pathology spreads to the other side; this explains why thromboembolic CVA is an unusual cause of the frontal lobe syndrome. Common settings for frontal lobe syndromes include head trauma, ruptured aneurysms, hydrocephalus, tumors (including metastases, glioblastoma, and falx or olfactory groove meningiomas), and focal degenerative diseases. A major clinical form of FTLD known as the behavioral variant of frontotemporal dementia (bvFTD) causes a progressive frontal lobe syndrome. The behavioral changes can range from apathy to shoplifting, compulsive gambling, sexual indiscretions, remarkable lack of common sense, new ritualistic behaviors, and alterations in dietary preferences, usually leading to increased taste for sweets or rigid attachment to specific food items. In many patients with AD, neurofibrillary degeneration eventually spreads to prefrontal cortex and gives rise to components of the frontal lobe syndrome, but almost always on a background of severe memory impairment.
Rarely, the bvFTD syndrome can arise in isolation in the context of an atypical form of AD pathology. Lesions in the caudate nucleus or in the dorsomedial nucleus of the thalamus (subcortical components of the prefrontal network) also can produce a frontal lobe syndrome. This is one reason why the changes in mental state associated with degenerative basal ganglia diseases such as Parkinson's disease and Huntington's disease display components of the frontal lobe syndrome. Bilateral multifocal lesions of the cerebral hemispheres, none of which are individually large enough to cause specific cognitive deficits such as aphasia and neglect, can collectively interfere with the connectivity and therefore the integrating (executive) function of the prefrontal cortex. A frontal lobe syndrome is therefore the single most common behavioral profile associated with a variety of bilateral multifocal brain diseases, including metabolic encephalopathy, multiple sclerosis, and vitamin B12 deficiency, among others. Many patients with the clinical diagnosis of a frontal lobe syndrome tend to have lesions that do not involve prefrontal cortex but involve either the subcortical components of the prefrontal network or its connections with other parts of the brain. To avoid making a diagnosis of "frontal lobe syndrome" in a patient with no evidence of frontal cortex disease, it is advisable to use the diagnostic term frontal network syndrome, with the understanding that the responsible lesions can lie anywhere within this distributed network. A patient with frontal lobe disease raises potential dilemmas in differential diagnosis: the abulia and blandness may be misinterpreted as depression, and the disinhibition as idiopathic mania or acting out. Appropriate intervention may be delayed while a treatable tumor keeps expanding.
Brain damage may cause a dissociation between feeling states and their expression so that a patient who may superficially appear jocular could still be suffering from an underlying depression that needs to be treated. If neuroleptics become absolutely necessary for the control of agitation, atypical neuroleptics are preferable because of their lower risk of extrapyramidal side effects. Treatment with neuroleptics in elderly patients with dementia requires weighing the potential benefits against the potentially serious side effects. Spontaneous improvement of cognitive deficits due to acute neurologic lesions is common. It is most rapid in the first few weeks but may continue for up to 2 years, especially in young individuals with single brain lesions. Some of the initial deficits appear to arise from remote dysfunction (diaschisis) in parts of the brain that are interconnected with the site of initial injury. Improvement in these patients may reflect, at least in part, a normalization of the remote dysfunction. Other mechanisms may involve functional reorganization in surviving neurons adjacent to the injury or the compensatory use of homologous structures, e.g., the right superior temporal gyrus with recovery from Wernicke's aphasia. Cognitive rehabilitation procedures have been used in the treatment of higher cortical deficits. There are few controlled studies, but some show a benefit of rehabilitation in the recovery from hemispatial neglect and aphasia. Determining driving competence is challenging, especially in the early stages of dementing diseases. The diagnosis of a neurodegenerative disease is not by itself sufficient for asking the patient to stop driving. An on-the-road driving test and reports from family members may help time decisions related to this very important activity.
Some of the deficits described in this chapter are so complex that they may bewilder not only the patient and family but also the physician. It is imperative to carry out a systematic clinical evaluation to characterize the nature of the deficits and explain them in lay terms to the patient and family. An enlightened approach to patients with damage to the cerebral cortex requires an understanding of the principles that link neural networks to higher cerebral functions in health and disease.

Chapter 37e Primary Progressive Aphasia, Memory Loss, and Other Focal Cerebral Disorders
Maria Luisa Gorno-Tempini, Jennifer Ogar, Joel Kramer, Bruce L. Miller, Gil Rabinovici, Maria Carmela Tartaglia

Language and memory are essential human functions. For the experienced clinician, the recognition of different types of language and memory disturbances often provides essential clues to the anatomic localization and diagnosis of neurologic disorders. This video illustrates classic disorders of language and speech (including the aphasias), memory (the amnesias), and other disorders of cognition that are commonly encountered in clinical practice.

Chapter 38 Sleep Disorders
Charles A. Czeisler, Thomas E. Scammell, Clifford B. Saper

Disturbed sleep is among the most frequent health complaints that physicians encounter. More than one-half of adults in the United States experience at least intermittent sleep disturbance, and only 30% of adult Americans report consistently obtaining a sufficient amount of sleep. The Institute of Medicine has estimated that 50–70 million Americans suffer from a chronic disorder of sleep and wakefulness, which can adversely affect daytime functioning as well as physical and mental health.
Over the last 20 years, the field of sleep medicine has emerged as a distinct specialty in response to the impact of sleep disorders and sleep deficiency on overall health. Given the opportunity, most healthy young adults will sleep 7–8 h per night, although the timing, duration, and internal structure of sleep vary among individuals. In the United States, adults tend to have one consolidated sleep episode each night, although in some cultures sleep may be divided into a mid-afternoon nap and a shortened night sleep. This pattern changes considerably over the life span, as infants and young children sleep considerably more than older people. The stages of human sleep are defined on the basis of characteristic patterns in the electroencephalogram (EEG), the electrooculogram (EOG—a measure of eye-movement activity), and the surface electromyogram (EMG) measured on the chin, neck, and legs. The continuous recording of these electrophysiologic parameters to define sleep and wakefulness is termed polysomnography. Polysomnographic profiles define two basic states of sleep: (1) rapid eye movement (REM) sleep and (2) non–rapid eye movement (NREM) sleep. NREM sleep is further subdivided into three stages: N1, N2, and N3, characterized by increasing arousal threshold and slowing of the cortical EEG. REM sleep is characterized by a low-amplitude, mixed-frequency EEG similar to that of NREM stage N1 sleep. The EOG shows bursts of rapid eye movements similar to those seen during eyes-open wakefulness. EMG activity is absent in nearly all skeletal muscles, reflecting the brainstem-mediated muscle atonia that is characteristic of REM sleep. Normal nocturnal sleep in adults displays a consistent organization from night to night (Fig. 38-1). After sleep onset, sleep usually progresses through NREM stages N1–N3 sleep within 45–60 min. 
NREM stage N3 sleep (also known as slow-wave sleep) predominates in the first third of the night and comprises 15–25% of total nocturnal sleep time in young adults. Sleep deprivation increases the rapidity of sleep onset and both the intensity and amount of slow-wave sleep. The first REM sleep episode usually occurs in the second hour of sleep. NREM and REM sleep alternate through the night with an average period of 90–110 min (the “ultradian” sleep cycle). Overall, in a healthy young adult, REM sleep constitutes 20–25% of total sleep, and NREM stages N1 and N2 constitute 50–60%. Age has a profound impact on sleep state organization (Fig. 38-1). N3 sleep is most intense and prominent during childhood, decreasing with puberty and across the second and third decades of life. N3 sleep declines during adulthood to the point where it may be completely absent in older adults. The remaining NREM sleep becomes more fragmented, with many more frequent awakenings from NREM sleep. It is the increased frequency of awakenings, rather than a decreased ability to fall back asleep, that accounts for the increased wakefulness during the sleep episode in older people. While REM sleep may account for 50% of total sleep time in infancy, the percentage falls off sharply over the first postnatal year as a mature REM-NREM cycle develops; thereafter, REM sleep occupies about 25% of total sleep time. Sleep deprivation degrades cognitive performance, particularly on tests that require continual vigilance. Paradoxically, older people are less vulnerable to the neurobehavioral performance impairment induced by acute sleep deprivation than young adults, maintaining their reaction time and sustaining vigilance with fewer lapses of attention. However, it is more difficult for older adults to obtain recovery sleep after staying awake all night, as the ability to sleep during the daytime declines with age. After sleep deprivation, NREM sleep is generally recovered first, followed by REM sleep. 
However, because REM sleep tends to be most prominent in the second half of the night, sleep truncation (e.g., by an alarm clock) results in selective REM sleep deprivation. This may increase REM sleep pressure to the point where the first REM sleep may occur much earlier in the nightly sleep episode. Because several disorders (see below) also cause sleep fragmentation, it is important that the patient have sufficient sleep opportunity (at least 8 h per night) for several nights prior to a diagnostic polysomnogram. There is growing evidence that sleep deficiency in humans may cause glucose intolerance and contribute to the development of diabetes, obesity, and the metabolic syndrome, as well as impaired immune responses, accelerated atherosclerosis, and increased risk of cardiac disease and stroke. For these reasons, the Institute of Medicine declared sleep deficiency and sleep disorders “an unmet public health problem.” Two principal neural systems govern the expression of sleep and wakefulness. The ascending arousal system, illustrated in green in Fig. 38-2, consists of clusters of nerve cells extending from the upper pons to the hypothalamus and basal forebrain that activate the cerebral cortex, thalamus (which is necessary to relay sensory information to the cortex), and other forebrain regions. The ascending arousal neurons use monoamines (norepinephrine, dopamine, serotonin, and histamine), glutamate, or acetylcholine as neurotransmitters to activate their target neurons. Additional arousal-promoting neurons in the hypothalamus use the peptide neurotransmitter orexin (also known as hypocretin, shown in blue) to reinforce activity in the other arousal cell groups. Damage to the arousal system at the level of the rostral pons and lower midbrain causes coma, indicating that the ascending arousal influence from this level is critical in maintaining wakefulness. 
Damage to the hypothalamic branch of the arousal system causes profound sleepiness, but usually not coma. Specific loss of the orexin neurons produces the sleep disorder narcolepsy (see below). The arousal system is turned off during sleep by inhibitory inputs from cell groups in the sleep-promoting system, shown in Fig. 38-2 in red. These neurons in the preoptic area, lateral hypothalamus, and pons use γ-aminobutyric acid (GABA) to inhibit the arousal system. Many sleep-promoting neurons are themselves inhibited by inputs from the arousal system. This mutual inhibition between the arousal- and sleep-promoting systems forms a neural circuit akin to what electrical engineers call a “flip-flop switch.” A switch of this type tends to promote rapid transitions between the on (wake) and off (sleep) states, while avoiding intermediate states. The relatively rapid transitions between waking and sleeping states, as seen in the EEG of humans and animals, are consistent with this model. Neurons in the ventrolateral preoptic nucleus, one of the key sleep-promoting sites, are lost during normal human aging, correlating with reduced ability to maintain sleep (sleep fragmentation). The ventrolateral preoptic neurons are also injured in Alzheimer’s disease, which may in part account for the poor sleep quality in those patients. Transitions between NREM and REM sleep appear to be governed by a similar switch in the brainstem. GABAergic REM-Off neurons have been identified in the lower midbrain that inhibit REM-On neurons in the upper pons. The REM-On group contains both GABAergic neurons that inhibit the REM-Off group (thus satisfying the conditions for a REM flip-flop switch) as well as glutamatergic neurons that project widely in the central nervous system (CNS) to cause the key phenomena associated with REM sleep. REM-On neurons that project to the medulla and spinal cord excite inhibitory (glycine-containing) interneurons, which in turn hyperpolarize the motor neurons, producing the atonia of REM sleep. REM-On neurons that project to the forebrain may be important in producing dreams. The REM sleep switch receives cholinergic input, which favors transitions to REM sleep, and monoaminergic (norepinephrine and serotonin) input that prevents REM sleep. As a result, drugs that increase monoamine tone (e.g., serotonin or norepinephrine reuptake inhibitors) tend to reduce the amount of REM sleep. Damage to the neurons that promote REM sleep atonia can produce REM sleep behavior disorder, a condition in which patients act out their dreams (see below). 

FIGURE 38-1 Wake-sleep architecture. Alternating stages of wakefulness, the three stages of NREM sleep (N1–N3), and REM sleep (solid bars) occur over the course of the night for representative young and older adult men. Characteristic features of sleep in older people include reduction of N3 slow-wave sleep, frequent spontaneous awakenings, early sleep onset, and early morning awakening. NREM, non–rapid eye movement; REM, rapid eye movement. (From the Division of Sleep and Circadian Disorders, Brigham and Women’s Hospital.) 

FIGURE 38-2 Relationship of drugs for insomnia with wake-sleep systems. The arousal system in the brain (green) includes monoaminergic, glutamatergic, and cholinergic neurons in the brainstem that activate neurons in the hypothalamus, thalamus, basal forebrain, and cerebral cortex. Orexin neurons (blue) in the hypothalamus, which are lost in narcolepsy, reinforce and stabilize arousal by activating other components of the arousal system. The sleep-promoting system (red) consists of GABAergic neurons in the preoptic area, lateral hypothalamus, and brainstem that inhibit the components of the arousal system, thus allowing sleep to occur. Drugs used to treat insomnia include those that block the effects of arousal system neurotransmitters (green and blue) and those that enhance the effects of γ-aminobutyric acid (GABA) produced by the sleep system (red). 
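The mutual inhibition between the arousal- and sleep-promoting systems described above can be illustrated with a toy "flip-flop" model (a didactic sketch only; the equations and constants are invented and have no physiological calibration). Two activities suppress each other nonlinearly, so the intermediate mixed state is unstable and the system settles into one of two extreme states:

```python
def settle(w, s, drive=3.0, dt=0.05, steps=4000):
    """Toy mutual-inhibition ('flip-flop') model: wake-promoting (w) and
    sleep-promoting (s) activities each grow toward a drive that is
    suppressed by the square of the other side's activity, then decay.
    All constants are invented for illustration; Euler integration."""
    for _ in range(steps):
        dw = drive / (1.0 + s * s) - w
        ds = drive / (1.0 + w * w) - s
        w, s = w + dt * dw, s + dt * ds
    return w, s

# A small initial advantage for either side drives the circuit to that
# side's stable state; balanced intermediate activity is unstable.
wake_state = settle(w=1.0, s=0.0)   # wake side wins: high w, low s
sleep_state = settle(w=0.0, s=1.0)  # sleep side wins: high s, low w
```

The all-or-none settling behavior mirrors the rapid wake-sleep transitions seen in the EEG, which is the point of the flip-flop analogy in the text.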
SLEEP-WAKE CYCLES ARE DRIVEN BY HOMEOSTATIC, ALLOSTATIC, AND CIRCADIAN INPUTS 
The gradual increase in sleep drive with prolonged wakefulness, followed by deeper slow-wave sleep and prolonged sleep episodes, demonstrates that there is a homeostatic mechanism that regulates sleep. The neurochemistry of sleep homeostasis is only partially understood, but with prolonged wakefulness, adenosine levels rise in parts of the brain. Adenosine may act through A1 receptors to directly inhibit many arousal-promoting brain regions. In addition, adenosine promotes sleep through A2a receptors; inhibition of these receptors by caffeine is one of the chief ways in which people fight sleepiness. Other humoral factors, such as prostaglandin D2, have also been implicated in this process. Both adenosine and prostaglandin D2 activate the sleep-promoting neurons in the ventrolateral preoptic nucleus. Allostasis is the physiologic response to a threat that cannot be managed by homeostatic mechanisms (e.g., the presence of physical danger or psychological threat). These stress responses can severely impact the need for and ability to sleep. For example, insomnia is very common in patients with anxiety and other psychiatric disorders. Stress-induced insomnia is even more common, affecting most people at some time in their lives. Positron emission tomography (PET) studies in patients with chronic insomnia show hyperactivation of the components of the ascending arousal system, as well as their targets in the limbic system in the forebrain (e.g., cingulate cortex and amygdala). The limbic areas are not only targets for the arousal system, but they also send excitatory outputs back to the arousal system, which contributes to a vicious cycle of anxiety about wakefulness that makes it more difficult to sleep. Approaches to treating insomnia rely on drugs that either inhibit the output of the ascending arousal system (green and blue in Fig. 
38-2) or potentiate the output of the sleep-promoting system (red in Fig. 38-2). However, behavioral approaches (cognitive behavioral therapy and sleep hygiene) that may reduce forebrain limbic activity at bedtime are often equally or more successful. Sleep is also regulated by a strong circadian timing signal, driven by the suprachiasmatic nuclei (SCN) of the hypothalamus, as described below. The SCN sends outputs to key sites in the hypothalamus, which impose 24-h rhythms on a wide range of behaviors and body systems, including the wake-sleep cycle. The wake-sleep cycle is the most evident of many 24-h rhythms in humans. Prominent daily variations also occur in endocrine, thermoregulatory, cardiac, pulmonary, renal, immune, gastrointestinal, and neurobehavioral functions. At the molecular level, endogenous circadian rhythmicity is driven by self-sustaining transcriptional/translational feedback loops. In evaluating daily rhythms in humans, it is important to distinguish between diurnal components passively evoked by periodic environmental or behavioral changes (e.g., the increase in blood pressure and heart rate that occurs upon assumption of the upright posture) and circadian rhythms actively driven by an endogenous oscillatory process (e.g., the circadian variations in adrenal cortisol and pineal melatonin secretion that persist across a variety of environmental and behavioral conditions). While it is now recognized that most cells in the body have circadian clocks that regulate diverse physiologic processes, most of these disparate clocks are unable to maintain the synchronization with each other that is required to produce useful 24-h rhythms aligned with the external light-dark cycle. The neurons in the SCN are interconnected with one another in such a way as to produce a near-24-h synchronous rhythm of neural activity that is then transmitted to the rest of the body. 
Bilateral destruction of the SCN results in a loss of most endogenous circadian rhythms including wake-sleep behavior and rhythms in endocrine and metabolic systems. The genetically determined period of this endogenous neural oscillator, which averages ~24.15 h in humans, is normally synchronized to the 24-h period of the environmental light-dark cycle through direct input from intrinsically photosensitive ganglion cells in the retina to the SCN. Humans are exquisitely sensitive to the resetting effects of light, particularly the shorter wavelengths (~460–500 nm) of the visible spectrum. Small differences in circadian period contribute to variations in diurnal preference in young adults (with the circadian period shorter in those who typically go to bed and rise earlier compared to those who typically go to bed and wake up later), whereas changes in homeostatic sleep regulation may underlie the age-related tendency toward earlier sleep-wake timing. The timing and internal architecture of sleep are directly coupled to the output of the endogenous circadian pacemaker. Paradoxically, the endogenous circadian rhythm for wake propensity peaks just before the habitual bedtime, whereas that of sleep propensity peaks near the habitual wake time. These rhythms are thus timed to oppose the rise of sleep tendency throughout the usual waking day and the decline of sleep propensity during the habitual sleep episode, respectively. Misalignment of the endogenous circadian pacemaker with the desired wake-sleep cycle can, therefore, induce insomnia, decreased alertness, and impaired performance evident in night-shift workers and airline travelers. Polysomnographic staging of sleep correlates with behavioral changes during specific states and stages. During the transitional state (stage N1) between wakefulness and deeper sleep, individuals may respond to faint auditory or visual signals. 
Formation of short-term memories is inhibited at the onset of NREM stage N1 sleep, which may explain why individuals aroused from that transitional sleep stage frequently lack situational awareness. After sleep deprivation, such transitions may intrude upon behavioral wakefulness notwithstanding attempts to remain continuously awake (see “Shift-Work Disorder,” below). Awakenings from REM sleep are associated with recall of vivid dream imagery over 80% of the time, especially later in the night. Imagery may also be reported after NREM sleep interruptions. Certain disorders may occur during specific sleep stages and are described below under “Parasomnias.” These include sleepwalking, night terrors, and enuresis (bed wetting), which occur most commonly in children during deep (N3) NREM sleep, and REM sleep behavior disorder, which occurs mainly among older men who fail to maintain full atonia during REM sleep and often call out, thrash around, or even act out entire dreams. All major physiologic systems are influenced by sleep. Blood pressure and heart rate decrease during NREM sleep, particularly during N3 sleep. During REM sleep, bursts of eye movements are associated with large variations in both blood pressure and heart rate mediated by the autonomic nervous system. Cardiac dysrhythmias may occur selectively during REM sleep. Respiratory function also changes. In comparison to relaxed wakefulness, respiratory rate becomes slower but more regular during NREM sleep (especially N3 sleep) and becomes irregular during bursts of eye movements in REM sleep. Decreases in minute ventilation during NREM sleep are out of proportion to the decrease in metabolic rate, resulting in a slightly higher Pco2. Endocrine function also varies with sleep. N3 sleep is associated with secretion of growth hormone in men, while sleep in general is associated with augmented secretion of prolactin in both men and women. 
Sleep has a complex effect on the secretion of luteinizing hormone (LH): during puberty, sleep is associated with increased LH secretion, whereas sleep in the postpubertal female inhibits LH secretion in the early follicular phase of the menstrual cycle. Sleep onset (and probably N3 sleep) is associated with inhibition of thyroid-stimulating hormone and of the adrenocorticotropic hormone–cortisol axis, an effect that is superimposed on the prominent circadian rhythms in the two systems. The pineal hormone melatonin is secreted predominantly at night in both day- and night-active species, reflecting the direct modulation of pineal activity by a circuitous neural pathway that links the SCN to the sympathetic nervous system, which innervates the pineal gland. Melatonin secretion does not require sleep, but melatonin secretion is inhibited by ambient light, an effect mediated by the neural connection from the retina to the pineal gland via the SCN. Sleep efficiency is highest when the sleep episode coincides with endogenous melatonin secretion. Administration of exogenous melatonin can hasten sleep onset and increase sleep efficiency when administered at a time when endogenous melatonin levels are low, such as in the afternoon or evening or at the desired bedtime in patients with delayed sleep-wake phase disorder, but it does not increase sleep efficiency if administered when endogenous melatonin levels are elevated. This may explain why melatonin is often ineffective in the treatment of patients with primary insomnia. Sleep is accompanied by alterations of thermoregulatory function. NREM sleep is associated with an increase in the firing of warm-responsive neurons in the preoptic area and a fall in body temperature; conversely, skin warming without increasing core body temperature has been found to increase NREM sleep. REM sleep is associated with reduced thermoregulatory responsiveness. 
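A small calculation makes the entrainment figures described earlier concrete: with an intrinsic circadian period averaging ~24.15 h, a clock never reset by light would drift later by roughly 9 min per day. The function below is a simple arithmetic sketch of that drift:

```python
def free_running_drift_min(days, intrinsic_period_h=24.15):
    """Minutes of delay accumulated by a free-running circadian clock
    relative to the 24-h day, absent resetting by light.
    The 24.15-h default is the average human period quoted in the text."""
    return (intrinsic_period_h - 24.0) * 60.0 * days

# ~9 min/day, so roughly an hour of delay per week without light resetting
one_week = free_running_drift_min(7)
```

This is why daily exposure to the light-dark cycle, acting through the retinal input to the SCN, is needed to hold the internal clock at exactly 24 h.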
APPROACH TO THE PATIENT: Sleep Disorders 
Patients may seek help from a physician because of: (1) sleepiness or tiredness during the day; (2) difficulty initiating or maintaining sleep at night (insomnia); or (3) unusual behaviors during sleep itself (parasomnias). Obtaining a careful history is essential. In particular, the duration, severity, and consistency of the symptoms are important, along with the patient’s estimate of the consequences of the sleep disorder on waking function. Information from a bed partner or family member is often helpful because some patients may be unaware of symptoms such as heavy snoring or may underreport symptoms such as falling asleep at work or while driving. Physicians should inquire about when the patient typically goes to bed, when they fall asleep and wake up, whether they awaken during sleep, whether they feel rested in the morning, and whether they nap during the day. Depending on the primary complaint, it may be useful to ask about snoring, witnessed apneas, restless sensations in the legs, movements during sleep, depression, anxiety, and behaviors around the sleep episode. The physical exam may provide evidence of a small airway, large tonsils, or a neurologic or medical disorder that contributes to the main complaint. It is important to remember that, rarely, seizures may occur exclusively during sleep, mimicking a primary sleep disorder; such sleep-related seizures typically occur during episodes of NREM sleep and may take the form of generalized tonic-clonic movements (sometimes with urinary incontinence or tongue biting) or stereotyped movements in partial complex epilepsy (Chap. 445). It is often helpful for the patient to complete a daily sleep log for 1–2 weeks to define the timing and amounts of sleep. When relevant, the log can also include information on levels of alertness, work times, and drug and alcohol use, including caffeine and hypnotics. 
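The daily sleep log recommended above lends itself to simple summary arithmetic. The sketch below uses invented nightly figures and an illustrative 8-h target (individual sleep needs vary, so the target is an assumption, not a clinical rule):

```python
def summarize_sleep_log(nightly_hours, target_h=8.0):
    """Summarize a daily sleep log: mean nightly sleep, cumulative
    shortfall versus the target, and the number of short nights.
    The 8-h target is purely illustrative."""
    mean_h = sum(nightly_hours) / len(nightly_hours)
    debt_h = sum(max(target_h - h, 0.0) for h in nightly_hours)
    short_nights = sum(1 for h in nightly_hours if h < target_h)
    return {"mean_h": round(mean_h, 2), "debt_h": debt_h,
            "short_nights": short_nights}

# Two invented weeks: short weeknights with longer weekend rebound sleep
# (rebound sleep on weekends is itself a clue to insufficient sleep)
log = [6.75, 6.5, 7.0, 6.75, 6.5, 9.0, 9.5,
       6.5, 7.0, 6.75, 6.5, 6.75, 9.25, 9.0]
summary = summarize_sleep_log(log)
```

A pattern like this one, with a large cumulative shortfall concentrated on weeknights, points toward insufficient sleep rather than a primary sleep disorder.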
Polysomnography is necessary for the diagnosis of several disorders such as sleep apnea, narcolepsy, and periodic limb movement disorder. A conventional polysomnogram performed in a clinical sleep laboratory allows measurement of sleep stages, respiratory effort and airflow, oxygen saturation, limb movements, heart rhythm, and additional parameters. A home sleep test usually focuses on just respiratory measures and is helpful in patients with a moderate to high likelihood of having obstructive sleep apnea. The multiple sleep latency test (MSLT) is used to measure a patient’s propensity to sleep during the day and can provide crucial evidence for diagnosing narcolepsy and some other causes of sleepiness. The maintenance of wakefulness test is used to measure a patient’s ability to sustain wakefulness during the daytime and can provide important evidence for evaluating the efficacy of therapies for improving sleepiness in conditions such as narcolepsy and obstructive sleep apnea. Up to 25% of the adult population has persistent daytime sleepiness that impairs the ability to perform optimally in school, at work, while driving, and in other conditions that require alertness. Sleepy students often have trouble staying alert and performing well in school, and sleepy adults struggle to stay awake and focused on their work. More than half of Americans have fallen asleep while driving. An estimated 1.2 million motor vehicle crashes per year are due to drowsy drivers, causing about 20% of all serious crash injuries and deaths. One needn’t fall asleep to have an accident, as the inattention and slowed responses of drowsy drivers are a major contributor. Reaction time is impaired as much by 24 h of sleep loss as by a blood alcohol concentration of 0.10 g/dL. Identifying and quantifying sleepiness can be challenging. First, patients may describe themselves as “sleepy,” “fatigued,” or “tired,” and the meanings of these words may differ between patients. 
For clinical purposes, it is best to use the term “sleepiness” to describe a propensity to fall asleep, whereas “fatigue” is best used to describe a feeling of low physical or mental energy but without a tendency to actually sleep. Sleepiness is usually most evident when the patient is sedentary, whereas fatigue may interfere with more active pursuits. Sleepiness generally occurs with disorders that reduce the quality or quantity of sleep or that interfere with the neural mechanisms of arousal, whereas fatigue is more common in inflammatory disorders such as cancer, multiple sclerosis (Chap. 458), fibromyalgia (Chap. 396), chronic fatigue syndrome (Chap. 464e), or endocrine deficiencies such as hypothyroidism (Chap. 405) or Addison’s disease (Chap. 406). Second, sleepiness can affect judgment in a manner analogous to ethanol, such that patients may have limited insight into the condition and the extent of their functional impairment. Finally, patients may be reluctant to admit that sleepiness is a problem because they may have become unfamiliar with feeling fully alert and because sleepiness is sometimes viewed pejoratively as reflecting poor motivation or bad sleep habits. Table 38-1 outlines the diagnostic and therapeutic approach to the patient with a complaint of excessive daytime sleepiness. To determine the extent and impact of sleepiness on daytime function, it is helpful to ask patients about the occurrence of sleep episodes during normal waking hours, both intentional and unintentional. Specific areas to be addressed include the occurrence of inadvertent sleep episodes while driving or in other safety-related settings, sleepiness while at work or school (and the relationship of sleepiness to work and school performance), and the effect of sleepiness on social and family life. Standardized questionnaires such as the Epworth Sleepiness Scale are often used clinically to measure sleepiness. 
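The Epworth Sleepiness Scale mentioned above is a short self-report instrument: eight everyday situations, each rated 0–3 for the chance of dozing, giving a total of 0–24. A minimal scoring sketch (the patient ratings are hypothetical, and the interpretation threshold is a commonly used convention, not taken from this text):

```python
def epworth_score(ratings):
    """Sum the Epworth Sleepiness Scale: eight everyday situations,
    each self-rated from 0 (would never doze) to 3 (high chance of
    dozing), giving a total of 0-24."""
    if len(ratings) != 8 or any(not 0 <= r <= 3 for r in ratings):
        raise ValueError("expected eight ratings, each between 0 and 3")
    return sum(ratings)

# Hypothetical patient ratings; totals above about 10 are commonly taken
# to suggest excessive daytime sleepiness (cutoffs vary between sources).
score = epworth_score([2, 3, 1, 2, 2, 1, 2, 1])  # -> 14
```

Like any self-report measure, the scale depends on the patient's insight, which, as noted above, sleepiness itself can impair.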
Eliciting a history of daytime sleepiness is usually adequate, but objective quantification is sometimes necessary. The MSLT measures a patient’s propensity to sleep under quiet conditions. The test is performed after an overnight polysomnogram to establish that the patient has had an adequate amount of good-quality nighttime sleep. The MSLT consists of five 20-min nap opportunities every 2 h across the day. The patient is instructed to try to fall asleep, and the major endpoints are the average latency to sleep and the occurrence of REM sleep during the naps. An average sleep latency across the naps of less than 8 min is considered objective evidence of excessive daytime sleepiness. REM sleep normally occurs only during the nighttime sleep episode, and the occurrence of REM sleep in two or more of the MSLT naps provides support for the diagnosis of narcolepsy. For the safety of the individual and the general public, physicians have a responsibility to help manage issues around driving in patients with sleepiness. Legal reporting requirements vary from state to state, but at a minimum, physicians should inform sleepy patients about their increased risk of having an accident and advise such patients not to drive a motor vehicle until the sleepiness has been treated effectively. This discussion is especially important for professional drivers, and it should be documented in the patient’s medical record. Insufficient sleep is probably the most common cause of excessive daytime sleepiness. 

[Table 38-1, clinical clues to the causes of sleepiness: difficulty waking in the morning, rebound sleep on weekends and vacations with improvement in sleepiness; obesity, snoring, hypertension; cataplexy, hypnagogic hallucinations, sleep paralysis; restless legs, kicking movements during sleep; sedating medications, stimulant withdrawal, head trauma, systemic inflammation, Parkinson’s disease and other neurodegenerative disorders, hypothyroidism, encephalopathy] 

The average adult needs 7.5–8 h of sleep, but on weeknights, the average U.S. 
adult gets only 6.75 h of sleep. Only 30% of the U.S. adult population reports consistently obtaining sufficient sleep. Insufficient sleep is especially common among shift workers, individuals working multiple jobs, and people in lower socioeconomic groups. Most teenagers need ≥9 h of sleep, but many fail to get enough sleep because of circadian phase delay or social pressures to stay up late coupled with early school start times. Late evening light exposure, television viewing, video-gaming, social media, texting, and smartphone use often delay bedtimes despite the fixed, early wake times required for work or school. As is typical with any disorder that causes sleepiness, individuals with chronically insufficient sleep may feel inattentive, irritable, unmotivated, and depressed, and have difficulty with school, work, and driving. Individuals differ in their optimal amount of sleep, and it can be helpful to ask how much sleep the patient obtains on a quiet vacation when he or she can sleep without restrictions. Some patients may think that a short amount of sleep is normal or advantageous, and they may not appreciate their biological need for more sleep, especially if coffee and other stimulants mask the sleepiness. A 2-week sleep log documenting the timing of sleep and daily level of alertness is diagnostically useful and provides helpful feedback for the patient. Extending sleep to the optimal amount on a regular basis can resolve the sleepiness and other symptoms. As with any lifestyle change, extending sleep requires commitment and adjustments, but the improvements in daytime alertness make this change worthwhile. Respiratory dysfunction during sleep is a common, serious cause of excessive daytime sleepiness as well as of disturbed nocturnal sleep. 
At least 24% of middle-aged men and 9% of middle-aged women in the United States have a reduction or cessation of breathing dozens or more times each night during sleep, with 9% of men and 4% of women doing so more than a hundred times per night. These episodes may be due to an occlusion of the airway (obstructive sleep apnea), absence of respiratory effort (central sleep apnea), or a combination of these factors (mixed sleep apnea). Failure to recognize and treat these conditions appropriately may lead to impairment of daytime alertness, increased risk of sleep-related motor vehicle crashes, depression, hypertension, myocardial infarction, diabetes, stroke, and increased mortality. Sleep apnea is particularly prevalent in overweight men and in the elderly, yet it is estimated to go undiagnosed in most affected individuals. This is unfortunate because several effective treatments are available. Readers are referred to Chap. 319 for a comprehensive review of the diagnosis and treatment of patients with sleep apnea. Narcolepsy is characterized by difficulty sustaining wakefulness, poor regulation of REM sleep, and disturbed nocturnal sleep. All patients with narcolepsy have excessive daytime sleepiness. This sleepiness is often severe, but in some, it is mild. Narcolepsy is one of the more common causes of chronic sleepiness and affects about 1 in 2000 people in the United States. Narcolepsy typically begins between age 10 and 20; once established, the disease persists for life. Narcolepsy is caused by loss of the hypothalamic neurons that produce the orexin neuropeptides (also known as hypocretins). Research in mice and dogs first demonstrated that a loss of orexin signaling due to null mutations of either the orexin neuropeptides or one of the orexin receptors causes sleepiness and cataplexy nearly identical to that seen in people with narcolepsy. 
Although genetic mutations rarely cause human narcolepsy, researchers soon discovered that patients with narcolepsy had very low or undetectable levels of orexins in their cerebrospinal fluid, and autopsy studies showed a nearly complete loss of the orexin-producing neurons in the hypothalamus. The orexins normally promote long episodes of wakefulness and suppress REM sleep, and thus, loss of orexin signaling results in frequent intrusions of sleep during the usual waking episode, with REM sleep and fragments of REM sleep at any time of day (Fig. 38-3). Extensive evidence suggests that an autoimmune process likely causes this selective loss of the orexin-producing neurons. Certain human leukocyte antigens (HLAs) can increase the risk of autoimmune disorders (Chap. 373e), and narcolepsy has the strongest known HLA association. HLA DQB1*06:02 is found in about 90% of people with narcolepsy, whereas it occurs in only 12–25% of the general population. Researchers now hypothesize that in people with DQB1*06:02, an immune response against influenza, Streptococcus, or other infections may also damage the orexin-producing neurons through a process of molecular mimicry. This mechanism may account for the 8- to 12-fold increase in new cases of narcolepsy among children in Europe who received a particular brand of H1N1 influenza A vaccine (Pandemrix). On rare occasions, narcolepsy can occur with neurologic disorders such as tumors or strokes that directly damage the orexin-producing neurons in the hypothalamus or their projections. Diagnosis Narcolepsy is most commonly diagnosed by the history of chronic sleepiness plus cataplexy or other symptoms. 
Many disorders can cause feelings of weakness, but with true cataplexy, patients will describe definite functional weakness (e.g., slurred speech, dropping a cup, slumping into a chair) that has consistent emotional triggers such as heartfelt mirth when laughing at a great joke, happy surprise at unexpectedly seeing a friend, or intense anger. Cataplexy occurs in about half of all narcolepsy patients and is diagnostically very helpful because it occurs in almost no other disorder. With severe cataplexy, an individual may be laughing at a joke and then suddenly collapse to the ground, immobile but awake for 1–2 min. With milder episodes, patients may have mild weakness of the face or neck. In contrast, occasional hypnagogic hallucinations and sleep paralysis occur in about 20% of the general population, and these symptoms are not as diagnostically specific. When narcolepsy is suspected, the diagnosis should be firmly established with a polysomnogram followed by an MSLT. The polysomnogram helps rule out other possible causes of sleepiness such as sleep apnea, and the MSLT provides essential, objective evidence of sleepiness plus REM sleep dysregulation. Across the five naps of the MSLT, most patients with narcolepsy will fall asleep in less than 8 min on average, and they will have episodes of REM sleep in at least two of the naps. 

FIGURE 38-3 Polysomnographic recordings of a healthy individual and a patient with narcolepsy. The individual with narcolepsy enters rapid eye movement (REM) sleep quickly at night and has moderately fragmented sleep. During the day, the healthy subject stays awake from 8:00 AM until midnight, but the patient with narcolepsy dozes off frequently, with many daytime naps that include REM sleep. 
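The MSLT thresholds just described (mean sleep latency under 8 min as objective evidence of sleepiness; REM sleep in at least two naps supporting narcolepsy) reduce to a simple check. A sketch with hypothetical nap results:

```python
def interpret_mslt(latencies_min, rem_in_nap):
    """Apply the MSLT thresholds described in the text: a mean sleep
    latency below 8 min across the naps is objective evidence of
    excessive sleepiness, and REM sleep in two or more naps supports
    a diagnosis of narcolepsy."""
    mean_latency = sum(latencies_min) / len(latencies_min)
    return {
        "mean_latency_min": mean_latency,
        "objective_sleepiness": mean_latency < 8.0,
        "rem_supports_narcolepsy": sum(rem_in_nap) >= 2,
    }

# Hypothetical five-nap MSLT in a patient with suspected narcolepsy
result = interpret_mslt([3.0, 5.5, 2.0, 6.0, 4.5],
                        [True, False, True, True, False])
```

The numeric criteria are only part of the evaluation; as noted above, a valid MSLT also requires adequate prior sleep and withdrawal of interfering medications.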
Abnormal regulation of REM sleep is also manifested by the appearance of REM sleep within 15 min of sleep onset at night, which is rare in healthy individuals sleeping at their habitual bedtime. Stimulants should be stopped 1 week before the MSLT and antidepressants should be stopped 3 weeks prior, because these medications can affect the MSLT. In addition, patients should be encouraged to obtain a fully adequate amount of sleep each night for the week prior to the test to eliminate any effects of insufficient sleep. The treatment of narcolepsy is symptomatic. Most patients with narcolepsy feel more alert after sleep, and they should be encouraged to get adequate sleep each night and to take a 15- to 20-min nap in the afternoon. This nap may be sufficient for some patients with mild narcolepsy, but most also require treatment with wake-promoting medications. Modafinil is used quite often because it has fewer side effects than amphetamines and a relatively long half-life; for most patients, 200–400 mg each morning is very effective. Methylphenidate (10–20 mg bid) or dextroamphetamine (10 mg bid) are often effective, but sympathomimetic side effects, anxiety, and the potential for abuse can be concerns. These medications are available in slow-release formulations, extending their duration of action and allowing easier dosing. Sodium oxybate (gamma-hydroxybutyrate) is given twice each night and is often very valuable in improving alertness, but it can produce excessive sedation, nausea, and confusion. Cataplexy is usually much improved with antidepressants that increase noradrenergic or serotonergic tone because these medications strongly suppress REM sleep and cataplexy. Venlafaxine (37.5–150 mg each morning) and fluoxetine (10–40 mg each morning) are often quite effective. 
The tricyclic antidepressants, such as protriptyline (10–40 mg/d) or clomipramine (25–50 mg/d), are potent suppressors of cataplexy, but their anticholinergic effects, including sedation and dry mouth, make them less attractive.1 Sodium oxybate, given at bedtime and 3–4 h later, is also very helpful in reducing cataplexy.

1No antidepressant has been approved by the U.S. Food and Drug Administration (FDA) for treating narcolepsy.

PART 2 Cardinal Manifestations and Presentation of Diseases

Insomnia is the complaint of poor sleep and usually presents as difficulty initiating or maintaining sleep. People with insomnia are dissatisfied with their sleep and feel that it impairs their ability to function well in work, school, and social situations. Affected individuals often experience fatigue, decreased mood, irritability, malaise, and cognitive impairment. Chronic insomnia, lasting more than 3 months, occurs in about 10% of adults and is more common in women, older adults, people of lower socioeconomic status, and individuals with medical, psychiatric, and substance abuse disorders. Acute or short-term insomnia affects over 30% of adults and is often precipitated by stressful life events such as a major illness or loss, change of occupation, medications, and substance abuse. If the acute insomnia triggers maladaptive behaviors such as increased nocturnal light exposure, frequently checking the clock, or attempting to sleep more by napping, it can lead to chronic insomnia. Most insomnia begins in adulthood, but many patients may be predisposed and report easily disturbed sleep predating the insomnia, suggesting that their sleep is lighter than usual. Clinical studies and animal models indicate that insomnia is associated with activation during sleep of brain areas normally active only during wakefulness.
The polysomnogram is rarely used in the evaluation of insomnia, as it typically confirms the patient’s subjective report of long latency to sleep and numerous awakenings but usually adds little new information. Many patients with insomnia have increased fast (beta) activity in the EEG during sleep; this fast activity is normally present only during wakefulness, which may explain why some patients report feeling awake for much of the night. The MSLT is rarely used in the evaluation of insomnia because, despite their feelings of low energy, most people with insomnia do not easily fall asleep during the day, and on the MSLT, their average sleep latencies are usually longer than normal. Many factors can contribute to insomnia, and obtaining a careful history is essential so one can select therapies targeting the underlying factors. The assessment should focus on identifying predisposing, precipitating, and perpetuating factors. Psychophysiologic Factors Many patients with insomnia have negative expectations and conditioned arousal that interfere with sleep. These individuals may worry about their insomnia during the day and have increasing anxiety as bedtime approaches if they anticipate a poor night of sleep. While attempting to sleep, they may frequently check the clock, which only heightens anxiety and frustration. They may find it easier to sleep in a new environment rather than their bedroom, as it lacks the negative associations. Inadequate Sleep Hygiene Patients with insomnia sometimes develop counterproductive behaviors that contribute to their insomnia. 
These can include daytime napping that reduces sleep drive at night; an irregular sleep-wake schedule that disrupts their circadian rhythms; use of wake-promoting substances (e.g., caffeine, tobacco) too close to bedtime; engaging in alerting or stressful activities close to bedtime (e.g., arguing with a partner, work-related emailing and texting while in bed, sleeping with a smartphone or tablet at the bedside); and routinely using the bedroom for activities other than sleep or sex (e.g., TV, work), so the bedroom becomes associated with arousing or stressful feelings. Psychiatric Conditions About 80% of patients with psychiatric disorders have sleep complaints, and about half of all chronic insomnia occurs in association with a psychiatric disorder. Depression is classically associated with early morning awakening, but it can also interfere with the onset and maintenance of sleep. Mania and hypomania can disrupt sleep and often are associated with substantial reductions in the total amount of sleep. Anxiety disorders can lead to racing thoughts and rumination that interfere with sleep and can be very problematic if the patient’s mind becomes active midway through the night. Panic attacks can occur during sleep and need to be distinguished from other parasomnias. Insomnia is common in schizophrenia and other psychoses, often resulting in fragmented sleep, less deep NREM sleep, and sometimes reversal of the day-night sleep pattern. Medications and Drugs of Abuse A wide variety of psychoactive drugs can interfere with sleep. Caffeine, which has a half-life of 6–9 h, can disrupt sleep for up to 8–14 h, depending on the dose, variations in metabolism, and an individual’s caffeine sensitivity. Insomnia can also result from use of prescription medications too close to bedtime (e.g., theophylline, stimulants, antidepressants, glucocorticoids). Conversely, withdrawal of sedating medications such as alcohol, narcotics, or benzodiazepines can cause insomnia. 
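With a half-life of 6–9 h, a substantial fraction of an afternoon caffeine dose is still circulating at bedtime, which is why the text says sleep can be disrupted for 8–14 h. A quick first-order decay calculation (a sketch, not a pharmacokinetic model; the 3 PM/11 PM times are hypothetical) illustrates the point.

```python
def fraction_remaining(hours_elapsed, half_life_h):
    """Fraction of a dose remaining after first-order elimination."""
    return 0.5 ** (hours_elapsed / half_life_h)

# Coffee at 3:00 PM, bedtime at 11:00 PM (8 h later),
# using the 6-9 h half-life range quoted in the text:
for t_half in (6, 9):
    print(f"t1/2 = {t_half} h: "
          f"{fraction_remaining(8, t_half):.0%} of the dose remains at bedtime")
```

Roughly 40–55% of the dose remains at bedtime across that half-life range, so individual variation in metabolism and sensitivity readily explains why some people sleep through an evening coffee while others do not.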
Alcohol taken just before bed can shorten sleep latency, but it often produces rebound insomnia 2–3 h later as it wears off. This same problem with sleep maintenance can occur with short-acting benzodiazepines such as alprazolam. Medical Conditions A large number of medical conditions disrupt sleep. Pain from rheumatologic disorders or a painful neuropathy commonly disrupts sleep. Some patients may sleep poorly because of respiratory conditions such as asthma, chronic obstructive pulmonary disease, cystic fibrosis, congestive heart failure, or restrictive lung disease, and some of these disorders are worse at night in bed due to circadian variations in airway resistance and postural changes that can result in paroxysmal nocturnal dyspnea. Many women experience poor sleep with the hormonal changes of menopause. Gastroesophageal reflux is also a common cause of difficulty sleeping. Neurologic Disorders Dementia (Chap. 35) is often associated with poor sleep, probably due to a variety of factors, including napping during the day, altered circadian rhythms, and perhaps a weakened output of the brain’s sleep-promoting mechanisms. In fact, insomnia and nighttime wandering are some of the most common causes for institutionalization of patients with dementia, because they place a larger burden on caregivers. Conversely, in cognitively intact elderly men, fragmented sleep and poor sleep quality are associated with subsequent cognitive decline. Patients with Parkinson’s disease may sleep poorly due to rigidity, dementia, and other factors. Fatal familial insomnia is a very rare neurodegenerative condition caused by mutations in the prion protein gene, and although insomnia is a common early symptom, most patients present with other obvious neurologic signs such as dementia, myoclonus, dysarthria, or autonomic dysfunction. Treatment of insomnia improves quality of life and can promote long-term health.
With improved sleep, patients often report less daytime fatigue, improved cognition, and more energy. Treating the insomnia can also improve the comorbid disease. For example, management of insomnia at the time of diagnosis of major depression often improves the response to antidepressants and reduces the risk of relapse. Sleep loss can heighten the perception of pain, so a similar approach is warranted in acute and chronic pain management. The treatment plan should target all putative contributing factors: establish good sleep hygiene, treat medical disorders, use behavioral therapies for anxiety and negative conditioning, and use pharmacotherapy and/or psychotherapy for psychiatric disorders. Behavioral therapies should be the first-line treatment, followed by judicious use of sleep-promoting medications if needed. If the history suggests that a medical or psychiatric disease contributes to the insomnia, then it should be addressed by, for example, treating the pain, improving breathing, and switching or adjusting the timing of medications. Attention should be paid to improving sleep hygiene and avoiding counterproductive, arousing behaviors before bedtime. Patients should establish a regular bedtime and wake time, even on weekends, to help synchronize their circadian rhythms and sleep patterns. The amount of time allocated for sleep should not be more than their actual total amount of sleep. In the 30 min before bedtime, patients should establish a relaxing “wind-down” routine that can include a warm bath, listening to music, meditation, or other relaxation techniques. The bedroom should be off-limits to computers, televisions, radios, smartphones, videogames, and tablets. Once in bed, patients should try to avoid thinking about anything stressful or arousing such as problems with relationships or work. 
If they cannot fall asleep within 20 min, it often helps to get out of bed and read or listen to relaxing music in dim light as a form of distraction from any anxiety, but artificial light, including light from a television, cell phone, or computer, should be avoided, because light itself suppresses melatonin secretion and is arousing. Table 38-2 outlines some of the key aspects of good sleep hygiene to improve insomnia. Cognitive-behavioral therapy (CBT) uses a combination of the techniques above plus additional methods to improve insomnia. A trained therapist may use cognitive psychology techniques to reduce excessive worrying about sleep and to reframe faulty beliefs about the insomnia and its daytime consequences. The therapist may also teach the patient relaxation techniques, such as progressive muscle relaxation or meditation, to reduce autonomic arousal, intrusive thoughts, and anxiety.

TABLE 38-2 Sleep Hygiene
Helpful Behaviors
• Use the bed only for sleep and sex
• If you cannot sleep within 20 min, get out of bed and read or do other relaxing activities in dim light before returning to bed
• Make quality sleep a priority (comfortable bed, quiet bedroom)
• Go to bed and get up at the same time every day
• Develop a consistent bedtime routine; for example, prepare for sleep with 20–30 min of relaxation (e.g., soft music, meditation, yoga, pleasant reading)
Behaviors to Avoid
• Behaviors that interfere with sleep physiology, including napping (especially after 3:00 PM) and attempting to sleep too early
• In the 2–3 h before bedtime: heavy eating
• When trying to fall asleep: thinking about life issues or reviewing events of the day

If insomnia persists after treatment of these contributing factors, pharmacotherapy is often used on a nightly or intermittent basis. A variety of sedatives can improve sleep. Antihistamines, such as diphenhydramine, are the primary active ingredient in most over-the-counter sleep aids.
These may be of benefit when used intermittently, but often produce rapid tolerance and can produce anticholinergic side effects such as dry mouth and constipation, which limit their use, particularly in the elderly. Benzodiazepine receptor agonists (BzRAs) are an effective and well-tolerated class of medications for insomnia. BzRAs bind to the GABAA receptor and potentiate the postsynaptic response to GABA. GABAA receptors are found throughout the brain, and BzRAs may globally reduce neural activity and may enhance the activity of specific sleep-promoting GABAergic pathways. Classic BzRAs include lorazepam, triazolam, and clonazepam, whereas newer agents such as zolpidem and zaleplon have more selective affinity for the α1 subunit of the GABAA receptor. Specific BzRAs are often chosen based on the desired duration of action. The most commonly prescribed agents in this family are zaleplon (5–20 mg), with a half-life of 1–2 h; zolpidem (5–10 mg) and triazolam (0.125–0.25 mg), with half-lives of 2–4 h; eszopiclone (1–3 mg), with a half-life of 5–8 h; and temazepam (15–30 mg), with a half-life of 8–20 h. Generally, side effects are minimal when the dose is kept low and the serum concentration is minimized during the waking hours (by using the shortest-acting effective agent). For chronic insomnia, intermittent use is recommended, unless the consequences of untreated insomnia outweigh concerns regarding chronic use. The heterocyclic antidepressants (trazodone, amitriptyline,2 and doxepin) are the most commonly prescribed alternatives to BzRAs due to their lack of abuse potential and lower cost. Trazodone (25–100 mg) is used more commonly than the tricyclic antidepressants, because it has a much shorter half-life (5–9 h) and less anticholinergic activity. Medications for insomnia are now among the most commonly prescribed medications, but they should be used cautiously. 
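The rationale for choosing the shortest-acting effective agent can be made concrete with the same first-order decay logic: the fraction of drug still circulating at wake time falls sharply as the half-life shortens. The half-life values below are midpoints of the ranges quoted above, and the calculation is an illustrative sketch only, not dosing guidance.

```python
# Approximate half-lives (h): midpoints of the ranges given in the text
HALF_LIVES_H = {"zaleplon": 1.5, "zolpidem": 3.0,
                "eszopiclone": 6.5, "temazepam": 14.0}

def residual_fraction(half_life_h, hours_asleep=8.0):
    """Fraction of peak drug level remaining at wake time,
    assuming simple first-order elimination."""
    return 0.5 ** (hours_asleep / half_life_h)

for drug, t_half in HALF_LIVES_H.items():
    print(f"{drug:12s} ~{residual_fraction(t_half):.0%} remaining after 8 h")
```

After an 8-h night, only a few percent of a zaleplon dose remains, whereas well over half of a temazepam dose does, which is why longer-acting agents carry more risk of morning sedation.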
All sedatives increase the risk of injurious falls and confusion in the elderly, and therefore, if needed, these medications should be used at the lowest effective dose. Morning sedation can interfere with driving and judgment, and when selecting a medication, one should consider the duration of action. Benzodiazepines carry a risk of addiction and abuse, especially in patients with a history of alcohol or sedative abuse. Like alcohol, some sleep-promoting medications can worsen sleep apnea. Sedatives can also produce complex behaviors during sleep, such as sleepwalking and sleep eating, although this seems more likely at higher doses.

2Trazodone and amitriptyline have not been approved by the FDA for treating insomnia.

Patients with restless legs syndrome (RLS) report an irresistible urge to move the legs. Many patients report a creepy-crawly or unpleasant deep ache within the thighs or calves, and those with more severe RLS may have discomfort in the arms as well. For most patients with RLS, these dysesthesias and restlessness are much worse in the evening and first half of the night. The symptoms appear with inactivity and can make sitting still in an airplane or when watching a movie a miserable experience. The sensations are temporarily relieved by movement, stretching, or massage. This nocturnal discomfort usually interferes with sleep, and patients may report daytime sleepiness as a consequence. RLS is very common, affecting 5–10% of adults, and is more common in women and older adults. A variety of factors can cause RLS. Iron deficiency is the most common treatable cause, and iron replacement should be considered if the ferritin level is less than 50 ng/mL. RLS can also occur with peripheral neuropathies and uremia and can be worsened by pregnancy, caffeine, alcohol, antidepressants, lithium, neuroleptics, and antihistamines.
Genetic factors contribute to RLS, and polymorphisms in a variety of genes (BTBD9, MEIS1, MAP2K5/LBXCOR, and PTPRD) have been linked to RLS, although as yet, the mechanism through which they cause RLS remains unknown. Roughly one-third of patients (particularly those with an early age of onset) have multiple affected family members. RLS is treated by addressing the underlying cause such as iron deficiency if present. Otherwise, treatment is symptomatic, and dopamine agonists are used most frequently. Agonists of dopamine D2/3 receptors such as pramipexole (0.25–0.5 mg q7PM) or ropinirole (0.5–4 mg q7PM) are considered first-line agents. Augmentation is a worsening of RLS such that symptoms begin earlier in the day and can spread to other body regions, and it can occur in about 25% of patients taking dopamine agonists. Other possible side effects of dopamine agonists include nausea, morning sedation, and increases in rewarding behavior such as gambling and sex. Opioids, benzodiazepines, pregabalin, and gabapentin may also be of therapeutic value. Most patients with restless legs also experience periodic limb movement disorder, although the reverse is not the case. Periodic limb movement disorder (PLMD) involves rhythmic twitches of the legs that disrupt sleep. The movements resemble a triple flexion reflex with extensions of the great toe and dorsiflexion of the foot for 0.5 to 5.0 s, which recur every 20–40 s during NREM sleep, in episodes lasting from minutes to hours. PLMD is diagnosed by a polysomnogram that includes recordings of the anterior tibialis and sometimes other muscles. The EEG shows that the movements of PLMD frequently cause brief arousals that disrupt sleep and can cause insomnia and daytime sleepiness. PLMD can be caused by the same factors that cause RLS (see above), and the frequency of leg movements improves with the same medications as used for RLS, including dopamine agonists.
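The periodicity described above (movements recurring every 20–40 s) lends itself to a simple grouping sketch: given the onset times of scored leg movements, collect runs whose inter-movement intervals fall within that window. The minimum run length of four used here is an assumption for illustration; formal polysomnographic scoring rules differ in their exact interval and run criteria.

```python
def periodic_movement_runs(onsets_s, lo=20, hi=40, min_run=4):
    """Group leg-movement onset times (seconds) into periodic runs.

    A run is >= min_run consecutive movements whose successive
    intervals all fall within [lo, hi] seconds.
    """
    runs = []
    current = [onsets_s[0]] if onsets_s else []
    for prev, nxt in zip(onsets_s, onsets_s[1:]):
        if lo <= nxt - prev <= hi:
            current.append(nxt)
        else:
            if len(current) >= min_run:
                runs.append(current)
            current = [nxt]
    if len(current) >= min_run:
        runs.append(current)
    return runs

# Movements every ~30 s, then a long gap that breaks the run
onsets = [0, 30, 61, 90, 121, 400, 430]
print([len(r) for r in periodic_movement_runs(onsets)])  # [5]
```

The two movements after the long gap do not form a run of four, so they are excluded, mirroring how isolated twitches are not counted as periodic limb movements.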
Recent genetic studies identified polymorphisms associated with RLS/PLMD, suggesting that they may have a common pathophysiology. Parasomnias are abnormal behaviors or experiences that arise from or occur during sleep. A variety of parasomnias can occur during NREM sleep, from brief confusional arousals to sleepwalking and night terrors. The presenting complaint is usually related to the behavior itself, but the parasomnias can disturb sleep continuity or lead to mild impairments in daytime alertness. Two main parasomnias occur in REM sleep: REM sleep behavior disorder (RBD) and nightmares. Sleepwalking (Somnambulism) Patients affected by this disorder carry out automatic motor activities that range from simple to complex. Individuals may walk, urinate inappropriately, eat, exit the house, or drive a car with minimal awareness. Full arousal may be difficult, and occasional individuals may respond to attempted awakening with agitation or violence. Sleepwalking arises from NREM stage N3 sleep, usually in the first few hours of the night, and the EEG usually shows the slow cortical activity of deep NREM sleep even when the patient is moving about. Sleepwalking is most common in children and adolescents, when these sleep stages are most robust. About 15% of children have occasional sleepwalking, and it persists in about 1% of adults. Episodes are usually isolated but may be recurrent in 1–6% of patients. The cause is unknown, although it has a familial basis in roughly one-third of cases. Sleepwalking can be worsened by insufficient sleep, which subsequently causes an increase in deep NREM sleep; alcohol; and stress. These should be addressed if present. Small studies have shown some efficacy of antidepressants and benzodiazepines; relaxation techniques and hypnosis can also be helpful. Patients and their families should improve home safety (e.g., replace glass doors, remove low tables to avoid tripping) to minimize the chance of injury if sleepwalking occurs. 
Sleep Terrors This disorder occurs primarily in young children during the first few hours of sleep during NREM stage N3 sleep. The child often sits up during sleep and screams, exhibiting autonomic arousal with sweating, tachycardia, large pupils, and hyperventilation. The individual may be difficult to arouse and rarely recalls the episode on awakening in the morning. Treatment usually consists of reassuring the parents that the condition is self-limited and benign, and like sleepwalking, it may improve by avoiding insufficient sleep. Sleep Bruxism Bruxism is an involuntary, forceful grinding of teeth during sleep that affects 10–20% of the population. The patient is usually unaware of the problem. The typical age of onset is 17–20 years, and spontaneous remission usually occurs by age 40. Sex distribution appears to be equal. In many cases, the diagnosis is made during dental examination, damage is minor, and no treatment is indicated. In more severe cases, treatment with a tooth guard is necessary to prevent tooth injury. Stress management or, in some cases, biofeedback can be useful when bruxism is a manifestation of psychological stress. There are anecdotal reports of benefit with benzodiazepines. Sleep Enuresis Bedwetting, like sleepwalking and night terrors, is another parasomnia that occurs during sleep in the young. Before age 5 or 6 years, nocturnal enuresis should be considered a normal feature of development. The condition usually improves spontaneously by puberty, has a prevalence in late adolescence of 1–3%, and is rare in adulthood. Treatment consists of bladder training exercises and behavioral therapy. Symptomatic pharmacotherapy is usually accomplished in adults with desmopressin (0.2 mg qhs), oxybutynin chloride (5 mg qhs), or imipramine (10–25 mg qhs). 
Important causes of nocturnal enuresis in patients who were previously continent for 6–12 months include urinary tract infections or malformations, cauda equina lesions, emotional disturbances, epilepsy, sleep apnea, and certain medications. REM Sleep Behavior Disorder (RBD) RBD (Video 38-2) is distinct from other parasomnias in that it occurs during REM sleep. The patient or the bed partner usually reports agitated or violent behavior during sleep, and upon awakening, the patient can often report a dream that accompanied the movements. During normal REM sleep, nearly all skeletal muscles are paralyzed, but in patients with RBD, the polysomnogram often shows limb movements during REM sleep, lasting for seconds to minutes. The movements can be dramatic, and it is not uncommon for the patient or the bed partner to be injured. RBD primarily afflicts older men, and most either have or will develop a neurodegenerative disorder. In longitudinal studies of RBD, half of the patients developed a synucleinopathy such as Parkinson’s disease (Chap. 449) or dementia with Lewy bodies (Chap. 448), or occasionally multiple system atrophy (Chap. 454), within 12 years, and over 80% developed a synucleinopathy by 20 years. RBD can occur in patients taking antidepressants, and in some, these medications may unmask this early indicator of neurodegeneration. Synucleinopathies probably cause neuronal loss in brainstem regions that regulate muscle atonia during REM sleep, and loss of these neurons permits movements to break through during REM sleep. RBD also occurs in about 30% of patients with narcolepsy, but the underlying cause is probably different, as they seem to be at no increased risk of a neurodegenerative disorder. Many patients with RBD have sustained improvement with clonazepam (0.5–2.0 mg qhs).3 Melatonin at doses up to 9 mg nightly may also prevent attacks. 
A subset of patients presenting with either insomnia or hypersomnia may have a disorder of sleep timing rather than sleep generation. Disorders of sleep timing can be either organic (i.e., due to an abnormality of circadian pacemaker[s]) or environmental/behavioral (i.e., due to a disruption of environmental synchronizers). Effective therapies aim to entrain the circadian rhythm of sleep propensity to an appropriate phase. Delayed Sleep-Wake Phase Disorder Delayed sleep-wake phase disorder (DSWPD) is characterized by: (1) reported sleep onset and wake times intractably later than desired; (2) actual sleep times at nearly the same clock hours daily; and (3) if conducted at the habitual delayed sleep time, essentially normal sleep on polysomnography (except for delayed sleep onset). Patients with DSWPD exhibit an abnormally delayed endogenous circadian phase, which can be assessed by measuring, in a dimly lit environment, the onset of secretion of the endogenous circadian rhythm of pineal melatonin in either the blood or saliva, as light suppresses melatonin secretion. Dim-light melatonin onset (DLMO) in DSWPD patients occurs later in the evening than the normal onset time of about 8:00–9:00 PM (i.e., about 1–2 h before habitual bedtime). Patients tend to be young adults. The delayed circadian phase could be due to: (1) an abnormally long, genetically determined intrinsic period of the endogenous circadian pacemaker; (2) reduced phase-advancing capacity of the pacemaker; (3) slower rate of buildup of homeostatic sleep drive during wakefulness; or (4) an irregular prior sleep-wake schedule, characterized by frequent nights when the patient chooses to remain awake while exposed to artificial light well past midnight (for personal, social, school, or work reasons).
In most cases, it is difficult to distinguish among these factors, as patients with either a behaviorally induced or biologically driven circadian phase delay may both exhibit a similar circadian phase delay in DLMO, making it difficult for both to fall asleep at the desired hour. DSWPD is a self-perpetuating condition that can persist for years and may not respond to attempts to reestablish normal bedtime hours. Treatment methods involving phototherapy with blue-enriched light during the morning hours and/or melatonin administration in the evening hours show promise in these patients, although the relapse rate is high. Patients with this circadian rhythm sleep disorder can be distinguished from those who have sleep-onset insomnia because DSWPD patients show late onset of dim-light melatonin secretion. Advanced Sleep-Wake Phase Disorder Advanced sleep-wake phase disorder (ASWPD) is the converse of DSWPD. Most commonly, this syndrome occurs in older people, 15% of whom report that they cannot sleep past 5:00 AM, with twice that number complaining that they wake up too early at least several times per week. Patients with ASWPD are sleepy during the evening hours, even in social settings. Sleep-wake timing in ASWPD patients can interfere with a normal social life. Patients with this circadian rhythm sleep disorder can be distinguished from those who have early wakening due to insomnia because ASWPD patients show early onset of dim-light melatonin secretion. In addition to age-related ASWPD, an early-onset familial variant of this condition has also been reported. In two families in which ASWPD was inherited in an autosomal dominant pattern, the syndrome was due to missense mutations in a circadian clock component (in the casein kinase binding domain of PER2 in one family, and in casein kinase I delta in the other) that altered the circadian period.

3No medications have been approved by the FDA for the treatment of RBD.
Patients with ASWPD may benefit from bright-light and/or blue-enriched phototherapy during the evening hours to reset the circadian pacemaker to a later hour. Non-24-h Sleep-Wake Rhythm Disorder Non-24-h sleep-wake rhythm disorder (N24SWRD) can occur when the primary synchronizing input (i.e., the light-dark cycle) from the environment to the circadian pacemaker is compromised (as occurs in many blind people with no light perception) or when the maximal phase-advancing capacity of the circadian pacemaker cannot accommodate the difference between the 24-h geophysical day and the intrinsic period of the patient’s circadian pacemaker, resulting in loss of entrainment to the 24-h day. Rarely, self-selected exposure to artificial light may, in some sighted patients, inadvertently entrain the circadian pacemaker to a >24-h schedule. Affected patients with N24SWRD have difficulty maintaining a stable phase relationship between the output of the pacemaker and the 24-h day. Such patients typically present with an incremental pattern of successive delays in sleep propensity, progressing in and out of phase with local time. When the N24SWRD patient’s endogenous circadian rhythms are out of phase with the local environment, nighttime insomnia coexists with excessive daytime sleepiness. Conversely, when the endogenous circadian rhythms are in phase with the local environment, symptoms remit. The interval between symptomatic phases may last several weeks to several months in N24SWRD, depending on the difference between the period of the underlying nonentrained rhythm and the 24-h day. Nightly low-dose (0.5 mg) melatonin administration may improve sleep and, in some cases, induce synchronization of the circadian pacemaker. Shift-Work Disorder More than 7 million workers in the United States regularly work at night, either on a permanent or rotating schedule.
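The interval between symptomatic phases in N24SWRD follows from simple arithmetic: a pacemaker with intrinsic period τ hours drifts (τ − 24) h per day relative to local time, so it takes 24/|τ − 24| days to cycle fully around the clock. The 24.5-h period in the example is hypothetical, chosen only to illustrate the calculation.

```python
def days_per_cycle(intrinsic_period_h):
    """Days for a nonentrained circadian rhythm to drift one full
    cycle (24 h) relative to local time."""
    drift_per_day_h = abs(intrinsic_period_h - 24.0)
    return 24.0 / drift_per_day_h

# Hypothetical patient whose pacemaker runs with a 24.5-h period:
# drifts 0.5 h/day, so one full cycle takes 48 days (about 7 weeks)
print(days_per_cycle(24.5))  # 48.0
```

A longer intrinsic period means faster drift and more frequent symptomatic phases, consistent with the text's statement that the interval depends on the underlying nonentrained period.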
Many more begin the commute to work or school between 4:00 am and 7:00 am, requiring them to commute and then work during the time of day that they would otherwise be asleep. In addition, each week, millions of “day” workers and students elect to remain awake at night or awaken very early in the morning to work or study to meet work or school deadlines, drive long distances, compete in sporting events, or participate in recreational activities. Such schedules can result in both sleep loss and misalignment of circadian rhythms with respect to the sleep-wake cycle. The circadian timing system usually fails to adapt successfully to the inverted schedules required by overnight work or the phase advance required by early morning (4:00 am to 7:00 am) start times. This leads to a misalignment between the desired work-rest schedule and the output of the pacemaker and to disturbed daytime sleep in most individuals. Excessive work hours (per day or per week), insufficient time off between consecutive days of work or school, and transmeridian travel may be contributing factors. Sleep deficiency, increased length of time awake prior to work, and misalignment of circadian phase produce decreased alertness and performance, increased reaction time, and increased risk of performance lapses, thereby resulting in greater safety hazards among night workers and other sleep-deprived individuals. Sleep disturbance nearly doubles the risk of a fatal work accident. Long-term night shift workers have higher rates of breast, colorectal, and prostate cancer and of cardiac, gastrointestinal, and reproductive disorders. The World Health Organization has added night-shift work to its list of probable carcinogens. Sleep onset begins in local brain regions before gradually sweeping over the entire brain as sensory thresholds rise and consciousness is lost. 
A sleepy individual struggling to remain awake may attempt to continue performing routine and familiar motor tasks during the transition state between wakefulness and stage N1 sleep, while unable to adequately process sensory input from the environment. Motor vehicle operators who fail to heed the warning signs of sleepiness are especially vulnerable to sleep-related accidents, as sleep processes can intrude involuntarily upon the waking brain, causing catastrophic consequences. Such sleep-related attentional failures typically last only seconds but are known on occasion to persist for longer durations. There is a significant increase in the risk of sleep-related, fatal-to-the-driver highway crashes in the early morning and late afternoon hours, coincident with bimodal peaks in the daily rhythm of sleep tendency. Resident physicians constitute another group of workers at greater risk for accidents and other adverse consequences of lack of sleep and misalignment of the circadian rhythm. Recurrent scheduling of resident physicians to work shifts of ≥24 consecutive hours impairs psychomotor performance to a degree that is comparable to alcohol intoxication, doubles the risk of attentional failures among intensive care unit resident physicians working at night, and significantly increases the risk of serious medical errors in intensive care units, including a fivefold increase in the risk of serious diagnostic mistakes. Some 20% of hospital resident physicians report making a fatigue-related mistake that injured a patient, and 5% admit making a fatigue-related mistake that resulted in the death of a patient. Moreover, working for >24 consecutive hours increases the risk of percutaneous injuries and more than doubles the risk of motor vehicle crashes on the commute home.
For these reasons, in 2008, the Institute of Medicine concluded that the practice of scheduling resident physicians to work for more than 16 consecutive hours without sleep is hazardous for both resident physicians and their patients. From 5 to 15% of individuals scheduled to work at night or in the early morning hours have much greater-than-average difficulties remaining awake during night work and sleeping during the day; these individuals are diagnosed with chronic and severe shift-work disorder (SWD). Patients with this disorder have a level of excessive sleepiness during work at night or in the early morning and insomnia during day sleep that the physician judges to be clinically significant; the condition is associated with an increased risk of sleep-related accidents and with some of the illnesses associated with night-shift work. Patients with chronic and severe SWD are profoundly sleepy at work. In fact, their sleep latencies during night work average just 2 min, comparable to mean daytime sleep latency durations of patients with narcolepsy or severe sleep apnea. Caffeine is frequently used by night workers to promote wakefulness. However, it cannot forestall sleep indefinitely, and it does not shield users from sleep-related performance lapses. Postural changes, exercise, and strategic placement of nap opportunities can sometimes temporarily reduce the risk of fatigue-related performance lapses. Properly timed exposure to blue-enriched light or bright white light can directly enhance alertness and facilitate more rapid adaptation to night-shift work. Modafinil (200 mg) or armodafinil (150 mg) 30–60 min before the start of each night shift is an effective treatment for the excessive sleepiness during night work in patients with SWD. Although treatment with modafinil or armodafinil significantly improves performance and reduces sleep propensity and the risk of lapses of attention during night work, affected patients remain excessively sleepy. 
Fatigue risk management programs for night shift workers should promote education about sleep, increase awareness of the hazards associated with sleep deficiency and night work, and screen for common sleep disorders. Work schedules should be designed to minimize: (1) exposure to night work; (2) the frequency of shift rotations; (3) the number of consecutive night shifts; and (4) the duration of night shifts.

PART 2 Cardinal Manifestations and Presentation of Diseases

Jet Lag Disorder Each year, more than 60 million people fly from one time zone to another, often resulting in excessive daytime sleepiness, sleep-onset insomnia, and frequent arousals from sleep, particularly in the latter half of the night. The syndrome is transient, typically lasting 2–14 d depending on the number of time zones crossed, the direction of travel, and the traveler’s age and phase-shifting capacity. Travelers who spend more time outdoors at their destination reportedly adapt more quickly than those who remain in hotel rooms, presumably due to brighter (outdoor) light exposure. Avoidance of antecedent sleep loss and obtaining naps on the afternoon prior to overnight travel can reduce the difficulties associated with extended wakefulness. Laboratory studies suggest that low doses of melatonin can enhance sleep efficiency, but only if taken when endogenous melatonin concentrations are low (i.e., during the biologic daytime).

In addition to jet lag associated with travel across time zones, many patients report a behavioral pattern that has been termed social jet lag, in which bedtimes and wake times on weekends or days off occur 4–8 h later than during the week. Such recurrent displacement of the timing of the sleep-wake cycle is common in adolescents and young adults and is associated with sleep-onset insomnia, poorer academic performance, increased risk of depressive symptoms, and excessive daytime sleepiness.
Prominent circadian variations have been reported in the incidence of acute myocardial infarction, sudden cardiac death, and stroke, the leading causes of death in the United States. Platelet aggregability is increased in the early morning hours, coincident with the peak incidence of these cardiovascular events. Recurrent circadian disruption combined with chronic sleep deficiency, such as occurs during night-shift work, is associated with increased plasma glucose concentrations after a meal due to inadequate pancreatic insulin secretion. Night shift workers with elevated fasting glucose have an increased risk of progressing to diabetes. Blood pressure of night workers with sleep apnea is higher than that of day workers. A better understanding of the possible role of circadian rhythmicity in the acute destabilization of a chronic condition such as atherosclerotic disease could improve the understanding of its pathophysiology.

Diagnostic and therapeutic procedures may also be affected by the time of day at which data are collected. Examples include blood pressure, body temperature, the dexamethasone suppression test, and plasma cortisol levels. The timing of chemotherapy administration has been reported to have an effect on the outcome of treatment. In addition, both the toxicity and effectiveness of drugs can vary with time of day. For example, more than a fivefold difference has been observed in mortality rates following administration of toxic agents to experimental animals at different times of day. Anesthetic agents are particularly sensitive to time-of-day effects. Finally, the physician must be aware of the public health risks associated with the ever-increasing demands made by the 24/7 schedules in our round-the-clock society.

John W. Winkelman, MD, PhD and Gary S. Richardson, MD contributed to this chapter in the prior edition and some material from that chapter has been retained here.

VIDEO 38-1 A typical episode of severe cataplexy.
The patient is joking and then falls to the ground with an abrupt loss of muscle tone. The electromyogram recordings (four lower traces on the right) show reductions in muscle activity during the period of paralysis. The electroencephalogram (top two traces) shows wakefulness throughout the episode. (Video courtesy of Giuseppe Plazzi, University of Bologna.)

VIDEO 38-2 Typical aggressive movements in rapid eye movement (REM) sleep behavior disorder. (Video courtesy of Dr. Carlos Schenck, University of Minnesota Medical School.)

Chapter 39 Disorders of the Eye
Jonathan C. Horton

THE HUMAN VISUAL SYSTEM

The visual system provides a supremely efficient means for the rapid assimilation of information from the environment to aid in the guidance of behavior. The act of seeing begins with the capture of images focused by the cornea and lens on a light-sensitive membrane in the back of the eye called the retina. The retina is actually part of the brain, banished to the periphery to serve as a transducer for the conversion of patterns of light energy into neuronal signals. Light is absorbed by pigment in two types of photoreceptors: rods and cones. In the human retina there are 100 million rods and 5 million cones. The rods operate in dim (scotopic) illumination. The cones function under daylight (photopic) conditions. The cone system is specialized for color perception and high spatial resolution. The majority of cones are within the macula, the portion of the retina that serves the central 10° of vision. In the middle of the macula a small pit termed the fovea, packed exclusively with cones, provides the best visual acuity.

Photoreceptors hyperpolarize in response to light, activating bipolar, amacrine, and horizontal cells in the inner nuclear layer. After processing of photoreceptor responses by this complex retinal circuit, the flow of sensory information ultimately converges on a final common pathway: the ganglion cells.
These cells translate the visual image impinging on the retina into a continuously varying barrage of action potentials that propagates along the primary optic pathway to visual centers within the brain. There are a million ganglion cells in each retina and hence a million fibers in each optic nerve. Ganglion cell axons sweep along the inner surface of the retina in the nerve fiber layer, exit the eye at the optic disc, and travel through the optic nerve, optic chiasm, and optic tract to reach targets in the brain. The majority of fibers synapse on cells in the lateral geniculate body, a thalamic relay station. Cells in the lateral geniculate body project in turn to the primary visual cortex. This afferent retinogeniculocortical sensory pathway provides the neural substrate for visual perception. Although the lateral geniculate body is the main target of the retina, separate classes of ganglion cells project to other subcortical visual nuclei involved in different functions. Ganglion cells that mediate pupillary constriction and circadian rhythms are light sensitive owing to a novel visual pigment, melanopsin. Pupil responses are mediated by input to the pretectal olivary nuclei in the midbrain. The pretectal nuclei send their output to the Edinger-Westphal nuclei, which in turn provide parasympathetic innervation to the iris sphincter via an interneuron in the ciliary ganglion. Circadian rhythms are timed by a retinal projection to the suprachiasmatic nucleus. Visual orientation and eye movements are served by retinal input to the superior colliculus. Gaze stabilization and optokinetic reflexes are governed by a group of small retinal targets known collectively as the brainstem accessory optic system.

The eyes must be rotated constantly within their orbits to place and maintain targets of visual interest on the fovea. This activity, called foveation, or looking, is governed by an elaborate efferent motor system.
Each eye is moved by six extraocular muscles that are supplied by cranial nerves from the oculomotor (III), trochlear (IV), and abducens (VI) nuclei. Activity in these ocular motor nuclei is coordinated by pontine and midbrain mechanisms for smooth pursuit, saccades, and gaze stabilization during head and body movements. Large regions of the frontal and parietooccipital cortex control these brainstem eye movement centers by providing descending supranuclear input.

SECTION 4 Disorders of Eyes, Ears, Nose, and Throat

In approaching a patient with reduced vision, the first step is to decide whether refractive error is responsible. In emmetropia, parallel rays from infinity are focused perfectly on the retina. Sadly, this condition is enjoyed by only a minority of the population. In myopia, the globe is too long, and light rays come to a focal point in front of the retina. Near objects can be seen clearly, but distant objects require a diverging lens in front of the eye. In hyperopia, the globe is too short, and hence a converging lens is used to supplement the refractive power of the eye. In astigmatism, the corneal surface is not perfectly spherical, necessitating a cylindrical corrective lens. As an alternative to eyeglasses or contact lenses, refractive error can be corrected by performing laser in situ keratomileusis (LASIK) or photorefractive keratectomy (PRK) to alter the curvature of the cornea.

With the onset of middle age, presbyopia develops as the lens within the eye becomes unable to increase its refractive power to focus on near objects. To compensate for presbyopia, an emmetropic patient must use reading glasses. A patient already wearing glasses for distance correction usually switches to bifocals. The only exception is a myopic patient, who may achieve clear vision at near simply by removing glasses containing the distance prescription.
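Acuity in the discussion that follows is quoted in both 6-m and 20-ft Snellen notation (6/60 is the same ratio as 20/200), and the same ratio can be expressed as a LogMAR value. The conversion arithmetic can be sketched as follows; this is an illustrative aid, not part of the chapter:

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """LogMAR is log10 of the reciprocal Snellen ratio; 6/6 (20/20) gives 0.0."""
    return math.log10(denominator / numerator)

def metric_to_feet(denominator_m: float) -> float:
    """Convert the denominator of a 6-m Snellen fraction to its 20-ft equivalent."""
    return denominator_m * 20 / 6

# The equivalences cited in the text:
for den_m in (6, 12, 60, 240):
    print(f"6/{den_m} = 20/{metric_to_feet(den_m):.0f} "
          f"(LogMAR {snellen_to_logmar(6, den_m):.2f})")
```

Run as written, this reproduces the pairings used in the text, e.g., 6/60 = 20/200 (LogMAR 1.00) and 6/240 = 20/800 (LogMAR 1.60).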
Refractive errors usually develop slowly and remain stable after adolescence, except in unusual circumstances. For example, the acute onset of diabetes mellitus can produce sudden myopia because of lens edema induced by hyperglycemia. Testing vision through a pinhole aperture is a useful way to screen quickly for refractive error. If visual acuity is better through a pinhole than it is with the unaided eye, the patient needs refraction to obtain best corrected visual acuity.

The Snellen chart is used to test acuity at a distance of 6 m (20 ft). For convenience, a scale version of the Snellen chart called the Rosenbaum card is held at 36 cm (14 in.) from the patient (Fig. 39-1). All subjects should be able to read the 6/6 m (20/20 ft) line with each eye using their refractive correction, if any. Patients who need reading glasses because of presbyopia must wear them for accurate testing with the Rosenbaum card. If 6/6 (20/20) acuity is not present in each eye, the deficiency in vision must be explained. If it is worse than 6/240 (20/800), acuity should be recorded in terms of counting fingers, hand motions, light perception, or no light perception. Legal blindness is defined by the Internal Revenue Service as a best corrected acuity of 6/60 (20/200) or less in the better eye or a binocular visual field subtending 20° or less. For driving, the laws vary by state, but most states require a corrected acuity of 6/12 (20/40) in at least one eye for unrestricted privileges. Patients with a homonymous hemianopia should not drive.

The pupils should be tested individually in dim light with the patient fixating on a distant target. There is no need to check the near response if the pupils respond briskly to light, because isolated loss of constriction (miosis) to accommodation does not occur. For this reason, the ubiquitous abbreviation PERRLA (pupils equal, round, and reactive to light and accommodation) implies a wasted effort with the last step.
However, it is important to test the near response if the light response is poor or absent. Light-near dissociation occurs with neurosyphilis (Argyll Robertson pupil), with lesions of the dorsal midbrain (Parinaud’s syndrome), and after aberrant regeneration (oculomotor nerve palsy, Adie’s tonic pupil).

FIGURE 39-1 The Rosenbaum card is a miniature, scale version of the Snellen chart for testing visual acuity at near. When the visual acuity is recorded, the Snellen distance equivalent should bear a notation indicating that vision was tested at near, not at 6 m (20 ft), or else the Jaeger number system should be used to report the acuity.

An eye with no light perception has no pupillary response to direct light stimulation. If the retina or optic nerve is only partially injured, the direct pupillary response will be weaker than the consensual pupillary response evoked by shining a light into the healthy fellow eye. A relative afferent pupillary defect (Marcus Gunn pupil) can be elicited with the swinging flashlight test (Fig. 39-2). It is an extremely useful sign in retrobulbar optic neuritis and other optic nerve diseases, in which it may be the sole objective evidence for disease. In bilateral optic neuropathy, no afferent pupil defect is present if the optic nerves are affected equally.

Subtle inequality in pupil size, up to 0.5 mm, is a fairly common finding in normal persons. The diagnosis of essential or physiologic anisocoria is secure as long as the relative pupil asymmetry remains constant as ambient lighting varies. Anisocoria that increases in dim light indicates a sympathetic paresis of the iris dilator muscle. The triad of miosis with ipsilateral ptosis and anhidrosis constitutes Horner’s syndrome, although anhidrosis is an inconstant feature.
Brainstem stroke, carotid dissection, and neoplasm impinging on the sympathetic chain occasionally are identified as the cause of Horner’s syndrome, but most cases are idiopathic. Anisocoria that increases in bright light suggests a parasympathetic palsy. The first concern is an oculomotor nerve paresis. This possibility is excluded if the eye movements are full and the patient has no ptosis or diplopia. Acute pupillary dilation (mydriasis) can result from damage to the ciliary ganglion in the orbit. Common mechanisms are infection (herpes zoster, influenza), trauma (blunt, penetrating, surgical), and ischemia (diabetes, temporal arteritis). After denervation of the iris sphincter the pupil does not respond well to light, but the response to near is often relatively intact. When the near stimulus is removed, the pupil redilates very slowly compared with the normal pupil, hence the term tonic pupil. In Adie’s syndrome a tonic pupil is present, sometimes in conjunction with weak or absent tendon reflexes in the lower extremities.

FIGURE 39-2 Demonstration of a relative afferent pupil defect (Marcus Gunn pupil) in the left eye, done with the patient fixating on a distant target. A. With dim background lighting, the pupils are equal and relatively large. B. Shining a flashlight into the right eye evokes equal, strong constriction of both pupils. C. Swinging the flashlight over to the damaged left eye causes dilation of both pupils, although they remain smaller than in A. Swinging the flashlight back over to the healthy right eye would result in symmetric constriction back to the appearance shown in B. Note that the pupils always remain equal; the damage to the left retina/optic nerve is revealed by weaker bilateral pupil constriction to a flashlight in the left eye compared with the right eye. (From P Levatin: Arch Ophthalmol 62:768, 1959. Copyright © 1959 American Medical Association. All rights reserved.)
This benign disorder, which occurs predominantly in healthy young women, is assumed to represent a mild dysautonomia. Tonic pupils are also associated with Shy-Drager syndrome, segmental hypohidrosis, diabetes, and amyloidosis. Occasionally, a tonic pupil is discovered incidentally in an otherwise completely normal, asymptomatic individual. The diagnosis is confirmed by placing a drop of dilute (0.125%) pilocarpine into each eye. Denervation hypersensitivity produces pupillary constriction in a tonic pupil, whereas the normal pupil shows no response. Pharmacologic mydriasis from accidental or deliberate instillation of anticholinergic agents (atropine, scopolamine drops) into the eye is another possibility. In this situation, normal-strength (1%) pilocarpine causes no constriction.

Both pupils are affected equally by systemic medications. They are small with narcotic use (morphine, heroin) and large with anticholinergics (scopolamine). Parasympathetic agents (pilocarpine, demecarium bromide) used to treat glaucoma produce miosis. In any patient with an unexplained pupillary abnormality, a slit-lamp examination is helpful to exclude surgical trauma to the iris, an occult foreign body, perforating injury, intraocular inflammation, adhesions (synechia), angle-closure glaucoma, and iris sphincter rupture from blunt trauma.

Eye movements are tested by asking the patient, with both eyes open, to pursue a small target such as a penlight into the cardinal fields of gaze. Normal ocular versions are smooth, symmetric, full, and maintained in all directions without nystagmus. Saccades, or quick refixation eye movements, are assessed by having the patient look back and forth between two stationary targets. The eyes should move rapidly and accurately in a single jump to their target. Ocular alignment can be judged by holding a penlight directly in front of the patient at about 1 m.
If the eyes are straight, the corneal light reflex will be centered in the middle of each pupil. To test eye alignment more precisely, the cover test is useful. The patient is instructed to look at a small fixation target in the distance. One eye is covered suddenly while the second eye is observed. If the second eye shifts to fixate on the target, it was misaligned. If it does not move, the first eye is uncovered and the test is repeated on the second eye. If neither eye moves, the eyes are aligned orthotropically. If the eyes are orthotropic in primary gaze but the patient complains of diplopia, the cover test should be performed with the head tilted or turned in whatever direction elicits diplopia. With practice, the examiner can detect an ocular deviation (heterotropia) as small as 1–2° with the cover test. In a patient with vertical diplopia, a small deviation can be difficult to detect and easy to dismiss. The magnitude of the deviation can be measured by placing a prism in front of the misaligned eye to determine the power required to neutralize the fixation shift evoked by covering the other eye. Temporary press-on plastic Fresnel prisms, prism eyeglasses, or eye muscle surgery can be used to restore binocular alignment.

Stereoacuity is determined by presenting targets with retinal disparity separately to each eye by using polarized images. The most popular office tests measure a range of thresholds from 800 to 40 seconds of arc. Normal stereoacuity is 40 seconds of arc. If a patient achieves this level of stereoacuity, one is assured that the eyes are aligned orthotropically and that vision is intact in each eye. Random dot stereograms have no monocular depth cues and provide an excellent screening test for strabismus and amblyopia in children.

The retina contains three classes of cones, with visual pigments of differing peak spectral sensitivity: red (560 nm), green (530 nm), and blue (430 nm).
The red and green cone pigments are encoded on the X chromosome, and the blue cone pigment on chromosome 7. Mutations of the blue cone pigment are exceedingly rare. Mutations of the red and green pigments cause congenital X-linked color blindness in 8% of males. Affected individuals are not truly color blind; rather, they differ from normal subjects in the way they perceive color and how they combine primary monochromatic lights to match a particular color. Anomalous trichromats have three cone types, but a mutation in one cone pigment (usually red or green) causes a shift in peak spectral sensitivity, altering the proportion of primary colors required to achieve a color match. Dichromats have only two cone types and therefore will accept a color match based on only two primary colors. Anomalous trichromats and dichromats have 6/6 (20/20) visual acuity, but their hue discrimination is impaired. Ishihara color plates can be used to detect red-green color blindness. The test plates contain a hidden number that is visible only to subjects with color confusion from red-green color blindness. Because color blindness is almost exclusively X-linked, it is worth screening only male children.

The Ishihara plates often are used to detect acquired defects in color vision, although they are intended as a screening test for congenital color blindness. Acquired defects in color vision frequently result from disease of the macula or optic nerve. For example, patients with a history of optic neuritis often complain of color desaturation long after their visual acuity has returned to normal. Color blindness also can result from bilateral strokes involving the ventral portion of the occipital lobe (cerebral achromatopsia). Such patients can perceive only shades of gray and also may have difficulty recognizing faces (prosopagnosia). Infarcts of the dominant occipital lobe sometimes give rise to color anomia. Affected patients can discriminate colors but cannot name them.
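The cover test described earlier detects deviations as small as 1–2°, whereas the neutralizing prism is specified in prism diopters. By standard optical convention (not stated in the chapter), one prism diopter displaces an image 1 cm at 1 m, so the required power is 100 × tan(angle). A small illustrative sketch:

```python
import math

def degrees_to_prism_diopters(angle_deg: float) -> float:
    """One prism diopter deflects light 1 cm at 1 m, i.e., 100 * tan(angle)."""
    return 100.0 * math.tan(math.radians(angle_deg))

# The 1-2 degree heterotropias detectable by cover testing correspond
# to roughly 1.7-3.5 prism diopters of neutralizing power:
for deg in (1.0, 2.0, 5.0):
    print(f"{deg:.0f} degree deviation ~ "
          f"{degrees_to_prism_diopters(deg):.1f} prism diopters")
```

For the small angles encountered clinically, tan(angle) is nearly linear, which is why a rule of thumb of about 1.75 prism diopters per degree is often adequate.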
Vision can be impaired by damage to the visual system anywhere from the eyes to the occipital lobes. One can localize the site of the lesion with considerable accuracy by mapping the visual field deficit by finger confrontation and then correlating it with the topographic anatomy of the visual pathway (Fig. 39-3). Quantitative visual field mapping is performed by computer-driven perimeters that present a target of variable intensity at fixed positions in the visual field (Fig. 39-3A). By generating an automated printout of light thresholds, these static perimeters provide a sensitive means of detecting scotomas in the visual field. They are exceedingly useful for serial assessment of visual function in chronic diseases such as glaucoma and pseudotumor cerebri. The crux of visual field analysis is to decide whether a lesion is before, at, or behind the optic chiasm. If a scotoma is confined to one eye, it must be due to a lesion anterior to the chiasm, involving either the optic nerve or the retina. Retinal lesions produce scotomas that correspond optically to their location in the fundus. For example, a superior-nasal retinal detachment results in an inferior-temporal field cut. Damage to the macula causes a central scotoma (Fig. 39-3B). Optic nerve disease produces characteristic patterns of visual field loss. Glaucoma selectively destroys axons that enter the superotemporal or inferotemporal poles of the optic disc, resulting in arcuate scotomas shaped like a Turkish scimitar, which emanate from the blind spot and curve around fixation to end flat against the horizontal meridian (Fig. 39-3C). This type of field defect mirrors the arrangement of the nerve fiber layer in the temporal retina. Arcuate or nerve fiber layer scotomas also result from optic neuritis, ischemic optic neuropathy, optic disc drusen, and branch retinal artery or vein occlusion. 
Damage to the entire upper or lower pole of the optic disc causes an altitudinal field cut that follows the horizontal meridian (Fig. 39-3D). This pattern of visual field loss is typical of ischemic optic neuropathy but also results from retinal vascular occlusion, advanced glaucoma, and optic neuritis.

About half the fibers in the optic nerve originate from ganglion cells serving the macula. Damage to papillomacular fibers causes a cecocentral scotoma that encompasses the blind spot and macula (Fig. 39-3E). If the damage is irreversible, pallor eventually appears in the temporal portion of the optic disc. Temporal pallor from a cecocentral scotoma may develop in optic neuritis, nutritional optic neuropathy, toxic optic neuropathy, Leber’s hereditary optic neuropathy, Kjer’s dominant optic atrophy, and compressive optic neuropathy. It is worth mentioning that the temporal side of the optic disc is slightly paler than the nasal side in most normal individuals. Therefore, it sometimes can be difficult to decide whether the temporal pallor visible on fundus examination represents a pathologic change. Pallor of the nasal rim of the optic disc is a less equivocal sign of optic atrophy.

FIGURE 39-3 Ventral view of the brain, correlating patterns of visual field loss with the sites of lesions in the visual pathway. The visual fields overlap partially, creating 120° of central binocular field flanked by a 40° monocular crescent on either side. The visual field maps in this figure were done with a computer-driven perimeter (Humphrey Instruments, Carl Zeiss, Inc.). It plots the retinal sensitivity to light in the central 30° by using a gray scale format. Areas of visual field loss are shown in black. The examples of common monocular, prechiasmal field defects are all shown for the right eye. By convention, the visual fields are always recorded with the left eye’s field on the left and the right eye’s field on the right, just as the patient sees the world. [Panels: Monocular prechiasmal field defects — normal field, right eye; central scotoma; nerve-fiber bundle (arcuate) scotoma; altitudinal scotoma; cecocentral scotoma; enlarged blind spot with peripheral constriction. Binocular chiasmal or postchiasmal field defects — homonymous hemianopia with macular sparing.]

At the optic chiasm, fibers from nasal ganglion cells decussate into the contralateral optic tract. Crossed fibers are damaged more by compression than are uncrossed fibers. As a result, mass lesions of the sellar region cause a temporal hemianopia in each eye. Tumors anterior to the optic chiasm, such as meningiomas of the tuberculum sella, produce a junctional scotoma characterized by an optic neuropathy in one eye and a superior-temporal field cut in the other eye (Fig. 39-3G). More symmetric compression of the optic chiasm by a pituitary adenoma (see Fig. 403-1), meningioma, craniopharyngioma, glioma, or aneurysm results in a bitemporal hemianopia (Fig. 39-3H). The insidious development of a bitemporal hemianopia often goes unnoticed by the patient and will escape detection by the physician unless each eye is tested separately.

It is difficult to localize a postchiasmal lesion accurately, because injury anywhere in the optic tract, lateral geniculate body, optic radiations, or visual cortex can produce a homonymous hemianopia (i.e., a temporal hemifield defect in the contralateral eye and a matching nasal hemifield defect in the ipsilateral eye) (Fig. 39-3I). A unilateral postchiasmal lesion leaves the visual acuity in each eye unaffected, although the patient may read the letters on only the left or right half of the eye chart. Lesions of the optic radiations tend to cause poorly matched or incongruous field defects in each eye. Damage to the optic radiations in the temporal lobe (Meyer’s loop) produces a superior quadrantic homonymous hemianopia (Fig.
39-3J), whereas injury to the optic radiations in the parietal lobe results in an inferior quadrantic homonymous hemianopia (Fig. 39-3K). Lesions of the primary visual cortex give rise to dense, congruous hemianopic field defects. Occlusion of the posterior cerebral artery supplying the occipital lobe is a common cause of total homonymous hemianopia. Some patients with hemianopia after occipital stroke have macular sparing, because the macular representation at the tip of the occipital lobe is supplied by collaterals from the middle cerebral artery (Fig. 39-3L). Destruction of both occipital lobes produces cortical blindness. This condition can be distinguished from bilateral prechiasmal visual loss by noting that the pupil responses and optic fundi remain normal.

RED OR PAINFUL EYE

Corneal Abrasions Corneal abrasions are seen best by placing a drop of fluorescein in the eye and looking with the slit lamp, using a cobalt-blue light. A penlight with a blue filter will suffice if a slit lamp is not available. Damage to the corneal epithelium is revealed by yellow fluorescence of the exposed basement membrane underlying the epithelium. It is important to check for foreign bodies. To search the conjunctival fornices, the lower lid should be pulled down and the upper lid everted. A foreign body can be removed with a moistened cotton-tipped applicator after a drop of a topical anesthetic such as proparacaine has been placed in the eye. Alternatively, it may be possible to flush the foreign body from the eye by irrigating copiously with saline or artificial tears. If the corneal epithelium has been abraded, antibiotic ointment and a patch should be applied to the eye. A drop of an intermediate-acting cycloplegic such as cyclopentolate hydrochloride 1% helps reduce pain by relaxing the ciliary body. The eye should be reexamined the next day. Minor abrasions may not require patching, antibiotics, or cycloplegia.
Subconjunctival Hemorrhage This results from rupture of small vessels bridging the potential space between the episclera and the conjunctiva. Blood dissecting into this space can produce a spectacular red eye, but vision is not affected and the hemorrhage resolves without treatment. Subconjunctival hemorrhage is usually spontaneous but can result from blunt trauma, eye rubbing, or vigorous coughing. Occasionally it is a clue to an underlying bleeding disorder.

Pinguecula Pinguecula is a small, raised conjunctival nodule at the temporal or nasal limbus. In adults such lesions are extremely common and have little significance unless they become inflamed (pingueculitis). They are more apt to occur in workers with frequent outdoor exposure. A pterygium resembles a pinguecula but has crossed the limbus to encroach on the corneal surface. Removal is justified when symptoms of irritation or blurring develop, but recurrence is a common problem.

Blepharitis This refers to inflammation of the eyelids. The most common form occurs in association with acne rosacea or seborrheic dermatitis. The eyelid margins usually are colonized heavily by staphylococci. Upon close inspection, they appear greasy, ulcerated, and crusted with scaling debris that clings to the lashes. Treatment consists of strict eyelid hygiene, using warm compresses and eyelash scrubs with baby shampoo. An external hordeolum (sty) is caused by staphylococcal infection of the superficial accessory glands of Zeis or Moll located in the eyelid margins. An internal hordeolum occurs after suppurative infection of the oil-secreting meibomian glands within the tarsal plate of the eyelid. Topical antibiotics such as bacitracin/polymyxin B ophthalmic ointment can be applied. Systemic antibiotics, usually tetracyclines or azithromycin, sometimes are necessary for treatment of meibomian gland inflammation (meibomitis) or chronic, severe blepharitis.
A chalazion is a painless, chronic granulomatous inflammation of a meibomian gland that produces a pealike nodule within the eyelid. It can be incised and drained or injected with glucocorticoids. Basal cell, squamous cell, or meibomian gland carcinoma should be suspected with any nonhealing ulcerative lesion of the eyelids.

Dacryocystitis An inflammation of the lacrimal drainage system, dacryocystitis can produce epiphora (tearing) and ocular injection. Gentle pressure over the lacrimal sac evokes pain and reflux of mucus or pus from the tear puncta. Dacryocystitis usually occurs after obstruction of the lacrimal system. It is treated with topical and systemic antibiotics, followed by probing, silicone stent intubation, or surgery to reestablish patency. Entropion (inversion of the eyelid) or ectropion (sagging or eversion of the eyelid) can also lead to epiphora and ocular irritation.

Conjunctivitis Conjunctivitis is the most common cause of a red, irritated eye. Pain is minimal, and visual acuity is reduced only slightly. The most common viral etiology is adenovirus infection. It causes a watery discharge, a mild foreign-body sensation, and photophobia. Bacterial infection tends to produce a more mucopurulent exudate. Mild cases of infectious conjunctivitis usually are treated empirically with broad-spectrum topical ocular antibiotics such as sulfacetamide 10%, polymyxin-bacitracin, or a trimethoprim-polymyxin combination. Smears and cultures usually are reserved for severe, resistant, or recurrent cases of conjunctivitis. To prevent contagion, patients should be admonished to wash their hands frequently, not to touch their eyes, and to avoid direct contact with others.

Allergic Conjunctivitis This condition is extremely common and often is mistaken for infectious conjunctivitis. Itching, redness, and epiphora are typical. The palpebral conjunctiva may become hypertrophic with giant excrescences called cobblestone papillae.
Irritation from contact lenses or any chronic foreign body also can induce formation of cobblestone papillae. Atopic conjunctivitis occurs in subjects with atopic dermatitis or asthma. Symptoms caused by allergic conjunctivitis can be alleviated with cold compresses, topical vasoconstrictors, antihistamines, and mast cell stabilizers such as cromolyn sodium. Topical glucocorticoid solutions provide dramatic relief of immune-mediated forms of conjunctivitis, but their long-term use is ill advised because of the complications of glaucoma, cataract, and secondary infection. Topical nonsteroidal anti-inflammatory drugs (NSAIDs) (e.g., ketorolac tromethamine) are better alternatives. Keratoconjunctivitis Sicca Also known as dry eye, this produces a burning foreign-body sensation, injection, and photophobia. In mild cases the eye appears surprisingly normal, but tear production measured by wetting of a filter paper (Schirmer strip) is deficient. A variety of systemic drugs, including antihistaminic, anticholinergic, and psychotropic medications, result in dry eye by reducing lacrimal secretion. Disorders that involve the lacrimal gland directly, such as sarcoidosis and Sjögren’s syndrome, also cause dry eye. Patients may develop dry eye after radiation therapy if the treatment field includes the orbits. Problems with ocular drying are also common after lesions affecting cranial nerve V or VII. Corneal anesthesia is particularly dangerous, because the absence of a normal blink reflex exposes the cornea to injury without pain to warn the patient. Dry eye is managed by frequent and liberal application of artificial tears and ocular lubricants. In severe cases the tear puncta can be plugged or cauterized to reduce lacrimal outflow. Keratitis Keratitis is a threat to vision because of the risk of corneal clouding, scarring, and perforation. 
Worldwide, the two leading causes of blindness from keratitis are trachoma from chlamydial infection and vitamin A deficiency related to malnutrition. In the United States, contact lenses play a major role in corneal infection and ulceration. They should not be worn by anyone with an active eye infection. In evaluating the cornea, it is important to differentiate between a superficial infection (keratoconjunctivitis) and a deeper, more serious ulcerative process. The latter is accompanied by greater visual loss, pain, photophobia, redness, and discharge. Slit-lamp examination shows disruption of the corneal epithelium, a cloudy infiltrate or abscess in the stroma, and an inflammatory cellular reaction in the anterior chamber. In severe cases, pus settles at the bottom of the anterior chamber, giving rise to a hypopyon. Immediate empirical antibiotic therapy should be initiated after corneal scrapings are obtained for Gram’s stain, Giemsa stain, and cultures. Fortified topical antibiotics are most effective, supplemented with subconjunctival antibiotics as required. A fungal etiology should always be considered in a patient with keratitis. Fungal infection is common in warm humid climates, especially after penetration of the cornea by plant or vegetable material. Herpes Simplex The herpesviruses are a major cause of blindness from keratitis. Most adults in the United States have serum antibodies to herpes simplex, indicating prior viral infection (Chap. 216). Primary ocular infection generally is caused by herpes simplex type 1 rather than type 2. It manifests as a unilateral follicular blepharoconjunctivitis that is easily confused with adenoviral conjunctivitis unless telltale vesicles appear on the periocular skin or conjunctiva. A dendritic pattern of corneal epithelial ulceration revealed by fluorescein staining is pathognomonic for herpes infection but is seen in only a minority of primary infections. 
Recurrent ocular infection arises from reactivation of the latent herpesvirus. Viral eruption in the corneal epithelium may result in the characteristic herpes dendrite. Involvement of the corneal stroma produces edema, vascularization, and iridocyclitis. Herpes keratitis is treated with topical antiviral agents, cycloplegics, and oral acyclovir. Topical glucocorticoids are effective in mitigating corneal scarring but must be used with extreme caution because of the danger of corneal melting and perforation. Topical glucocorticoids also carry the risk of prolonging infection and inducing glaucoma. Herpes Zoster Herpes zoster from reactivation of latent varicella (chickenpox) virus causes a dermatomal pattern of painful vesicular dermatitis. Ocular symptoms can occur after zoster eruption in any branch of the trigeminal nerve but are particularly common when vesicles form on the nose, reflecting nasociliary (V1) nerve involvement (Hutchinson’s sign). Herpes zoster ophthalmicus produces corneal dendrites, which can be difficult to distinguish from those seen in herpes simplex. Stromal keratitis, anterior uveitis, raised intraocular pressure, ocular motor nerve palsies, acute retinal necrosis, and postherpetic scarring and neuralgia are other common sequelae. Herpes zoster ophthalmicus is treated with antiviral agents and cycloplegics. In severe cases, glucocorticoids may be added to prevent permanent visual loss from corneal scarring. Episcleritis This is an inflammation of the episclera, a thin layer of connective tissue between the conjunctiva and the sclera. Episcleritis resembles conjunctivitis, but it is a more localized process and discharge is absent. Most cases of episcleritis are idiopathic, but some occur in the setting of an autoimmune disease. 
Scleritis refers to a deeper, more severe inflammatory process that frequently is associated with a connective tissue disease such as rheumatoid arthritis, lupus erythematosus, polyarteritis nodosa, granulomatosis with polyangiitis (Wegener’s), or relapsing polychondritis. The inflammation and thickening of the sclera can be diffuse or nodular. In anterior forms of scleritis, the globe assumes a violet hue and the patient complains of severe ocular tenderness and pain. With posterior scleritis, the pain and redness may be less marked, but there is often proptosis, choroidal effusion, reduced motility, and visual loss. Episcleritis and scleritis should be treated with NSAIDs. If these agents fail, topical or even systemic glucocorticoid therapy may be necessary, especially if an underlying autoimmune process is active. Uveitis Involving the anterior structures of the eye, uveitis also is called iritis or iridocyclitis. The diagnosis requires slit-lamp examination to identify inflammatory cells floating in the aqueous humor or deposited on the corneal endothelium (keratic precipitates). Anterior uveitis develops in sarcoidosis, ankylosing spondylitis, juvenile rheumatoid arthritis, inflammatory bowel disease, psoriasis, reactive arthritis, and Behçet’s disease. It also is associated with herpes infections, syphilis, Lyme disease, onchocerciasis, tuberculosis, and leprosy. Although anterior uveitis can occur in conjunction with many diseases, no cause is found to explain the majority of cases. For this reason, laboratory evaluation usually is reserved for patients with recurrent or severe anterior uveitis. Treatment is aimed at reducing inflammation and scarring by judicious use of topical glucocorticoids. Dilatation of the pupil reduces pain and prevents the formation of synechiae. Posterior Uveitis This is diagnosed by observing inflammation of the vitreous, retina, or choroid on fundus examination. 
It is more likely than anterior uveitis to be associated with an identifiable systemic disease. Some patients have panuveitis, or inflammation of both the anterior and posterior segments of the eye. Posterior uveitis is a manifestation of autoimmune diseases such as sarcoidosis, Behçet’s disease, Vogt-Koyanagi-Harada syndrome, and inflammatory bowel disease. It also accompanies diseases such as toxoplasmosis, onchocerciasis, cysticercosis, coccidioidomycosis, toxocariasis, and histoplasmosis; infections caused by organisms such as Candida, Pneumocystis carinii, Cryptococcus, Aspergillus, herpes, and cytomegalovirus (see Fig. 219-1); and other diseases, such as syphilis, Lyme disease, tuberculosis, cat-scratch disease, Whipple’s disease, and brucellosis. In multiple sclerosis, chronic inflammatory changes can develop in the extreme periphery of the retina (pars planitis or intermediate uveitis). Acute Angle-Closure Glaucoma This is an unusual but frequently misdiagnosed cause of a red, painful eye. Asian populations have a particularly high risk of angle-closure glaucoma. Susceptible eyes have a shallow anterior chamber because the eye has either a short axial length (hyperopia) or a lens enlarged by the gradual development of cataract. When the pupil becomes mid-dilated, the peripheral iris blocks aqueous outflow via the anterior chamber angle and the intraocular pressure rises abruptly, producing pain, injection, corneal edema, obscurations, and blurred vision. In some patients, ocular symptoms are overshadowed by nausea, vomiting, or headache, prompting a fruitless workup for abdominal or neurologic disease. The diagnosis is made by measuring the intraocular pressure during an acute attack or by performing gonioscopy, a procedure that allows one to observe a narrow chamber angle with a mirrored contact lens. 
Acute angle closure is treated with acetazolamide (PO or IV), topical beta blockers, prostaglandin analogues, α2-adrenergic agonists, and pilocarpine to induce miosis. If these measures fail, a laser can be used to create a hole in the peripheral iris to relieve pupillary block. Many physicians are reluctant to dilate patients routinely for fundus examination because they fear precipitating an angle-closure glaucoma. The risk is actually remote and more than outweighed by the potential benefit to patients of discovering a hidden fundus lesion visible only through a fully dilated pupil. Moreover, a single attack of angle closure after pharmacologic dilatation rarely causes any permanent damage to the eye and serves as an inadvertent provocative test to identify patients with narrow angles who would benefit from prophylactic laser iridectomy. Endophthalmitis This results from bacterial, viral, fungal, or parasitic infection of the internal structures of the eye. It usually is acquired by hematogenous seeding from a remote site. Chronically ill, diabetic, or immunosuppressed patients, especially those with a history of indwelling IV catheters or positive blood cultures, are at greatest risk for endogenous endophthalmitis. Although most patients have ocular pain and injection, visual loss is sometimes the only symptom. Septic emboli from a diseased heart valve or a dental abscess that lodge in the retinal circulation can give rise to endophthalmitis. White-centered retinal hemorrhages known as Roth’s spots (Fig. 39-4) are considered pathognomonic for subacute bacterial endocarditis, but they also appear in leukemia, diabetes, and many other conditions. Endophthalmitis also occurs as a complication of ocular surgery, especially glaucoma filtering, occasionally months or even years after the operation. An occult penetrating foreign body or unrecognized trauma to the globe should be considered in any patient with unexplained intraocular infection or inflammation. 
TRANSIENT OR SUDDEN VISUAL LOSS Amaurosis Fugax This term refers to a transient ischemic attack of the retina (Chap. 446). Because neural tissue has a high rate of metabolism, interruption of blood flow to the retina for more than a few seconds results in transient monocular blindness, a term used interchangeably with amaurosis fugax. Patients describe a rapid fading of vision like a curtain descending, sometimes affecting only a portion of the visual field. Amaurosis fugax usually results from an embolus that becomes stuck within a retinal arteriole (Fig. 39-5). If the embolus breaks up or passes, flow is restored and vision returns quickly to normal without permanent damage. With prolonged interruption of blood flow, the inner retina suffers infarction. Ophthalmoscopy reveals zones of whitened, edematous retina following the distribution of branch retinal arterioles. Complete occlusion of the central retinal artery produces arrest of blood flow and a milky retina with a cherry-red fovea (Fig. 39-6). Emboli are composed of cholesterol (Hollenhorst plaque), calcium, or platelet-fibrin debris. The most common source is an atherosclerotic plaque in the carotid artery or aorta, although emboli also can arise from the heart, especially in patients with diseased valves, atrial fibrillation, or wall motion abnormalities.

FIGURE 39-4 Roth’s spot, cotton-wool spot, and retinal hemorrhages in a 48-year-old liver transplant patient with candidemia from immunosuppression.

FIGURE 39-6 Central retinal artery occlusion in a 78-year-old man reducing acuity to counting fingers in the right eye. Note the splinter hemorrhage on the optic disc and the slightly milky appearance to the macula with a cherry-red fovea.

In rare instances, amaurosis fugax results from low central retinal artery perfusion pressure in a patient with a critical stenosis of the ipsilateral carotid artery and poor collateral flow via the circle of Willis. 
In this situation, amaurosis fugax develops when there is a dip in systemic blood pressure or a slight worsening of the carotid stenosis. Sometimes there is contralateral motor or sensory loss, indicating concomitant hemispheric cerebral ischemia. Retinal arterial occlusion also occurs rarely in association with retinal migraine, lupus erythematosus, anticardiolipin antibodies, anticoagulant deficiency states (protein S, protein C, and antithrombin deficiency), pregnancy, IV drug abuse, blood dyscrasias, dysproteinemias, and temporal arteritis.

FIGURE 39-5 Hollenhorst plaque lodged at the bifurcation of a retinal arteriole proves that a patient is shedding emboli from the carotid artery, great vessels, or heart.

Marked systemic hypertension causes sclerosis of retinal arterioles, splinter hemorrhages, focal infarcts of the nerve fiber layer (cotton-wool spots), and leakage of lipid and fluid (hard exudate) into the macula (Fig. 39-7). In hypertensive crisis, sudden visual loss can result from vasospasm of retinal arterioles and retinal ischemia. In addition, acute hypertension may produce visual loss from ischemic swelling of the optic disc. Patients with acute hypertensive retinopathy should be treated by lowering the blood pressure. However, the blood pressure should not be reduced precipitously, because there is a danger of optic disc infarction from sudden hypoperfusion.

FIGURE 39-7 Hypertensive retinopathy with blurred optic disc, scattered hemorrhages, cotton-wool spots (nerve fiber layer infarcts), and foveal exudate in a 62-year-old man with chronic renal failure and a systolic blood pressure of 220.

Impending branch or central retinal vein occlusion can produce prolonged visual obscurations that resemble those described by patients with amaurosis fugax. The veins appear engorged and phlebitic, with numerous retinal hemorrhages (Fig. 39-8). In some patients, venous blood flow recovers spontaneously, whereas others evolve a frank obstruction with extensive retinal bleeding (“blood and thunder” appearance), infarction, and visual loss. Venous occlusion of the retina is often idiopathic, but hypertension, diabetes, and glaucoma are prominent risk factors. Polycythemia, thrombocythemia, or other factors leading to an underlying hypercoagulable state should be corrected; aspirin treatment may be beneficial.

FIGURE 39-8 Central retinal vein occlusion can produce massive retinal hemorrhage (“blood and thunder”), ischemia, and vision loss.

Anterior Ischemic Optic Neuropathy (AION) This is caused by insufficient blood flow through the posterior ciliary arteries that supply the optic disc. It produces painless monocular visual loss that is sudden in onset, followed sometimes by stuttering progression. The optic disc appears swollen and surrounded by nerve fiber layer splinter hemorrhages (Fig. 39-9). AION is divided into two forms: arteritic and nonarteritic. The nonarteritic form is most common. No specific cause can be identified, although diabetes and hypertension are common risk factors. A crowded disc architecture and small optic cup predispose to the development of nonarteritic AION. No treatment is available. About 5% of patients, especially those age >60, develop the arteritic form of AION in conjunction with giant-cell (temporal) arteritis (Chap. 385). It is urgent to recognize arteritic AION so that high doses of glucocorticoids can be instituted immediately to prevent blindness in the second eye. Symptoms of polymyalgia rheumatica may be present; the sedimentation rate and C-reactive protein level are usually elevated. In a patient with visual loss from suspected arteritic AION, temporal artery biopsy is mandatory to confirm the diagnosis. Glucocorticoids should be started immediately, without waiting for the biopsy to be completed. The diagnosis of arteritic AION is difficult to sustain in the face of a negative temporal artery biopsy, but such cases do occur rarely. It is important to biopsy an arterial segment of at least 3 cm and to examine a sufficient number of tissue sections prepared from the specimen.

FIGURE 39-9 Anterior ischemic optic neuropathy from temporal arteritis in a 67-year-old woman with acute disc swelling, splinter hemorrhages, visual loss, and an erythrocyte sedimentation rate of 70 mm/h.

Posterior Ischemic Optic Neuropathy This is an uncommon cause of acute visual loss, induced by the combination of severe anemia and hypotension. Cases have been reported after major blood loss during surgery (especially in patients undergoing cardiac or lumbar spine operations), exsanguinating trauma, gastrointestinal bleeding, and renal dialysis. The fundus usually appears normal, although optic disc swelling develops if the process extends anteriorly far enough to reach the globe. Vision can be salvaged in some patients by prompt blood transfusion and reversal of hypotension.

Optic Neuritis This is a common inflammatory disease of the optic nerve. In the Optic Neuritis Treatment Trial (ONTT), the mean age of patients was 32 years, 77% were female, 92% had ocular pain (especially with eye movements), and 35% had optic disc swelling. In most patients, the demyelinating event was retrobulbar and the ocular fundus appeared normal on initial examination (Fig. 39-10), although optic disc pallor slowly developed over subsequent months. Virtually all patients experience a gradual recovery of vision after a single episode of optic neuritis, even without treatment. This rule is so reliable that failure of vision to improve after a first attack of optic neuritis casts doubt on the original diagnosis. Treatment with high-dose IV methylprednisolone (250 mg every 6 h for 3 days) followed by oral prednisone (1 mg/kg per day for 11 days) makes no difference in ultimate acuity 6 months after the attack, but the recovery of visual function occurs more rapidly. Therefore, when visual loss is severe (worse than 20/100), IV followed by PO glucocorticoids are often recommended. For some patients, optic neuritis remains an isolated event. However, the ONTT showed that the 15-year cumulative probability of developing clinically definite multiple sclerosis after optic neuritis is 50%. A brain magnetic resonance (MR) scan is advisable in every patient with a first attack of optic neuritis. If two or more plaques are present on initial imaging, treatment should be considered to prevent the development of additional demyelinating lesions (Chap. 458).

FIGURE 39-10 Retrobulbar optic neuritis is characterized by a normal fundus examination initially, hence the rubric “the doctor sees nothing, and the patient sees nothing.” Optic atrophy develops after severe or repeated attacks.

Leber’s Hereditary Optic Neuropathy This disease usually affects young men, causing gradual, painless, severe central visual loss in one eye, followed weeks to years later by the same process in the other eye. Acutely, the optic disc appears mildly plethoric with surface capillary telangiectasias but no vascular leakage on fluorescein angiography. Eventually optic atrophy ensues. Leber’s optic neuropathy is caused by a point mutation at codon 11778 in the mitochondrial gene encoding nicotinamide adenine dinucleotide dehydrogenase (NADH) subunit 4. Additional mutations responsible for the disease have been identified, most in mitochondrial genes that encode proteins involved in electron transport. Mitochondrial mutations that cause Leber’s neuropathy are inherited from the mother by all her children, but usually only sons develop symptoms.

FIGURE 39-11 Optic atrophy is not a specific diagnosis but refers to the combination of optic disc pallor, arteriolar narrowing, and nerve fiber layer destruction produced by a host of eye diseases, especially optic neuropathies. 
Toxic Optic Neuropathy This can result in acute visual loss with bilateral optic disc swelling and central or cecocentral scotomas. Such cases have been reported to result from exposure to ethambutol, methyl alcohol (moonshine), ethylene glycol (antifreeze), or carbon monoxide. In toxic optic neuropathy, visual loss also can develop gradually and produce optic atrophy (Fig. 39-11) without a phase of acute optic disc edema. Many agents have been implicated as a cause of toxic optic neuropathy, but the evidence supporting the association for many is weak. The following is a partial list of potential offending drugs or toxins: disulfiram, ethchlorvynol, chloramphenicol, amiodarone, monoclonal anti-CD3 antibody, ciprofloxacin, digitalis, streptomycin, lead, arsenic, thallium, d-penicillamine, isoniazid, emetine, sildenafil, tadalafil, vardenafil, and sulfonamides. Deficiency states induced by starvation, malabsorption, or alcoholism can lead to insidious visual loss. Thiamine, vitamin B12, and folate levels should be checked in any patient with unexplained bilateral central scotomas and optic pallor. Papilledema This connotes bilateral optic disc swelling from raised intracranial pressure (Fig. 39-12). Headache is a common but not invariable accompaniment. All other forms of optic disc swelling (e.g., from optic neuritis or ischemic optic neuropathy) should be called “optic disc edema.” This convention is arbitrary but serves to avoid confusion. Often it is difficult to differentiate papilledema from other forms of optic disc edema by fundus examination alone. Transient visual obscurations are a classic symptom of papilledema. They can occur in only one eye or simultaneously in both eyes. They usually last seconds but can persist longer. Obscurations follow abrupt shifts in posture or happen spontaneously. When obscurations are prolonged or spontaneous, the papilledema is more threatening. 
Visual acuity is not affected by papilledema unless the papilledema is severe, longstanding, or accompanied by macular edema and hemorrhage. Visual field testing shows enlarged blind spots and peripheral constriction (Fig. 39-3F). With unremitting papilledema, peripheral visual field loss progresses in an insidious fashion while the optic nerve develops atrophy. In this setting, reduction of optic disc swelling is an ominous sign of a dying nerve rather than an encouraging indication of resolving papilledema. Evaluation of papilledema requires neuroimaging to exclude an intracranial lesion. MR angiography is appropriate in selected cases to search for a dural venous sinus occlusion or an arteriovenous shunt. If neuroradiologic studies are negative, the subarachnoid opening pressure should be measured by lumbar puncture. An elevated pressure, with normal cerebrospinal fluid, points by exclusion to the diagnosis of pseudotumor cerebri (idiopathic intracranial hypertension). The majority of patients are young, female, and obese. Treatment with a carbonic anhydrase inhibitor such as acetazolamide lowers intracranial pressure by reducing the production of cerebrospinal fluid. Weight reduction is vital: bariatric surgery should be considered in patients who cannot lose weight by diet control. If vision loss is severe or progressive, a shunt should be performed without delay to prevent blindness. Occasionally, emergency surgery is required for sudden blindness caused by fulminant papilledema. Optic Disc Drusen These are refractile deposits within the substance of the optic nerve head (Fig. 39-13). They are unrelated to drusen of the retina, which occur in age-related macular degeneration. Optic disc drusen are most common in people of northern European descent. Their diagnosis is obvious when they are visible as glittering particles on the surface of the optic disc. However, in many patients they are hidden beneath the surface, producing pseudopapilledema. 
It is important to recognize optic disc drusen to avoid an unnecessary evaluation for papilledema. Ultrasound or computed tomography (CT) scanning is sensitive for detection of buried optic disc drusen because they contain calcium. In most patients, optic disc drusen are an incidental, innocuous finding, but they can produce visual obscurations. On perimetry they give rise to enlarged blind spots and arcuate scotomas from damage to the optic disc. With increasing age, drusen tend to become more exposed on the disc surface as optic atrophy develops. Hemorrhage, choroidal neovascular membrane, and AION are more likely to occur in patients with optic disc drusen. No treatment is available.

FIGURE 39-12 Papilledema means optic disc edema from raised intracranial pressure. This young woman developed acute papilledema, with hemorrhages and cotton-wool spots, as a rare side effect of treatment with tetracycline for acne.

FIGURE 39-13 Optic disc drusen are calcified, mulberry-like deposits of unknown etiology within the optic disc, giving rise to “pseudopapilledema.”

Vitreous Degeneration This occurs in all individuals with advancing age, leading to visual symptoms. Opacities develop in the vitreous, casting annoying shadows on the retina. As the eye moves, these distracting “floaters” move synchronously, with a slight lag caused by inertia of the vitreous gel. Vitreous traction on the retina causes mechanical stimulation, resulting in perception of flashing lights. This photopsia is brief and is confined to one eye, in contrast to the bilateral, prolonged scintillations of cortical migraine. Contraction of the vitreous can result in sudden separation from the retina, heralded by an alarming shower of floaters and photopsia. This process, known as vitreous detachment, is a common involutional event in the elderly. It is not harmful unless it damages the retina. 
A careful examination of the dilated fundus is important in any patient complaining of floaters or photopsia to search for peripheral tears or holes. If such a lesion is found, laser application can forestall a retinal detachment. Occasionally a tear ruptures a retinal blood vessel, causing vitreous hemorrhage and sudden loss of vision. On attempted ophthalmoscopy the fundus is hidden by a dark haze of blood. Ultrasound is required to examine the interior of the eye for a retinal tear or detachment. If the hemorrhage does not resolve spontaneously, the vitreous can be removed surgically. Vitreous hemorrhage also results from the fragile neovascular vessels that proliferate on the surface of the retina in diabetes, sickle cell anemia, and other ischemic ocular diseases. Retinal Detachment This produces symptoms of floaters, flashing lights, and a scotoma in the peripheral visual field corresponding to the detachment (Fig. 39-14). If the detachment includes the fovea, there is an afferent pupil defect and the visual acuity is reduced. In most eyes, retinal detachment starts with a hole, flap, or tear in the peripheral retina (rhegmatogenous retinal detachment). Patients with peripheral retinal thinning (lattice degeneration) are particularly vulnerable to this process. Once a break has developed in the retina, liquefied vitreous is free to enter the subretinal space, separating the retina from the pigment epithelium. The combination of vitreous traction on the retinal surface and passage of fluid behind the retina leads inexorably to detachment. Patients with a history of myopia, trauma, or prior cataract extraction are at greatest risk for retinal detachment. The diagnosis is confirmed by ophthalmoscopic examination of the dilated eye. Classic Migraine (See also Chap. 447) This usually occurs with a visual aura lasting about 20 min. 
In a typical attack, a small central disturbance in the field of vision marches toward the periphery, leaving a transient scotoma in its wake. The expanding border of migraine scotoma has a scintillating, dancing, or zigzag edge, resembling the bastions of a fortified city, hence the term fortification spectra. Patients’ descriptions of fortification spectra vary widely and can be confused with amaurosis fugax. Migraine patterns usually last longer and are perceived in both eyes, whereas amaurosis fugax is briefer and occurs in only one eye. Migraine phenomena also remain visible in the dark or with the eyes closed. Generally they are confined to either the right or the left visual hemifield, but sometimes both fields are involved simultaneously. Patients often have a long history of stereotypic attacks. After the visual symptoms recede, headache develops in most patients.

FIGURE 39-14 Retinal detachment appears as an elevated sheet of retinal tissue with folds. In this patient, the fovea was spared, so acuity was normal, but an inferior detachment produced a superior scotoma.

Transient Ischemic Attacks Vertebrobasilar insufficiency may result in acute homonymous visual symptoms. Many patients mistakenly describe symptoms in the left or right eye when in fact the symptoms are occurring in the left or right hemifield of both eyes. Interruption of blood supply to the visual cortex causes a sudden fogging or graying of vision, occasionally with flashing lights or other positive phenomena that mimic migraine. Cortical ischemic attacks are briefer in duration than migraine, occur in older patients, and are not followed by headache. There may be associated signs of brainstem ischemia, such as diplopia, vertigo, numbness, weakness, and dysarthria. Stroke Stroke occurs when interruption of blood supply from the posterior cerebral artery to the visual cortex is prolonged. 
The only finding on examination is a homonymous visual field defect that stops abruptly at the vertical meridian. Occipital lobe stroke usually is due to thrombotic occlusion of the vertebrobasilar system, embolus, or dissection. Lobar hemorrhage, tumor, abscess, and arteriovenous malformation are other common causes of hemianopic cortical visual loss. Factitious (Functional, Nonorganic) Visual Loss This is claimed by hysterics or malingerers. The latter account for the vast majority, seeking sympathy, special treatment, or financial gain by feigning loss of sight. The diagnosis is suspected when the history is atypical, physical findings are lacking or contradictory, inconsistencies emerge on testing, and a secondary motive can be identified. In our litigious society, the fraudulent pursuit of recompense has spawned an epidemic of factitious visual loss. CHRONIC VISUAL LOSS Cataract Cataract is a clouding of the lens sufficient to reduce vision. Most cataracts develop slowly as a result of aging, leading to gradual impairment of vision. The formation of cataract occurs more rapidly in patients with a history of ocular trauma, uveitis, or diabetes mellitus. Cataracts are acquired in a variety of genetic diseases, such as myotonic dystrophy, neurofibromatosis type 2, and galactosemia. Radiation therapy and glucocorticoid treatment can induce cataract as a side effect. The cataracts associated with radiation or glucocorticoids have a typical posterior subcapsular location. Cataract can be detected by noting an impaired red reflex when viewing light reflected from the fundus with an ophthalmoscope or by examining the dilated eye with the slit lamp. The only treatment for cataract is surgical extraction of the opacified lens. Millions of cataract operations are performed each year around the globe. The operation generally is done under local anesthesia on an outpatient basis. 
A plastic or silicone intraocular lens is placed within the empty lens capsule in the posterior chamber, substituting for the natural lens and leading to rapid recovery of sight. More than 95% of patients who undergo cataract extraction can expect an improvement in vision. In some patients, the lens capsule remaining in the eye after cataract extraction eventually turns cloudy, causing secondary loss of vision. A small opening, called a posterior capsulotomy, is made in the lens capsule with a laser to restore clarity.

Glaucoma Glaucoma is a slowly progressive, insidious optic neuropathy that usually is associated with chronic elevation of intraocular pressure. After cataract, it is the most common cause of blindness in the world. It is especially prevalent in people of African descent. The mechanism by which raised intraocular pressure injures the optic nerve is not understood. Axons entering the inferotemporal and superotemporal aspects of the optic disc are damaged first, producing typical nerve fiber bundle or arcuate scotomas on perimetric testing. As fibers are destroyed, the neural rim of the optic disc shrinks and the physiologic cup within the optic disc enlarges (Fig. 39-15). This process is referred to as pathologic "cupping." The cup-to-disc diameter is expressed as a fraction (e.g., 0.2). The cup-to-disc ratio ranges widely in normal individuals, making it difficult to diagnose glaucoma reliably simply by observing an unusually large or deep optic cup. Careful documentation of serial examinations is helpful.

FIGURE 39-15 Glaucoma results in "cupping" as the neural rim is destroyed and the central cup becomes enlarged and excavated. The cup-to-disc ratio is about 0.8 in this patient.

FIGURE 39-16 Age-related macular degeneration consisting of scattered yellow drusen in the macula (dry form) and a crescent of fresh hemorrhage temporal to the fovea from a subretinal neovascular membrane (wet form).

CHAPTER 39 Disorders of the Eye
In a patient with physiologic cupping the large cup remains stable, whereas in a patient with glaucoma it expands relentlessly over the years. Observation of progressive cupping and detection of an arcuate scotoma or a nasal step on computerized visual field testing is sufficient to establish the diagnosis of glaucoma. Optical coherence tomography reveals corresponding loss of fibers along the arcuate pathways in the nerve fiber layer. About 95% of patients with glaucoma have open anterior chamber angles. In most affected individuals the intraocular pressure is elevated. The cause of elevated intraocular pressure is unknown, but it is associated with gene mutations in the heritable forms. Surprisingly, a third of patients with open-angle glaucoma have an intraocular pressure within the normal range of 10–20 mmHg. For this so-called normal or low-tension form of glaucoma, high myopia is a risk factor. Chronic angle-closure glaucoma and chronic open-angle glaucoma are usually asymptomatic. Only acute angle-closure glaucoma causes a red or painful eye, from abrupt elevation of intraocular pressure. In all forms of glaucoma, foveal acuity is spared until end-stage disease is reached. For these reasons, severe and irreversible damage can occur before either the patient or the physician recognizes the diagnosis. Screening of patients for glaucoma by noting the cup-to-disc ratio on ophthalmoscopy and by measuring intraocular pressure is vital. Glaucoma is treated with topical adrenergic agonists, cholinergic agonists, beta blockers, and prostaglandin analogues. Occasionally, systemic absorption of beta blocker from eyedrops can be sufficient to cause side effects of bradycardia, hypotension, heart block, bronchospasm, or depression. Topical or oral carbonic anhydrase inhibitors are used to lower intraocular pressure by reducing aqueous production. Laser treatment of the trabecular meshwork in the anterior chamber angle improves aqueous outflow from the eye. 
If medical or laser treatments fail to halt optic nerve damage from glaucoma, a filter must be constructed surgically (trabeculectomy) or a drainage device placed to release aqueous from the eye in a controlled fashion. Macular Degeneration This is a major cause of gradual, painless, bilateral central visual loss in the elderly. It occurs in a nonexudative (dry) form and an exudative (wet) form. Inflammation may be important in both forms of macular degeneration; susceptibility is associated with variants in the gene for complement factor H, an inhibitor of the alternative complement pathway. The nonexudative process begins with the accumulation of extracellular deposits called drusen underneath the retinal pigment epithelium. On ophthalmoscopy, they are pleomorphic but generally appear as small discrete yellow lesions clustered in the macula (Fig. 39-16). With time they become larger, more numerous, and confluent. The retinal pigment epithelium becomes focally detached and atrophic, causing visual loss by interfering with photoreceptor function. Treatment with vitamins C and E, beta-carotene, and zinc may retard dry macular degeneration. Exudative macular degeneration, which develops in only a minority of patients, occurs when neovascular vessels from the choroid grow through defects in Bruch’s membrane and proliferate underneath the retinal pigment epithelium or the retina. Leakage from these vessels produces elevation of the retina, with distortion (metamorphopsia) and blurring of vision. Although the onset of these symptoms is usually gradual, bleeding from a subretinal choroidal neovascular membrane sometimes causes acute visual loss. Neovascular membranes can be difficult to see on fundus examination because they are located beneath the retina. Fluorescein angiography and optical coherence tomography, a technique for acquiring images of the retina in cross-section, are extremely useful for their detection. 
Major or repeated hemorrhage under the retina from neovascular membranes results in fibrosis, development of a round (disciform) macular scar, and permanent loss of central vision. A major therapeutic advance has occurred with the discovery that exudative macular degeneration can be treated with intraocular injection of antagonists to vascular endothelial growth factor. Bevacizumab, ranibizumab, or aflibercept is administered by direct injection into the vitreous cavity, beginning on a monthly basis. These agents cause the regression of neovascular membranes by blocking the action of vascular endothelial growth factor, thereby improving visual acuity.

Central Serous Chorioretinopathy This primarily affects males between the ages of 20 and 50 years. Leakage of serous fluid from the choroid causes small, localized detachment of the retinal pigment epithelium and the neurosensory retina. These detachments produce acute or chronic symptoms of metamorphopsia and blurred vision when the macula is involved. They are difficult to visualize with a direct ophthalmoscope because the detached retina is transparent and only slightly elevated. Optical coherence tomography shows fluid beneath the retina, and fluorescein angiography shows dye streaming into the subretinal space. The cause of central serous chorioretinopathy is unknown. Symptoms may resolve spontaneously if the retina reattaches, but recurrent detachment is common. Laser photocoagulation has benefited some patients with this condition.

FIGURE 39-17 Proliferative diabetic retinopathy in a 25-year-old man with an 18-year history of diabetes, showing neovascular vessels emanating from the optic disc, retinal and vitreous hemorrhage, cotton-wool spots, and macular exudate. Round spots in the periphery represent recently applied panretinal photocoagulation.
Diabetic Retinopathy A rare disease until 1921, when the discovery of insulin resulted in a dramatic improvement in life expectancy for patients with diabetes mellitus, diabetic retinopathy is now a leading cause of blindness in the United States. The retinopathy takes years to develop but eventually appears in nearly all cases. Regular surveillance of the dilated fundus is crucial for any patient with diabetes. In advanced diabetic retinopathy, the proliferation of neovascular vessels leads to blindness from vitreous hemorrhage, retinal detachment, and glaucoma (Fig. 39-17). These complications can be avoided in most patients by administration of panretinal laser photocoagulation at the appropriate point in the evolution of the disease. For further discussion of the manifestations and management of diabetic retinopathy, see Chaps. 417–419.

Retinitis Pigmentosa This is a general term for a disparate group of rod-cone dystrophies characterized by progressive night blindness, visual field constriction with a ring scotoma, loss of acuity, and an abnormal electroretinogram (ERG). It occurs sporadically or in an autosomal recessive, dominant, or X-linked pattern. Irregular black deposits of clumped pigment in the peripheral retina, called bone spicules because of their vague resemblance to the spicules of cancellous bone, give the disease its name (Fig. 39-18). The name is actually a misnomer because retinitis pigmentosa is not an inflammatory process. Most cases are due to a mutation in the gene for rhodopsin, the rod photopigment, or in the gene for peripherin, a glycoprotein located in photoreceptor outer segments. Vitamin A (15,000 IU/d) slightly retards the deterioration of the ERG in patients with retinitis pigmentosa but has no beneficial effect on visual acuity or fields.

FIGURE 39-18 Retinitis pigmentosa with black clumps of pigment known as "bone spicules." The patient had peripheral visual field loss with sparing of central (macular) vision.
Leber’s congenital amaurosis, a rare cone dystrophy, has been treated by replacement of the missing RPE65 protein through gene therapy, resulting in modest improvement in visual function. Some forms of retinitis pigmentosa occur in association with rare, hereditary systemic diseases (olivopontocerebellar degeneration, Bassen-Kornzweig disease, Kearns-Sayre syndrome, Refsum’s disease). Chronic treatment with chloroquine, hydroxychloroquine, and phenothiazines (especially thioridazine) can produce visual loss from a toxic retinopathy that resembles retinitis pigmentosa. Epiretinal Membrane This is a fibrocellular tissue that grows across the inner surface of the retina, causing metamorphopsia and reduced visual acuity from distortion of the macula. A crinkled, cellophane-like membrane is visible on the retinal examination. Epiretinal membrane is most common in patients over 50 years of age and is usually unilateral. Most cases are idiopathic, but some occur as a result of hypertensive retinopathy, diabetes, retinal detachment, or trauma. When visual acuity is reduced to the level of about 6/24 (20/80), vitrectomy and surgical peeling of the membrane to relieve macular puckering are recommended. Contraction of an epiretinal membrane sometimes gives rise to a macular hole. Most macular holes, however, are caused by local vitreous traction within the fovea. Vitrectomy can improve acuity in selected cases. Melanoma and Other Tumors Melanoma is the most common primary tumor of the eye (Fig. 39-19). It causes photopsia, an enlarging scotoma, and loss of vision. A small melanoma is often difficult to differentiate from a benign choroidal nevus. Serial examinations are required to document a malignant pattern of growth. Treatment of melanoma is controversial. Options include enucleation, local resection, and irradiation. Metastatic tumors to the eye outnumber primary tumors. Breast and lung carcinomas have a special propensity to spread to the choroid or iris. 
Leukemia and lymphoma also commonly invade ocular tissues. Sometimes their only sign on eye examination is cellular debris in the vitreous, which can masquerade as a chronic posterior uveitis. Retrobulbar tumor of the optic nerve (meningioma, glioma) or chiasmal tumor (pituitary adenoma, meningioma) produces gradual visual loss with few objective findings except for optic disc pallor. Rarely, sudden expansion of a pituitary adenoma from infarction and bleeding (pituitary apoplexy) causes acute retrobulbar visual loss, with headache, nausea, and ocular motor nerve palsies. In any patient with visual field loss or optic atrophy, CT or MR scanning should be considered if the cause remains unknown after careful review of the history and thorough examination of the eye.

FIGURE 39-19 Melanoma of the choroid, appearing as an elevated dark mass in the inferior fundus, with overlying hemorrhage. The black line denotes the plane of the optical coherence tomography scan (below) showing the subretinal tumor.

PROPTOSIS

When the globes appear asymmetric, the clinician must first decide which eye is abnormal. Is one eye recessed within the orbit (enophthalmos), or is the other eye protuberant (exophthalmos, or proptosis)? A small globe or a Horner's syndrome can give the appearance of enophthalmos. True enophthalmos occurs commonly after trauma, from atrophy of retrobulbar fat, or from fracture of the orbital floor. The position of the eyes within the orbits is measured by using a Hertel exophthalmometer, a handheld instrument that records the position of the anterior corneal surface relative to the lateral orbital rim. If this instrument is not available, relative eye position can be judged by bending the patient's head forward and looking down upon the orbits. A proptosis of only 2 mm in one eye is detectable from this perspective. The development of proptosis implies a space-occupying lesion in the orbit and usually warrants CT or MR imaging.
graves’ Ophthalmopathy This is the leading cause of proptosis in adults (Chap. 405). The proptosis is often asymmetric and can even appear to be unilateral. Orbital inflammation and engorgement of the extra-ocular muscles, particularly the medial rectus and the inferior rectus, account for the protrusion of the globe. Corneal exposure, lid retraction, conjunctival injection, restriction of gaze, diplopia, and visual loss from optic nerve compression are cardinal symptoms. Graves’ eye disease is a clinical diagnosis, but laboratory testing can be useful. The serum level of thyroid-stimulating immunoglobulins is often elevated. Orbital imaging usually reveals enlarged extraocular eye muscles, but not always. Graves’ ophthalmopathy can be treated with oral prednisone (60 mg/d) for 1 month, followed by a taper over several months. Worsening of symptoms upon glucocorticoid withdrawal is common. Topical lubricants, taping the eyelids closed at night, moisture chambers, and eyelid surgery are helpful to limit exposure of ocular tissues. Radiation therapy is not effective. Orbital decompression should be performed for severe, symptomatic exophthalmos or if visual function is reduced by optic nerve compression. In patients with diplopia, prisms or eye muscle surgery can be used to restore ocular alignment in primary gaze. Orbital Pseudotumor This is an idiopathic, inflammatory orbital syndrome that is distinguished from Graves’ ophthalmopathy by the prominent complaint of pain. Other symptoms include diplopia, ptosis, proptosis, and orbital congestion. Evaluation for sarcoidosis, granulomatosis with polyangiitis, and other types of orbital vasculitis or collagen-vascular disease is negative. Imaging often shows swollen eye muscles (orbital myositis) with enlarged tendons. By contrast, in Graves’ ophthalmopathy, the tendons of the eye muscles usually are spared. The Tolosa-Hunt syndrome (Chap. 
455) may be regarded as an extension of orbital pseudotumor through the superior orbital fissure into the cavernous sinus. The diagnosis of orbital pseudotumor is difficult. Biopsy of the orbit frequently yields nonspecific evidence of fat infiltration by lymphocytes, plasma cells, and eosinophils. A dramatic response to a therapeutic trial of systemic glucocorticoids indirectly provides the best confirmation of the diagnosis.

Orbital Cellulitis This causes pain, lid erythema, proptosis, conjunctival chemosis, restricted motility, decreased acuity, afferent pupillary defect, fever, and leukocytosis. It often arises from the paranasal sinuses, especially by contiguous spread of infection from the ethmoid sinus through the lamina papyracea of the medial orbit. A history of recent upper respiratory tract infection, chronic sinusitis, thick mucus secretions, or dental disease is significant in any patient with suspected orbital cellulitis. Blood cultures should be obtained, but they are usually negative. Most patients respond to empirical therapy with broad-spectrum IV antibiotics. Occasionally, orbital cellulitis follows an overwhelming course, with massive proptosis, blindness, septic cavernous sinus thrombosis, and meningitis. To avert this disaster, orbital cellulitis should be managed aggressively in the early stages, with immediate imaging of the orbits and antibiotic therapy that includes coverage of methicillin-resistant Staphylococcus aureus (MRSA). Prompt surgical drainage of an orbital abscess or paranasal sinusitis is indicated if optic nerve function deteriorates despite antibiotics.

Tumors Tumors of the orbit cause painless, progressive proptosis. The most common primary tumors are cavernous hemangioma, lymphangioma, neurofibroma, schwannoma, dermoid cyst, adenoid cystic carcinoma, optic nerve glioma, optic nerve meningioma, and benign mixed tumor of the lacrimal gland.
Metastatic tumor to the orbit occurs frequently in breast carcinoma, lung carcinoma, and lymphoma. Diagnosis by fine-needle aspiration followed by urgent radiation therapy sometimes can preserve vision. Carotid Cavernous Fistulas With anterior drainage through the orbit, these fistulas produce proptosis, diplopia, glaucoma, and corkscrew, arterialized conjunctival vessels. Direct fistulas usually result from trauma. They are easily diagnosed because of the prominent signs produced by high-flow, high-pressure shunting. Indirect fistulas, or dural arteriovenous malformations, are more likely to occur spontaneously, especially in older women. The signs are more subtle, and the diagnosis frequently is missed. The combination of slight proptosis, diplopia, enlarged muscles, and an injected eye often is mistaken for thyroid ophthalmopathy. A bruit heard upon auscultation of the head or reported by the patient is a valuable diagnostic clue. Imaging shows an enlarged superior ophthalmic vein in the orbits. Carotid cavernous shunts can be eliminated by intravascular embolization. PTOSIS Blepharoptosis This is an abnormal drooping of the eyelid. Unilateral or bilateral ptosis can be congenital, from dysgenesis of the levator palpebrae superioris, or from abnormal insertion of its aponeurosis into the eyelid. Acquired ptosis can develop so gradually that the patient is unaware of the problem. Inspection of old photographs is helpful in dating the onset. A history of prior trauma, eye surgery, contact lens use, diplopia, systemic symptoms (e.g., dysphagia or peripheral muscle weakness), or a family history of ptosis should be sought. Fluctuating ptosis that worsens late in the day is typical of myasthenia gravis. Examination should focus on evidence for proptosis, eyelid masses or deformities, inflammation, pupil inequality, or limitation of motility. The width of the palpebral fissures is measured in primary gaze to determine the degree of ptosis. 
The ptosis will be underestimated if the patient compensates by lifting the brow with the frontalis muscle.

Mechanical Ptosis This occurs in many elderly patients from stretching and redundancy of eyelid skin and subcutaneous fat (dermatochalasis). The extra weight of these sagging tissues causes the lid to droop. Enlargement or deformation of the eyelid from infection, tumor, trauma, or inflammation also results in ptosis on a purely mechanical basis.

Aponeurotic Ptosis This is an acquired dehiscence or stretching of the aponeurotic tendon, which connects the levator muscle to the tarsal plate of the eyelid. It occurs commonly in older patients, presumably from loss of connective tissue elasticity. Aponeurotic ptosis is also a common sequela of eyelid swelling from infection or blunt trauma to the orbit, cataract surgery, or contact lens use.

Myogenic Ptosis The causes of myogenic ptosis include myasthenia gravis (Chap. 461) and a number of rare myopathies that manifest with ptosis. The term chronic progressive external ophthalmoplegia refers to a spectrum of systemic diseases caused by mutations of mitochondrial DNA. As the name implies, the most prominent findings are symmetric, slowly progressive ptosis and limitation of eye movements. In general, diplopia is a late symptom because all eye movements are reduced equally. In the Kearns-Sayre variant, retinal pigmentary changes and abnormalities of cardiac conduction develop. Peripheral muscle biopsy shows characteristic "ragged-red fibers." Oculopharyngeal dystrophy is a distinct autosomal dominant disease with onset in middle age, characterized by ptosis, limited eye movements, and trouble swallowing. Myotonic dystrophy, another autosomal dominant disorder, causes ptosis, ophthalmoparesis, cataract, and pigmentary retinopathy. Patients have muscle wasting, myotonia, frontal balding, and cardiac abnormalities.
Neurogenic Ptosis This results from a lesion affecting the innervation to either of the two muscles that open the eyelid: Müller's muscle or the levator palpebrae superioris. Examination of the pupil helps distinguish between these two possibilities. In Horner's syndrome, the eye with ptosis has a smaller pupil and the eye movements are full. In an oculomotor nerve palsy, the eye with the ptosis has a larger or a normal pupil. If the pupil is normal but there is limitation of adduction, elevation, and depression, a pupil-sparing oculomotor nerve palsy is likely (see next section). Rarely, a lesion affecting the small, central subnucleus of the oculomotor complex will cause bilateral ptosis with normal eye movements and pupils.

DOUBLE VISION (DIPLOPIA)

The first point to clarify is whether diplopia persists in either eye after the opposite eye is covered. If it does, the diagnosis is monocular diplopia. The cause is usually intrinsic to the eye and therefore has no dire implications for the patient. Corneal aberrations (e.g., keratoconus, pterygium), uncorrected refractive error, cataract, or foveal traction may give rise to monocular diplopia. Occasionally it is a symptom of malingering or psychiatric disease. Diplopia alleviated by covering one eye is binocular diplopia and is caused by disruption of ocular alignment. Inquiry should be made into the nature of the double vision (purely side-by-side versus partial vertical displacement of images), mode of onset, duration, intermittency, diurnal variation, and associated neurologic or systemic symptoms. If the patient has diplopia while being examined, motility testing should reveal a deficiency corresponding to the patient's symptoms. However, subtle limitation of ocular excursions is often difficult to detect. For example, a patient with a slight left abducens nerve paresis may appear to have full eye movements despite a complaint of horizontal diplopia upon looking to the left.
In this situation, the cover test provides a more sensitive method for demonstrating the ocular misalignment. It should be conducted in primary gaze and then with the head turned and tilted in each direction. In the above example, a cover test with the head turned to the right will maximize the fixation shift evoked by the cover test. Occasionally, a cover test performed in an asymptomatic patient during a routine examination will reveal an ocular deviation. If the eye movements are full and the ocular misalignment is equal in all directions of gaze (concomitant deviation), the diagnosis is strabismus. In this condition, which affects about 1% of the population, fusion is disrupted in infancy or early childhood. To avoid diplopia, vision is suppressed from the nonfixating eye. In some children, this leads to impaired vision (amblyopia, or "lazy" eye) in the deviated eye.

Binocular diplopia results from a wide range of processes: infectious, neoplastic, metabolic, degenerative, inflammatory, and vascular. One must decide whether the diplopia is neurogenic in origin or is due to restriction of globe rotation by local disease in the orbit. Orbital pseudotumor, myositis, infection, tumor, thyroid disease, and muscle entrapment (e.g., from a blowout fracture) cause restrictive diplopia. The diagnosis of restriction is usually made by recognizing other associated signs and symptoms of local orbital disease. Omission of high-resolution orbital imaging is a common mistake in the evaluation of diplopia.

Myasthenia Gravis (See also Chap. 461) This is a major cause of diplopia. The diplopia is often intermittent, variable, and not confined to any single ocular motor nerve distribution. The pupils are always normal. Fluctuating ptosis may be present. Many patients have a purely ocular form of the disease, with no evidence of systemic muscular weakness.
The diagnosis can be confirmed by an IV edrophonium injection, which produces a transient reversal of eyelid or eye muscle weakness. Blood tests for antibodies against the acetylcholine receptor or the MuSK protein can establish the diagnosis but are frequently negative in the purely ocular form of myasthenia gravis. Botulism from food or wound poisoning can mimic ocular myasthenia. After restrictive orbital disease and myasthenia gravis are excluded, a lesion of a cranial nerve supplying innervation to the extraocular muscles is the most likely cause of binocular diplopia.

Oculomotor Nerve The third cranial nerve innervates the medial, inferior, and superior recti; inferior oblique; levator palpebrae superioris; and the iris sphincter. Total palsy of the oculomotor nerve causes ptosis and a dilated pupil and leaves the eye "down and out" because of the unopposed action of the lateral rectus and superior oblique. This combination of findings is obvious. More challenging is the diagnosis of early or partial oculomotor nerve palsy. In this setting any combination of ptosis, pupil dilation, and weakness of the eye muscles supplied by the oculomotor nerve may be encountered. Frequent serial examinations during the evolving phase of the palsy help ensure that the diagnosis is not missed. The advent of an oculomotor nerve palsy with pupil involvement, especially when accompanied by pain, suggests a compressive lesion, such as a tumor or circle of Willis aneurysm. Neuroimaging should be obtained, along with a CT or MR angiogram. Occasionally, a catheter arteriogram must be done to exclude an aneurysm. A lesion of the oculomotor nucleus in the rostral midbrain produces signs that differ from those caused by a lesion of the nerve itself. There is bilateral ptosis because the levator muscle is innervated by a single central subnucleus.
There is also weakness of the contralateral superior rectus, because it is supplied by the oculomotor nucleus on the other side. Occasionally both superior recti are weak. Isolated nuclear oculomotor palsy is rare. Usually neurologic examination reveals additional signs that suggest brainstem damage from infarction, hemorrhage, tumor, or infection. Injury to structures surrounding fascicles of the oculomotor nerve descending through the midbrain has given rise to a number of classic eponymic designations. In Nothnagel's syndrome, injury to the superior cerebellar peduncle causes ipsilateral oculomotor palsy and contralateral cerebellar ataxia. In Benedikt's syndrome, injury to the red nucleus results in ipsilateral oculomotor palsy and contralateral tremor, chorea, and athetosis. Claude's syndrome incorporates features of both of these syndromes, by injury to both the red nucleus and the superior cerebellar peduncle. Finally, in Weber's syndrome, injury to the cerebral peduncle causes ipsilateral oculomotor palsy with contralateral hemiparesis.

In the subarachnoid space the oculomotor nerve is vulnerable to aneurysm, meningitis, tumor, infarction, and compression. In cerebral herniation, the nerve becomes trapped between the edge of the tentorium and the uncus of the temporal lobe. Oculomotor palsy also can result from midbrain torsion and hemorrhages during herniation. In the cavernous sinus, oculomotor palsy arises from carotid aneurysm, carotid cavernous fistula, cavernous sinus thrombosis, tumor (pituitary adenoma, meningioma, metastasis), herpes zoster infection, and the Tolosa-Hunt syndrome. The etiology of an isolated, pupil-sparing oculomotor palsy often remains an enigma even after neuroimaging and extensive laboratory testing. Most cases are thought to result from microvascular infarction of the nerve somewhere along its course from the brainstem to the orbit. Usually the patient complains of pain.
Diabetes, hypertension, and vascular disease are major risk factors. Spontaneous recovery over a period of months is the rule. If this fails to occur or if new findings develop, the diagnosis of microvascular oculomotor nerve palsy should be reconsidered. Aberrant regeneration is common when the oculomotor nerve is injured by trauma or compression (tumor, aneurysm). Miswiring of sprouting fibers to the levator muscle and the rectus muscles results in elevation of the eyelid upon downgaze or adduction. The pupil also constricts upon attempted adduction, elevation, or depression of the globe. Aberrant regeneration is not seen after oculomotor palsy from microvascular infarct; hence its presence vitiates that diagnosis.

Trochlear Nerve The fourth cranial nerve originates in the midbrain, just caudal to the oculomotor nerve complex. Fibers exit the brainstem dorsally and cross to innervate the contralateral superior oblique. The principal actions of this muscle are to depress and intort the globe. A palsy therefore results in hypertropia and excyclotorsion. The cyclotorsion seldom is noticed by patients. Instead, they complain of vertical diplopia, especially upon reading or looking down. The vertical diplopia also is exacerbated by tilting the head toward the side with the muscle palsy and alleviated by tilting it away. This "head tilt test" is a cardinal diagnostic feature. Isolated trochlear nerve palsy results from all the causes listed above for the oculomotor nerve except aneurysm. The trochlear nerve is particularly apt to suffer injury after closed head trauma. The free edge of the tentorium is thought to impinge on the nerve during a concussive blow. Most isolated trochlear nerve palsies are idiopathic and hence are diagnosed by exclusion as "microvascular." Spontaneous improvement occurs over a period of months in most patients.
A base-down prism (conveniently applied to the patient’s glasses as a stick-on Fresnel lens) may serve as a temporary measure to alleviate diplopia. If the palsy does not resolve, the eyes can be realigned by weakening the inferior oblique muscle. Abducens Nerve The sixth cranial nerve innervates the lateral rectus muscle. A palsy produces horizontal diplopia, worse on gaze to the side of the lesion. A nuclear lesion has different consequences, because the abducens nucleus contains interneurons that project via the medial longitudinal fasciculus to the medial rectus subnucleus of the contra-lateral oculomotor complex. Therefore, an abducens nuclear lesion produces a complete lateral gaze palsy from weakness of both the ipsilateral lateral rectus and the contralateral medial rectus. Foville’s syndrome after dorsal pontine injury includes lateral gaze palsy, ipsilateral facial palsy, and contralateral hemiparesis incurred by damage to descending corticospinal fibers. Millard-Gubler syndrome from ventral pontine injury is similar except for the eye findings. There is lateral rectus weakness only, instead of gaze palsy, because the abducens fascicle is injured rather than the nucleus. Infarct, tumor, hemorrhage, vascular malformation, and multiple sclerosis are the most common etiologies of brainstem abducens palsy. After leaving the ventral pons, the abducens nerve runs forward along the clivus to pierce the dura at the petrous apex, where it enters the cavernous sinus. Along its subarachnoid course it is susceptible to meningitis, tumor (meningioma, chordoma, carcinomatous meningitis), subarachnoid hemorrhage, trauma, and compression by aneurysm or dolichoectatic vessels. At the petrous apex, mastoiditis can produce deafness, pain, and ipsilateral abducens palsy (Gradenigo’s syndrome). 
In the cavernous sinus, the nerve can be affected by carotid aneurysm, carotid cavernous fistula, tumor (pituitary adenoma, meningioma, nasopharyngeal carcinoma), herpes infection, and Tolosa-Hunt syndrome. Unilateral or bilateral abducens palsy is a classic sign of raised intracranial pressure. The diagnosis can be confirmed if papilledema is observed on fundus examination. The mechanism is still debated but probably is related to rostral-caudal displacement of the brainstem. The same phenomenon accounts for abducens palsy from Chiari malformation or low intracranial pressure (e.g., after lumbar puncture, spinal anesthesia, or spontaneous dural cerebrospinal fluid leak). Treatment of abducens palsy is aimed at prompt correction of the underlying cause. However, the cause remains obscure in many instances despite diligent evaluation. As was mentioned above for isolated trochlear or oculomotor palsy, most cases are assumed to represent microvascular infarcts because they often occur in the setting of diabetes or other vascular risk factors. Some cases may develop as a postinfectious mononeuritis (e.g., after a viral flu). Patching one eye, occluding one eyeglass lens with tape, or applying a temporary prism will provide relief of diplopia until the palsy resolves. If recovery is incomplete, eye muscle surgery nearly always can realign the eyes, at least in primary position. A patient with an abducens palsy that fails to improve should be reevaluated for an occult etiology (e.g., chordoma, carcinomatous meningitis, carotid cavernous fistula, myasthenia gravis). Skull base tumors are easily missed even on contrast-enhanced neuroimaging studies. Multiple Ocular Motor Nerve Palsies These should not be attributed to spontaneous microvascular events affecting more than one cranial nerve at a time. 
This remarkable coincidence does occur, especially in diabetic patients, but the diagnosis is made only in retrospect after all other diagnostic alternatives have been exhausted. Neuroimaging should focus on the cavernous sinus, superior orbital fissure, and orbital apex, where all three ocular motor nerves are in close proximity. In a diabetic or immunocompromised host, fungal infection (Aspergillus, Mucorales, Cryptococcus) is a common cause of multiple nerve palsies. In a patient with systemic malignancy, carcinomatous meningitis is a likely diagnosis. Cytologic examination may be negative despite repeated sampling of the cerebrospinal fluid. The cancer-associated Lambert-Eaton myasthenic syndrome also can produce ophthalmoplegia. Giant cell (temporal) arteritis occasionally manifests as diplopia from ischemic palsies of extraocular muscles. Fisher’s syndrome, an ocular variant of Guillain-Barré, produces ophthalmoplegia with areflexia and ataxia. Often the ataxia is mild, and the reflexes are normal. Antiganglioside antibodies (GQ1b) can be detected in about 50% of cases. Supranuclear Disorders of Gaze These are often mistaken for multiple ocular motor nerve palsies. For example, Wernicke’s encephalopathy can produce nystagmus and a partial deficit of horizontal and vertical gaze that mimics a combined abducens and oculomotor nerve palsy. The disorder occurs in malnourished or alcoholic patients and can be reversed by thiamine. Infarct, hemorrhage, tumor, multiple sclerosis, encephalitis, vasculitis, and Whipple’s disease are other important causes of supranuclear gaze palsy. Disorders of vertical gaze, especially downward saccades, are an early feature of progressive supranuclear palsy. Smooth pursuit is affected later in the course of the disease. Parkinson’s disease, Huntington’s disease, and olivopontocerebellar degeneration also can affect vertical gaze. 
The frontal eye field of the cerebral cortex is involved in generation of saccades to the contralateral side. After hemispheric stroke, the eyes usually deviate toward the lesioned side because of the unopposed action of the frontal eye field in the normal hemisphere. With time, this deficit resolves. Seizures generally have the opposite effect: the eyes deviate conjugately away from the irritative focus. Parietal lesions disrupt smooth pursuit of targets moving toward the side of the lesion. Bilateral parietal lesions produce Bálint’s syndrome, which is characterized by impaired eye-hand coordination (optic ataxia), difficulty initiating voluntary eye movements (ocular apraxia), and visuospatial disorientation (simultanagnosia). Horizontal Gaze Descending cortical inputs mediating horizontal gaze ultimately converge at the level of the pons. Neurons in the paramedian pontine reticular formation are responsible for controlling conjugate gaze toward the same side. They project directly to the ipsilateral abducens nucleus. A lesion of either the paramedian pontine reticular formation or the abducens nucleus causes an ipsilateral conjugate gaze palsy. Lesions at either locus produce nearly identical clinical syndromes, with the following exception: vestibular stimulation (oculocephalic maneuver or caloric irrigation) will succeed in driving the eyes conjugately to the side in a patient with a lesion of the paramedian pontine reticular formation but not in a patient with a lesion of the abducens nucleus. INTERNUCLEAR OPHTHALMOPLEGIA This results from damage to the medial longitudinal fasciculus ascending from the abducens nucleus in the pons to the oculomotor nucleus in the midbrain (hence, “internuclear”). Damage to fibers carrying the conjugate signal from abducens interneurons to the contralateral medial rectus motoneurons results in a failure of adduction on attempted lateral gaze. 
For example, a patient with a left internuclear ophthalmoplegia (INO) will have slowed or absent adducting movements of the left eye (Fig. 39-20). A patient with bilateral injury to the medial longitudinal fasciculus will have bilateral INO. Multiple sclerosis is the most common cause, although tumor, stroke, trauma, or any brainstem process may be responsible. One-and-a-half syndrome is due to a combined lesion of the medial longitudinal fasciculus and the abducens nucleus on the same side. The patient’s only horizontal eye movement is abduction of the eye on the other side. 
CHAPTER 39 Disorders of the Eye PART 2 Cardinal Manifestations and Presentation of Diseases 
FIGURE 39-20 Left internuclear ophthalmoplegia (INO). A. In primary position of gaze, the eyes appear normal. B. Horizontal gaze to the left is intact. C. On attempted horizontal gaze to the right, the left eye fails to adduct. In mildly affected patients, the eye may adduct partially or more slowly than normal. Nystagmus is usually present in the abducted eye. D. T2-weighted axial magnetic resonance image through the pons showing a demyelinating plaque in the left medial longitudinal fasciculus (arrow). 
Vertical Gaze This is controlled at the level of the midbrain. The neuronal circuits affected in disorders of vertical gaze are not fully elucidated, but lesions of the rostral interstitial nucleus of the medial longitudinal fasciculus and the interstitial nucleus of Cajal cause supranuclear paresis of upgaze, downgaze, or all vertical eye movements. Distal basilar artery ischemia is the most common etiology. Skew deviation refers to a vertical misalignment of the eyes, usually constant in all positions of gaze. The finding has poor localizing value because skew deviation has been reported after lesions in widespread regions of the brainstem and cerebellum. 
PARINAUD’S SYNDROME Also known as dorsal midbrain syndrome, this is a distinct supranuclear vertical gaze disorder caused by damage to the posterior commissure. It is a classic sign of hydrocephalus from aqueductal stenosis. Pineal region or midbrain tumors, cysticercosis, and stroke also cause Parinaud’s syndrome. Features include loss of upgaze (and sometimes downgaze), convergence-retraction nystagmus on attempted upgaze, downward ocular deviation (“setting sun” sign), lid retraction (Collier’s sign), skew deviation, pseudoabducens palsy, and light-near dissociation of the pupils. Nystagmus This is a rhythmic oscillation of the eyes, occurring physiologically from vestibular and optokinetic stimulation or pathologically in a wide variety of diseases (Chap. 28). Abnormalities of the eyes or optic nerves, present at birth or acquired in childhood, can produce a complex, searching nystagmus with irregular pendular (sinusoidal) and jerk features. Examples are albinism, Leber’s congenital amaurosis, and bilateral cataract. This nystagmus is commonly referred to as congenital sensory nystagmus. This is a poor term because even in children with congenital lesions, the nystagmus does not appear until weeks after birth. Congenital motor nystagmus, which looks similar to congenital sensory nystagmus, develops in the absence of any abnormality of the sensory visual system. Visual acuity also is reduced in congenital motor nystagmus, probably by the nystagmus itself, but seldom below a level of 20/200. JERK NYSTAGMUS This is characterized by a slow drift off the target, followed by a fast corrective saccade. By convention, the nystagmus is named after the quick phase. Jerk nystagmus can be downbeat, upbeat, horizontal (left or right), and torsional. The pattern of nystagmus may vary with gaze position. Some patients will be oblivious to their nystagmus. 
Others will complain of blurred vision or a subjective to-and-fro movement of the environment (oscillopsia) corresponding to the nystagmus. Fine nystagmus may be difficult to see on gross examination of the eyes. Observation of nystagmoid movements of the optic disc on ophthalmoscopy is a sensitive way to detect subtle nystagmus. GAZE-EVOKED NYSTAGMUS This is the most common form of jerk nystagmus. When the eyes are held eccentrically in the orbits, they have a natural tendency to drift back to primary position. The subject compensates by making a corrective saccade to maintain the deviated eye position. Many normal patients have mild gaze-evoked nystagmus. Exaggerated gaze-evoked nystagmus can be induced by drugs (sedatives, anticonvulsants, alcohol); muscle paresis; myasthenia gravis; demyelinating disease; and cerebellopontine angle, brainstem, and cerebellar lesions. VESTIBULAR NYSTAGMUS Vestibular nystagmus results from dysfunction of the labyrinth (Ménière’s disease), vestibular nerve, or vestibular nucleus in the brainstem. Peripheral vestibular nystagmus often occurs in discrete attacks, with symptoms of nausea and vertigo. There may be associated tinnitus and hearing loss. Sudden shifts in head position may provoke or exacerbate symptoms. DOWNBEAT NYSTAGMUS Downbeat nystagmus results from lesions near the craniocervical junction (Chiari malformation, basilar invagination). It also has been reported in brainstem or cerebellar stroke, lithium or anticonvulsant intoxication, alcoholism, and multiple sclerosis. Upbeat nystagmus is associated with damage to the pontine tegmentum from stroke, demyelination, or tumor. Opsoclonus This rare, dramatic disorder of eye movements consists of bursts of consecutive saccades (saccadomania). When the saccades are confined to the horizontal plane, the term ocular flutter is preferred. It can result from viral encephalitis, trauma, or a paraneoplastic effect of neuroblastoma, breast carcinoma, and other malignancies. 
It has also been reported as a benign, transient phenomenon in otherwise healthy patients. 
40e Use of the Hand-Held Ophthalmoscope 
Homayoun Tabandeh, Morton F. Goldberg 
Examination of the living human retina provides a unique opportunity for the direct study of nervous, vascular, and connective tissues. Many systemic disorders have retinal manifestations that are valuable for screening, diagnosis, and management of these conditions. Furthermore, retinal involvement in systemic disorders, such as diabetes mellitus, is a major cause of morbidity. Early recognition by ophthalmoscopic screening is a key factor in effective treatment. Ophthalmoscopy has the potential to be one of the most “high-yield” elements of the physical examination. Effective ophthalmoscopy requires a basic understanding of ocular structures and ophthalmoscopic techniques and recognition of abnormal findings. The eye consists of a shell (cornea and sclera), lens, iris diaphragm, ciliary body, choroid, and retina. The anterior chamber is the space between the cornea and the lens, and it is filled with aqueous humor. The space between the posterior aspect of the lens and the retina is filled by vitreous gel. The choroid and the retina cover the posterior two-thirds of the sclera internally. The cornea and the lens form the focusing system of the eye, while the retina functions as the photoreceptor system, translating light to neuronal signals that are in turn transmitted to the brain via the optic nerve and visual pathways. The choroid is a layer of highly vascularized tissue that nourishes the retina and is located between the sclera and the retina. The retinal pigment epithelium (RPE) layer is a monolayer of pigmented cells that are adherent to the overlying retinal photoreceptor cells. RPE plays a major role in retinal photoreceptor metabolism. 
The important areas that are visible by ophthalmoscopy include the macula, optic disc, retinal blood vessels, and retinal periphery (Fig. 40e-1). The macula is the central part of the retina and is responsible for detailed vision (acuity) and perception of color. The macula is defined clinically as the area of the retina centered on the posterior pole of the fundus, measuring about 5 disc diameters (DD) (7–8 mm) and bordered by the optic disc nasally and the temporal vascular arcades superiorly and inferiorly. Temporally, the macula extends for about 2.5 DD from its center. The fovea, in the central part of the macula, corresponds to the site of sharpest visual acuity. It is approximately 1 DD in size and appears darker in color than the surrounding area. The center of the fovea, the foveola, has a depressed pit-like configuration measuring about 350 μm. The optic disc measures about 1.5 mm and is located about 4 mm (2.5 DD) nasal to the fovea. It contains the central retinal artery and vein as they branch, a central excavation (cup), and a peripheral neural rim. Normally, the cup-to-disc ratio is less than 0.6. The cup is located temporal to the entry of the disc vessels. The normal optic disc is yellow/pink in color. It has clear and well-defined margins and is in the same plane as the retina (Fig. 40e-2). Pathologic findings include pallor (atrophy), swelling, and enlarged cupping. The equator of the fundus is clinically defined as the area that includes the internal opening of the vortex veins. The peripheral retina extends from the equator anteriorly to the ora serrata. 
FIGURE 40e-1 Diagram showing the landmarks of the normal fundus. The macula is bounded by the superior and inferior vascular arcades and extends for 5 disc diameters (DD) temporal to the optic disc (optic nerve head). The central part of the macula (fovea) is located 2.5 DD temporal to the optic disc. 
(Drawing courtesy of Juan R. Garcia. Used with permission from Johns Hopkins University.) 
The peripheral fundus is arbitrarily defined as the area extending anteriorly from the opening of the vortex veins to the ora serrata (the juncture between the retina and ciliary body). 
FIGURE 40e-2 Photograph of a normal left optic disc illustrating branching of the central retinal vein and artery, a physiologic cup, surface capillaries, and distinct margin. The cup is located temporal to the entry of the disc vessels. (From H Tabandeh, MF Goldberg: Retina in Systemic Disease: A Color Manual of Ophthalmoscopy. New York, Thieme, 2009.) 
There are a number of ways to visualize the retina, including direct ophthalmoscopy, binocular indirect ophthalmoscopy, and slit-lamp biomicroscopy. Most nonophthalmologists prefer direct ophthalmoscopy, performed with a hand-held ophthalmoscope, because the technique is simple to master and the device is very portable. Ophthalmologists often use slit-lamp biomicroscopy and indirect ophthalmoscopy to obtain a more extensive view of the fundus. Direct ophthalmoscopes are simple hand-held devices that include a small light source for illumination, a viewing aperture through which the examiner looks at the retina, and a lens dial used for correction of the examiner’s and the patient’s refractive errors. A more recent design, the PanOptic ophthalmoscope, provides a wider field of view. How to Use a Direct Ophthalmoscope Good alignment is the key. The goal is to align the examiner’s eye with the viewing aperture of the ophthalmoscope, the patient’s pupil, and the area of interest on the retina. Both the patient and the examiner should be in a comfortable position (sitting or lying for the patient, sitting or standing for the examiner). Dilating the pupil and dimming the room lights make the examination easier. Steps for performing direct ophthalmoscopy are summarized in Table 40e-1. 
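The landmark distances above are quoted interchangeably in disc diameters (DD) and millimetres. A minimal sketch of the arithmetic, assuming 1 DD ≈ 1.5 mm (the approximate optic disc size given in the text); the helper names `dd_to_mm` and `cup_to_disc_ratio` are illustrative, not from the chapter:

```python
# Illustrative sketch (not from the chapter): checking the fundus
# measurements quoted above against the approximation 1 DD ~ 1.5 mm.
DD_MM = 1.5  # assumed millimetres per disc diameter (the quoted disc size)

def dd_to_mm(dd):
    """Convert a distance expressed in disc diameters to millimetres."""
    return dd * DD_MM

def cup_to_disc_ratio(cup_mm, disc_mm):
    """Cup-to-disc ratio; the text gives < 0.6 as the normal range."""
    return cup_mm / disc_mm

# The macula spans ~5 DD, quoted as 7-8 mm:
print(dd_to_mm(5))       # 7.5 mm, within the quoted range
# The optic disc lies ~2.5 DD (quoted ~4 mm) nasal to the fovea:
print(dd_to_mm(2.5))     # 3.75 mm, close to the quoted ~4 mm
# A hypothetical 0.9 mm cup in a 1.5 mm disc sits at the 0.6 upper limit:
print(cup_to_disc_ratio(0.9, 1.5))
```

The small mismatch (3.75 vs. ~4 mm) simply reflects that 1 DD is an approximate unit of about 1.5-1.6 mm, not a fixed constant.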
The PanOptic ophthalmoscope is a type of direct ophthalmoscope that is designed to provide a wider view of the fundus and has slightly more magnification than the standard direct ophthalmoscope. Steps for using the PanOptic ophthalmoscope are summarized in Table 40e-2. 
TABLE 40e-1 Steps for Performing Direct Ophthalmoscopy 
• Instruct the patient to remove glasses, keep the head straight, and to look steadily at a distant target straight in front. You may keep or remove your own glasses. Position your head at the same level as the patient’s head. 
• Use your right eye and right hand to examine the patient’s right eye, and use your left eye and left hand to examine the patient’s left eye. 
• Using the ophthalmoscope light as a pen light, briefly examine the external features of the eye, including lashes, lid margins, conjunctiva, sclera, iris, and pupil shape, size, and reactivity. 
• Shine the ophthalmoscope light into the patient’s pupil at arm’s length and observe the red reflex. Note abnormalities of the red reflex such as an opacity of the media. 
• Dialing up a +10 D lens in the lens wheel, while examining the eye from 10 cm, allows magnified viewing of the anterior segment of the eye. 
• Return the power of the lens in the wheel to zero, and move closer to the patient. Identify the optic disc by pointing the ophthalmoscope about 15° nasally or by following a blood vessel toward the apex of any branching. If the retina is out of focus, turn the lens dial either way, without moving your head. If the disc becomes clearer, keep turning until best focus is achieved; if it becomes more blurred, turn the dial the other way. 
• Once you visualize the optic nerve, note its shape, size, color, margins, and the cup. Also note the presence of any venous pulsation or surrounding pigment, such as a choroidal or scleral crescent. 
• Next, examine the macula. The macula is the area between the superior and inferior temporal vascular arcades, and its center is the fovea. You can examine the macula by pointing your ophthalmoscope about 15° temporal to the optic disc. 
Alternatively, ask the patient to look into the center of the light. Note the foveal reflex and the presence of any hemorrhage, exudate, abnormal blood vessels, scars, deposits, or other abnormalities. 
• Examine the retinal blood vessels by re-identifying the optic disc and following each of the four main branches away from the disc. The veins are dark red and relatively large. The arteries are narrower and bright red. 
• Ask the patient to look in the eight cardinal directions to allow you to view the peripheral fundus. In a patient with a well-dilated pupil, it is possible to visualize as far as the equator. 
TABLE 40e-2 Steps for Using the PanOptic Ophthalmoscope 
• Focus the ophthalmoscope: look through the scope at an object that is at least 10 to 15 feet away. Sharpen the image of the object by using the focusing wheel. Set the aperture dial to “small” or home position. 
• Turn the scope on, and adjust the light intensity to “Maximum.” 
• Ask the patient to look straight ahead. Move the ophthalmoscope close to the patient until the eyecup touches the patient’s brow. The eyecup should be compressed about half its length to optimize the view. 
• Locate the optic disc. 
• Examine the fundus as described in Table 40e-1. 
Common age-related changes include diminished foveal light reflex, drusen (small yellow subretinal deposits), mild RPE atrophy, and pigment clumping. Retinal hemorrhages may take various shapes and sizes depending on their location within the retina (Figs. 40e-3 and 40e-4). Flame-shaped hemorrhages are located at the level of the superficial nerve fiber layer and represent bleeding from the inner capillary network of the retina. A white-centered hemorrhage is a superficial flame-shaped hemorrhage with an area of central whitening, often representing edema, focal necrosis, or cellular infiltration. 
Causes of white-centered hemorrhage include bacterial endocarditis and septicemia (Roth spots), lymphoproliferative disorders, diabetes mellitus, hypertension, anemia, and collagen vascular disorders. Dot hemorrhages are small, round, superficial hemorrhages that also originate from the superficial capillary network of the retina. They resemble microaneurysms. Blot hemorrhages are slightly larger in size, dark, and intraretinal. They represent bleeding from the deep capillary network of the retina. Subhyaloid hemorrhages are variable in shape and size and tend to be larger than other types of hemorrhages. They often have a fluid level (“boat-shaped” hemorrhage) and are located within the space between the vitreous and the retina. Subretinal hemorrhages are located deep (external) to the retina. The retinal vessels can be seen crossing over (internal to) such hemorrhages. Subretinal hemorrhages are variable in size and most commonly are caused by choroidal neovascularization (e.g., wet macular degeneration). 
FIGURE 40e-3 Superficial flame-shaped hemorrhages, dot hemorrhages, and microaneurysms in a patient with nonproliferative diabetic retinopathy. 
FIGURE 40e-4 … in a patient with chronic leukemia. 
Conditions associated with retinal hemorrhages include diseases causing retinal microvasculopathy (Table 40e-3), retinitis, retinal macroaneurysm, papilledema, subarachnoid hemorrhage (Terson’s syndrome), Valsalva retinopathy, trauma (ocular injury, head injury, compression injuries of chest and abdomen, shaken baby syndrome, strangulation), macular degeneration, and posterior vitreous detachment. Hyperviscosity states may produce dot and blot hemorrhages, dilated veins (“string of sausages” appearance), optic disc edema, and exudates; similar changes can occur with adaptation to high altitude in mountain climbers. Microaneurysms are outpouchings of the retinal capillaries, appearing as red dots (similar to dot hemorrhages) and measuring 15–50 μm. 
Microaneurysms have increased permeability and may bleed or leak, resulting in localized retinal hemorrhage or edema. A microaneurysm ultimately thromboses and disappears within 3–6 months. Microaneurysms may occur in any condition that causes retinal microvasculopathy (Table 40e-3). 
TABLE 40e-3 Conditions Causing Retinal Microvasculopathy (excerpt) 
• Microemboli, e.g., talc retinopathy secondary to intravenous drug abuse, septicemia, endocarditis, Purtscher’s retinopathy 
• Carotid artery disease, carotid-cavernous fistula, aortic arch syndrome 
• Radiation retinopathy, head/neck irradiation 
Hard exudates are well-circumscribed, shiny, yellow deposits located within the retina. They arise at the margins of areas of retinal edema and indicate increased capillary permeability. Hard exudates contain lipoproteins and lipid-laden macrophages. They may clear spontaneously or following laser photocoagulation, often within 6 months. Hard exudates may occur in isolation or may be scattered throughout the fundus. They may occur in a circular (circinate) pattern centered around an area of leaking microaneurysms. A macular star consists of a radiating, star-shaped pattern of hard exudates that is characteristically seen in severe systemic hypertension and in neuroretinitis associated with cat-scratch disease. Conditions associated with hard exudates include those causing retinal microvasculopathy (Table 40e-3), papilledema, neuroretinitis such as cat-scratch disease and Lyme disease, retinal vascular lesions (macroaneurysm, retinal capillary hemangioma, Coats’ disease), intraocular tumors, and wet age-related macular degeneration. Drusen may be mistaken for hard exudates on ophthalmoscopy. Unlike hard exudates, drusen are nonrefractile subretinal deposits with blurred margins. They are usually seen in association with age-related macular degeneration. Cotton-wool spots are yellow/white superficial retinal lesions with indistinct feathery borders measuring 0.25–1 DD in size (Fig. 40e-5). 
They represent areas of edema within the retinal nerve fiber layer due to focal ischemia. Cotton-wool spots usually resolve spontaneously within 3 months. If the underlying ischemic condition persists, new lesions can develop in different locations. Cotton-wool spots often occur in conjunction with retinal hemorrhages and microaneurysms and represent retinal microvasculopathy caused by a number of systemic conditions (Table 40e-3). They may occur in isolation in HIV retinopathy, systemic lupus erythematosus, anemia, bodily trauma, other systemic conditions (Purtscher’s/Purtscher’s-like retinopathy), and interferon therapy. Retinal neovascular complexes are irregular meshworks of fine blood vessels that grow in response to severe retinal ischemia or chronic inflammation (Fig. 40e-6). They may occur on or adjacent to the optic disc or elsewhere in the retina. Neovascular complexes are very fragile and have a high risk for hemorrhaging, often causing visual loss. Diseases associated with retinal neovascularization include conditions that cause severe retinal microvasculopathy, especially diabetic and sickle cell retinopathies (Table 40e-3), intraocular tumors, intraocular inflammation (sarcoidosis, chronic uveitis), and chronic retinal detachment. 
FIGURE 40e-5 Cotton-wool spots, yellow-white superficial lesions with characteristic feathery borders, in a patient with hypertensive retinopathy. (From H Tabandeh, MF Goldberg: Retina in Systemic Disease: A Color Manual of Ophthalmoscopy. New York, Thieme, 2009.) 
FIGURE 40e-6 Optic disc neovascularization in a patient with severe proliferative diabetic retinopathy. Multiple hard exudates are also present. 
Common sources of retinal emboli include carotid artery atheromatous plaque, cardiac valve and septal abnormalities, cardiac arrhythmias, atrial myxoma, bacterial endocarditis, septicemia, fungemia, and intravenous drug abuse. Platelet emboli are yellowish in appearance and conform to the shape of the blood vessel. They usually originate from an atheromatous plaque within the carotid artery and can cause transient loss of vision (amaurosis fugax). Cholesterol emboli, otherwise termed Hollenhorst plaques, are yellow crystalline deposits that are commonly found at the bifurcations of the retinal arteries and may be associated with amaurosis fugax. Calcific emboli have a pearly white appearance, are larger than the platelet and cholesterol emboli, and tend to lodge in the larger retinal arteries in or around the optic disc. Calcific emboli often result in retinal arteriolar occlusion. Septic emboli can cause white-centered retinal hemorrhages (Roth spots), retinal microabscesses, and endogenous endophthalmitis. Fat embolism and amniotic fluid embolism are characterized by multiple small vessel occlusions, typically causing cotton-wool spots and few hemorrhages (Purtscher’s-like retinopathy). Talc embolism occurs with intravenous drug abuse and is characterized by multiple refractile deposits within the small retinal vessels. Any severe form of retinal artery embolism may result in retinal ischemia and its sequelae, including retinal neovascularization. Cherry red spot at the macula is the term used to describe the dark red appearance of the central foveal area in comparison to the surrounding macular region (Fig. 40e-7). This appearance is most commonly due to a relative loss of transparency of the parafoveal retina resulting from ischemic cloudy swelling or storage of macromolecules within the ganglion cell layer. Diseases associated with a cherry red spot at the macula include central retinal artery occlusion, sphingolipidoses, and mucolipidoses. 
FIGURE 40e-7 Cherry red spot at the macula and cloudy swelling of the macula in a patient with central retinal artery occlusion due to embolus originating from a carotid artery atheromatous plaque. 
Retinal crystals appear as fine, refractile, yellow-white deposits. Associated conditions include infantile cystinosis, primary hyperoxaluria, secondary oxalosis, Sjögren-Larsson syndrome, intravenous drug abuse (talc retinopathy), and drugs such as tamoxifen, canthaxanthin, nitrofurantoin, methoxyflurane, and ethylene glycol. Crystals may also be seen in primary retinal diseases such as juxtafoveal telangiectasia, gyrate atrophy, and Bietti’s crystalline degeneration. Old microemboli may mimic retinal crystals. Vascular sheathing appears as a yellow-white cuff surrounding a retinal artery or vein (Fig. 40e-8). Diseases associated with retinal vascular sheathing include sarcoidosis, tuberculosis, toxoplasmosis, syphilis, HIV, retinitis (cytomegalovirus, herpes zoster, and herpes simplex), Lyme disease, cat-scratch disease, multiple sclerosis, chronic leukemia, amyloidosis, Behçet’s disease, retinal vasculitis, retinal vascular occlusion, and chronic uveitis. 
FIGURE 40e-8 Vascular sheathing over the optic disc in a patient with neurosarcoidosis. 
Retinal detachment is the separation of the retina from the underlying RPE. There are three main types: (1) serous/exudative, (2) tractional, and (3) rhegmatogenous retinal detachment. In serous retinal detachment, the location of the subretinal fluid is position-dependent, characteristically gravitating to the lowermost part of the fundus (shifting fluid sign), and retinal breaks are absent. 
Diseases associated with serous/exudative retinal detachment include severe systemic hypertension, dural arteriovenous shunt, retinal vascular anomalies, hyperviscosity syndromes, papilledema, posterior uveitis, scleritis, orbital inflammation, and intraocular neoplasms such as choroidal melanoma, choroidal metastasis, lymphoma, and multiple myeloma. Tractional retinal detachment is caused by internal traction on the retina in the absence of a retinal break. The retina in the area of detachment is immobile and concave internally. Fibrovascular proliferation is a frequent associated finding. Conditions associated with tractional retinal detachment include vascular proliferative retinopathies such as severe proliferative diabetic retinopathy, branch retinal vein occlusion, sickle cell retinopathy, and retinopathy of prematurity. Ocular trauma, proliferative vitreoretinopathy, and intraocular inflammation are other causes of a tractional retinal detachment. Rhegmatogenous retinal detachment is caused by the presence of a retinal break, allowing fluid from the vitreous cavity to gain access to the subretinal space. The surface of the retina is usually convex forward. Rhegmatogenous retinal detachment has a corrugated appearance and undulates with eye movement. Causes of retinal breaks include posterior vitreous detachment, severe vitreoretinal traction, trauma, intraocular surgery, retinitis, and atrophic holes. Optic disc swelling is abnormal elevation of the optic disc with blurring of its margins (Fig. 40e-9). The term “papilledema” is used to describe swelling of the optic disc secondary to elevation of intracranial pressure. In papilledema, the normal venous pulsation at the disc is characteristically absent. 
The differential diagnosis of optic disc swelling includes papilledema, anterior optic neuritis (papillitis), central retinal vein occlusion, anterior ischemic optic neuropathy, toxic optic neuropathy, hereditary optic neuropathy, neuroretinitis, diabetic papillopathy, hypertension (Fig. 40e-10), respiratory failure, carotid-cavernous fistula, optic nerve infiltration (glioma, lymphoma, leukemia, sarcoidosis, and granulomatous infections), ocular hypotony, chronic intraocular inflammation, optic disc drusen (pseudopapilledema), and high hypermetropia (pseudopapilledema). 
FIGURE 40e-9 Optic disc swelling in a patient with papilledema due to idiopathic intracranial hypertension. The optic disc is hyperemic, with indistinct margins. Superficial hemorrhages are present. 
FIGURE 40e-10 Optic disc edema and retinal hemorrhages in a patient with malignant hypertension. 
Choroidal mass lesions appear thickened and may or may not be associated with increased pigmentation. Pigmented mass lesions include choroidal nevus (usually flat), choroidal malignant melanoma (Fig. 40e-11), and melanocytoma. Nonpigmented lesions include amelanotic choroidal melanoma, choroidal metastasis, retinoblastoma, capillary hemangioma, granuloma (e.g., Toxocara canis), choroidal detachment, choroidal hemorrhage, and wet age-related macular degeneration. Other rare tumors that may be visible on ophthalmoscopy include osteoma, astrocytoma (e.g., tuberous sclerosis), neurilemmoma, and leiomyoma. The differential diagnosis of flat pigmented lesions of the fundus is summarized in Table 40e-4. The appearance of chorioretinal scarring from old Toxoplasma chorioretinitis is shown in Fig. 40e-12. 
FIGURE 40e-11 Choroidal malignant melanoma. The lesion is highly elevated and pigmented, and has subretinal orange pigment deposits characteristic for malignant melanoma. 
FIGURE 40e-12 Chorioretinal scarring due to old Toxoplasma chorioretinitis. 
The lesion is flat and pigmented. Areas of hypopigmentation are also present.

PART 2 Cardinal Manifestations and Presentation of Diseases

Table 40e-4 Differential Diagnosis of Flat Pigmented Lesions of the Fundus
• Retinopathy in systemic diseases: Usher's syndrome, abetalipoproteinemia, Refsum's disease, Kearns-Sayre syndrome, Alström's syndrome, Cockayne's syndrome, Friedreich's ataxia, mucopolysaccharidoses, paraneoplastic syndrome
• Infections: congenital rubella (salt-and-pepper retinopathy), congenital
• Infections: Toxoplasma gondii, Toxocara canis, syphilis, cytomegalovirus, herpes zoster and herpes simplex viruses, West Nile virus, histoplasmosis, parasitic infection
• Choroiditis: sarcoidosis, sympathetic ophthalmia, Vogt-Koyanagi-Harada
• Infarct: severe hypertension, sickle cell hemoglobinopathies
• Trauma, cryotherapy, laser photocoagulation scars
• Drugs: chloroquine/hydroxychloroquine, thioridazine, chlorpromazine
• Hypertrophy of the retinal pigment epithelium

Chapter 41e Video Library of Neuro-Ophthalmology
Shirley H. Wray

The proper control of eye movements requires the coordinated activity of many different anatomic structures in the peripheral and central nervous system, and in turn, manifestations of a diverse array of neurologic and medical disorders are revealed as disorders of eye movement. In this remarkable video collection, an introduction to distinctive eye movement disorders encountered in the context of neuromuscular, paraneoplastic, demyelinating, neurovascular, and neurodegenerative disorders is presented.
Cases with Multiple Sclerosis
Video 41e-1 Fisher's One-and-a-Half Syndrome (ID164-2)
Video 41e-2 A Case of Ocular Flutter (ID166-2)
Video 41e-3 Downbeat Nystagmus and Periodic Alternating Nystagmus (ID168-6)
Video 41e-4 Bilateral Internuclear Ophthalmoplegia (ID933-1)
Cases with Myasthenia Gravis or Mitochondrial Myopathy
Video 41e-5 Unilateral Ptosis: Myasthenia Gravis (Thymic Tumor) (ID163-1)
Video 41e-6 Progressive External Ophthalmoplegia (Mitochondrial Cytopathy) (ID906-2)
Cases with Paraneoplastic Disease
Video 41e-7 Paraneoplastic Upbeat Nystagmus, Cancer of the Pancreas, Positive Anti-Hu Antibody (ID212-3)
Video 41e-8 Paraneoplastic Ocular Flutter, Small-Cell Adenocarcinoma of the Lung, Negative Marker (ID936-7)
Video 41e-9 Opsoclonus/Flutter, Bilateral Sixth Nerve Palsy, Adenocarcinoma of the Breast, Negative Marker (ID939-8)
Cases with Fisher's Syndrome
Video 41e-10 Bilateral Ptosis: Facial Diplegia, Total External Ophthalmoplegia, Positive Anti-GQ1b Antibody (ID944-1)
Cases with Vascular Disease
Video 41e-11 Retinal Emboli (Film or Fundus) (ID16-1)
Video 41e-12 Third Nerve Palsy (Microinfarct) (ID939-2)
Case with Neurodegenerative Disease
Video 41e-13 Apraxia of Eyelid Opening (Progressive Supranuclear Palsy) (ID932-3)
Case of Thyroid-Associated Ophthalmopathy
Video 41e-14 Restrictive Orbitopathy of Graves' Disease, Bilateral Exophthalmos (ID925-4)
Case with Wernicke's Encephalopathy
Video 41e-15 Bilateral Sixth Nerve Palsies (ID163-3)
Case with the Locked-in Syndrome
Video 41e-16 Ocular Dipping (ID4-1)

The Video Library of Neuro-Ophthalmology shows a number of cases with eye movement disorders. All the clips are taken from Dr. Shirley Wray's collection on the NOVEL website. To access, go to:
http://NOVEL.utah.edu/Wray
http://Respitory.Countway.Harvard.edu/Wray
and/or to her book: Shirley H. Wray, MD, PhD, Oxford University Press, 2014.
Chapter 42 Disorders of Smell and Taste
Richard L. Doty, Steven M. Bromley

All environmental chemicals necessary for life enter the body by the nose and mouth. The senses of smell (olfaction) and taste (gustation) monitor such chemicals, determine the flavor and palatability of foods and beverages, and warn of dangerous environmental conditions, including fire, air pollution, leaking natural gas, and bacteria-laden foodstuffs. These senses contribute significantly to quality of life and, when dysfunctional, can have untoward physical and psychological consequences. A basic understanding of these senses in health and disease is critical for the physician, because thousands of patients present to doctors' offices each year with complaints of chemosensory dysfunction. Among the more important recent developments in neurology is the discovery that decreased smell function is among the first signs, if not the first sign, of such neurodegenerative diseases as Parkinson's disease (PD) and Alzheimer's disease (AD), signifying their "presymptomatic" phase.

ANATOMY AND PHYSIOLOGY
Olfactory System Odorous chemicals enter the front of the nose during inhalation and active sniffing, as well as the back of the nose (nasopharynx) during deglutition. After reaching the highest recesses of the nasal cavity, they dissolve in the olfactory mucus and diffuse or are actively transported by specialized proteins to receptors located on the cilia of olfactory receptor cells. The cilia, dendrites, cell bodies, and proximal axonal segments of these bipolar cells are located within a unique neuroepithelium covering the cribriform plate, the superior nasal septum, the superior turbinate, and sectors of the middle turbinate (Fig. 42-1). Each of the ∼6 million bipolar receptor cells expresses only one of ∼450 receptor protein types, most of which respond to more than a single chemical.
When damaged, the receptor cells can be replaced by stem cells near the basement membrane. Unfortunately, such replacement is often incomplete. After coalescing into bundles surrounded by glia-like ensheathing cells (termed fila), the receptor cell axons pass through the cribriform plate to the olfactory bulbs, where they synapse with dendrites of other cell types within the glomeruli (Fig. 42-2). These spherical structures, which make up a distinct layer of the olfactory bulb, are a site of convergence of information, because many more fibers enter than leave them. Receptor cells that express the same type of receptor project to the same glomeruli, effectively making each glomerulus a functional unit. The major projection neurons of the olfactory system—the mitral and tufted cells—send primary dendrites into the glomeruli, connecting not only with the incoming receptor cell axons, but with dendrites of periglomerular cells. The activity of the mitral/tufted cells is modulated by the periglomerular cells, secondary dendrites from other mitral/tufted cells, and granule cells, the most numerous cells of the bulb. The latter cells, which are largely GABAergic, receive inputs from central brain structures and modulate the output of the mitral/tufted cells. Interestingly, like the olfactory receptor cells, some cells within the bulb undergo replacement. Thus, neuroblasts formed within the anterior subventricular zone of the brain migrate along the rostral migratory stream, ultimately becoming granule and periglomerular cells. The axons of the mitral and tufted cells synapse within the primary olfactory cortex (POC) (Fig. 42-3). The POC is defined as those cortical structures that receive direct projections from the olfactory bulb, most notably the piriform and entorhinal cortices. 
Although olfaction is unique in that its initial afferent projections bypass the thalamus, persons with damage to the thalamus can exhibit olfactory deficits, particularly ones of odor identification. Such deficits likely reflect the involvement of thalamic connections between the primary olfactory cortex and the orbitofrontal cortex (OFC), where odor identification occurs. The close anatomic ties between the olfactory system and the amygdala, hippocampus, and hypothalamus help to explain the intimate associations between odor perception and cognitive functions such as memory, motivation, arousal, autonomic activity, digestion, and sex.

Figure 42-1 Anatomy of the olfactory neural pathways, showing the distribution of olfactory receptors in the roof of the nasal cavity. (Copyright David Klemm, Faculty and Curriculum Support [FACS], Georgetown University Medical Center; used with permission.)

Figure 42-2 Schematic of the layers and wiring of the olfactory bulb. Each receptor type (red, green, blue) projects to a common glomerulus. The neural activity within each glomerulus is modulated by periglomerular cells. The activity of the primary projection cells, the mitral and tufted cells, is modulated by granule cells, periglomerular cells, and secondary dendrites from adjacent mitral and tufted cells. (From www.med.yale.edu/neurosurg/treloar/index.html.)

Figure 42-3 Anatomy of the base of the brain showing the primary olfactory cortex.

Taste System Tastants are sensed by specialized receptor cells present within taste buds—small grapefruit-like segmented structures located on the lateral margins and dorsum of the tongue, roof of the mouth, pharynx, larynx, and superior esophagus (Fig. 42-4). Lingual taste buds are embedded in well-defined protuberances, termed fungiform, foliate, and circumvallate papillae. After dissolving in a liquid, tastants enter the opening of the taste bud—the taste pore—and bind to receptors on microvilli, small extensions of receptor cells within each taste bud. Such binding changes the electrical potential across the taste cell, resulting in neurotransmitter release onto the first-order taste neurons. Although humans have ∼7500 taste buds, not all harbor taste-sensitive cells; some contain only one class of receptor (e.g., cells responsive only to sugars), whereas others contain cells sensitive to more than one class. The number of taste receptor cells per taste bud ranges from zero to well over 100. A small family of three G-protein-coupled receptors (GPCRs), namely T1R1, T1R2, and T1R3, mediates sweet and umami taste sensations. Bitter sensations, on the other hand, depend on T2R receptors, a family of ∼30 GPCRs expressed on cells different from those that express the sweet and umami receptors. T2Rs sense a wide range of bitter substances but do not distinguish among them. Sour tastants are sensed by the PKD2L1 receptor, a member of the transient receptor potential (TRP) protein family. Perception of salty sensations, such as those induced by sodium chloride, arises from the entry of Na+ ions into the cells via specialized membrane channels, such as the amiloride-sensitive Na+ channel. Recent studies have found that both bitter and sweet taste-related receptors are also present elsewhere in the body, most notably in the alimentary and respiratory tracts.

Figure 42-4 Schematic of the taste bud and its opening (pore), as well as the location of buds on the three major types of papillae: fungiform (anterior), foliate (lateral), and circumvallate (posterior).
This important discovery generalizes the concept of taste-related chemoreception to areas of the body beyond the mouth and throat, with α-gustducin, the taste-specific G-protein α-subunit, expressed in so-called brush cells found specifically within the human trachea, lung, pancreas, and gallbladder. These brush cells are rich in nitric oxide (NO) synthase, known to defend against xenobiotic organisms, protect the mucosa from acid-induced lesions, and, in the case of the gastrointestinal tract, stimulate vagal and splanchnic afferent neurons. NO further acts on nearby cells, including enteroendocrine cells, absorptive or secretory epithelial cells, mucosal blood vessels, and cells of the immune system. Members of the T2R family of bitter receptors and the sweet receptors of the T1R family have been identified within the gastrointestinal tract and in enteroendocrine cell lines. In some cases, these receptors are important for metabolism, with the T1R3 receptors and gustducin playing decisive roles in the sensing and transport of dietary sugars from the intestinal lumen into absorptive enterocytes via a sodium-dependent glucose transporter and in regulation of hormone release from gut enteroendocrine cells. In other cases, these receptors may be important for airway protection: a number of T2R bitter receptors in the motile cilia of the human airway respond to bitter compounds by increasing their beat frequency. One specific T2R38 taste receptor is expressed in human upper respiratory epithelia and responds to acyl-homoserine lactone quorum-sensing molecules secreted by Pseudomonas aeruginosa and other gram-negative bacteria. Differences in T2R38 functionality, as related to TAS2R38 genotype, correlate with susceptibility to upper respiratory infections in humans.
Taste information is sent to the brain via three cranial nerves (CNs): CN VII (the facial nerve, which involves the intermediate nerve with its branches, the greater petrosal and chorda tympani nerves), CN IX (the glossopharyngeal nerve), and CN X (the vagus nerve) (Fig. 42-5). CN VII innervates the anterior tongue and all of the soft palate, CN IX innervates the posterior tongue, and CN X innervates the laryngeal surface of the epiglottis, larynx, and proximal portion of the esophagus. The mandibular branch of CN V (V3) conveys somatosensory information (e.g., touch, burning, cooling, irritation) to the brain. Although not technically a gustatory nerve, CN V shares primary nerve routes with many of the gustatory nerve fibers and adds temperature, texture, pungency, and spiciness to the taste experience. The chorda tympani nerve is famous for taking a recurrent course through the facial canal in the petrosal portion of the temporal bone, passing through the middle ear, and then exiting the skull via the petrotympanic fissure, where it joins the lingual nerve (a division of CN V) near the tongue. This nerve also carries parasympathetic fibers to the submandibular and sublingual glands, whereas the greater petrosal nerve supplies the palatine glands, thereby influencing saliva production. The axons of the projection cells, which synapse with taste buds, enter the rostral portion of the nucleus of the solitary tract (NTS) within the medulla of the brainstem (Fig. 42-5). From the NTS, neurons then project to a division of the ventroposteromedial thalamic nucleus (VPM) via the medial lemniscus.

Figure 42-5 Schematic of the cranial nerves (CNs) that mediate taste function, including the chorda tympani nerve (CN VII), the glossopharyngeal nerve (CN IX), and the vagus nerve (CN X).
From here, projections are made to the rostral part of the frontal operculum and adjoining insula, a brain region considered the primary taste cortex (PTC). Projections from the PTC then go to the secondary taste cortex, namely the caudolateral OFC. This brain region is involved in the conscious recognition of taste qualities. Moreover, because it contains cells that are activated by several sensory modalities, it is likely a center for establishing "flavor."

The ability to smell is influenced, in everyday life, by such factors as age, gender, general health, nutrition, smoking, and reproductive state. Women typically outperform men on tests of olfactory function and retain normal smell function to a later age than do men. Significant decrements in the ability to smell are present in over 50% of the population between 65 and 80 years of age and in 75% of those 80 years of age and older (Fig. 42-6). Such presbyosmia helps to explain why many elderly report that food has little flavor, a problem that can result in nutritional disturbances. It also helps to explain why a disproportionate number of the elderly die in accidental gas poisonings. A relatively complete listing of conditions and disorders that have been associated with olfactory dysfunction is presented in Table 42-1. Aside from aging, the three most common identifiable causes of long-lasting or permanent smell loss seen in the clinic are, in order of frequency, severe upper respiratory infections, head trauma, and chronic rhinosinusitis. The physiologic basis for most head trauma–related losses is the shearing and subsequent scarring of the olfactory fila as they pass from the nasal cavity into the brain cavity. The cribriform plate does not have to be fractured or show pathology for smell loss to be present. Severity of trauma, as indexed by a poor Glasgow Coma Scale score on presentation and the length of posttraumatic amnesia, is associated with higher risk of olfactory impairment.
Less than 10% of posttraumatic anosmic patients will recover age-related normal function over time. This increases to nearly 25% of those with less-than-total loss. Upper respiratory infections, such as those associated with the common cold, influenza, pneumonia, or HIV, can directly and permanently harm the olfactory epithelium by decreasing receptor cell number, damaging cilia on remaining receptor cells, and inducing the replacement of sensory epithelium with respiratory epithelium. The smell loss associated with chronic rhinosinusitis is related to disease severity, with most loss occurring in cases where rhinosinusitis and polyposis are both present.

Figure 42-6 Scores on the University of Pennsylvania Smell Identification Test (UPSIT) as a function of subject age and sex. Numbers by each data point indicate sample sizes. Note that women identify odorants better than men at all ages. (From RL Doty et al: Science 226:1421, 1984. Copyright © 1984 American Association for the Advancement of Science.)

Table 42-1 Disorders and Conditions Associated with Compromised Olfactory Function, as Measured by Olfactory Testing
22q11 deletion syndrome; AIDS/HIV infection; Adenoid hypertrophy; Adrenal cortical insufficiency; Age; Alcoholism; Allergies; Alzheimer's disease; Amyotrophic lateral sclerosis (ALS); Anorexia nervosa; Asperger's syndrome; Ataxias; Attention deficit/hyperactivity disorder; Congenital; Cushing's syndrome; Cystic fibrosis; Degenerative ataxias; Liver disease; Lubag disease; Medications; Migraine; Multiple sclerosis; Multi-infarct dementia; Myasthenia gravis; Narcolepsy with cataplexy; Neoplasms, cranial/nasal; Nutritional deficiencies; Obstructive pulmonary disease; Obesity; Obsessive compulsive disorder; Pregnancy; Pseudohypoparathyroidism; Psychopathy; Radiation (therapeutic, cranial); REM behavior disorder
Although systemic glucocorticoid therapy can usually induce short-term functional improvement, it does not, on average, return smell test scores to normal, implying that chronic permanent neural loss is present and/or that short-term administration of systemic glucocorticoids does not completely mitigate the inflammation. It is well established that microinflammation in an otherwise seemingly normal epithelium can influence smell function. A number of neurodegenerative diseases are accompanied by olfactory impairment, including PD, AD, Huntington's disease, Down's syndrome, parkinsonism-dementia complex of Guam, dementia with Lewy bodies (DLB), multiple system atrophy, corticobasal degeneration, and frontotemporal dementia; smell loss can also occur in multiple sclerosis (MS) and idiopathic rapid eye movement (REM) behavioral sleep disorder (iRBD). Olfactory impairment in PD often predates the clinical diagnosis by at least 4 years. Studies of the sequence of formation of abnormal α-synuclein aggregates and Lewy bodies in staged cases suggest that the olfactory bulbs may be, along with the dorsomotor nucleus of the vagus, the first site of neural damage in PD. In postmortem studies of patients with very mild "presymptomatic" signs of AD, poorer smell function has been associated with higher levels of AD-related pathology. Smell loss is more marked in patients with early clinical manifestations of DLB than in those with mild AD. Interestingly, smell loss is minimal or nonexistent in progressive supranuclear palsy and 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced parkinsonism. In MS, the olfactory disturbance varies as a function of the plaque activity within the frontal and temporal lobes. The smell loss seen in iRBD is of the same magnitude as that found in PD. This is of particular interest because patients with iRBD frequently develop PD and hyposmia. There is some evidence that iRBD may actually represent an early associated condition of PD.
REM behavior disorder is not only seen in its idiopathic form, but can also be associated with narcolepsy. This led to a recent study of narcoleptic patients with and without REM behavior disorder, which demonstrated that narcolepsy, independent of REM behavior disorder, was associated with impairments in olfactory function. Orexin A, also known as hypocretin-1, is dramatically diminished or undetectable in the cerebrospinal fluid of patients with narcolepsy and cataplexy (Chap. 38). The orexin-containing neurons in the hypothalamus project throughout the entire olfactory system (from the olfactory epithelium to the olfactory cortex), and damage to these orexin-containing projections may be one underlying mechanism for impaired olfactory performance in narcoleptic patients. The administration of intranasal orexin A (hypocretin-1) appears to result in improved olfactory function, supporting the notion that mild olfactory impairment is not only a primary feature of narcolepsy with cataplexy, but that central nervous system orexin deficiency may be a fundamental part of the mechanism for this loss. The majority of patients who present with taste dysfunction exhibit olfactory, not taste, loss. This is because most flavors attributed to taste actually depend on retronasal stimulation of the olfactory receptors during deglutition. As noted earlier, taste buds only mediate basic tastes such as sweet, sour, bitter, salty, and umami. Significant impairment of whole-mouth gustatory function is rare outside of generalized metabolic disturbances or systemic use of some medications, because taste bud regeneration occurs and peripheral damage alone would require the involvement of multiple cranial nerve pathways. 
Nonetheless, taste can be influenced by (1) the release of foul-tasting materials from the oral cavity from oral medical conditions or appliances (e.g., gingivitis, purulent sialadenitis), (2) transport problems of tastants to the taste buds (e.g., drying of the orolingual mucosa, infections, inflammatory conditions), (3) damage to the taste buds themselves (e.g., local trauma, invasive carcinomas), (4) damage to the neural pathways innervating the taste buds (e.g., middle ear infections), (5) damage to central structures (e.g., multiple sclerosis, tumor, epilepsy, stroke), and (6) systemic disturbances of metabolism (e.g., diabetes, thyroid disease, medications). Unlike CN VII, CN IX is relatively protected along its path, although iatrogenic interventions such as tonsillectomy, bronchoscopy, laryngoscopy, endotracheal intubation, and radiation therapy can result in selective injury. CN VII damage commonly results from mastoidectomy, tympanoplasty, and stapedectomy, in some cases inducing persistent metallic sensations. Bell’s palsy (Chap. 455) is one of the most common causes of CN VII injury that results in taste disturbance. On rare occasions, migraines (Chap. 447) are associated with a gustatory prodrome or aura, and in some cases, tastants can trigger a migraine attack. Interestingly, dysgeusia occurs in some cases of burning mouth syndrome (BMS; also termed glossodynia or glossalgia), as do dry mouth and thirst. BMS is likely associated with dysfunction of the trigeminal nerve (CN V). 
Some of the etiologies suggested for this poorly understood syndrome are amenable to treatment, including (1) nutritional deficiencies (e.g., iron, folic acid, B vitamins, zinc), (2) diabetes mellitus (possibly predisposing to oral candidiasis), (3) denture allergy, (4) mechanical irritation from dentures or oral devices, (5) repetitive movements of the mouth (e.g., tongue thrusting, teeth grinding, jaw clenching), (6) tongue ischemia as a result of temporal arteritis, (7) periodontal disease, (8) reflux esophagitis, and (9) geographic tongue. Although both taste and smell can be adversely influenced by pharmacologic agents, drug-related taste alterations are more common. Indeed, over 250 medications have been reported to alter the ability to taste. Major offenders include antineoplastic agents, antirheumatic drugs, antibiotics, and blood pressure medications. Terbinafine, a commonly used antifungal, has been linked to taste disturbance lasting up to 3 years. In a recent controlled trial, nearly two-thirds of individuals taking eszopiclone (Lunesta) experienced a bitter dysgeusia that was stronger in women, systematically related to the time since drug administration, and positively correlated with both blood and saliva levels of the drug. Intranasal use of nasal gels and sprays containing zinc, which are common over-the-counter prophylactics for upper respiratory viral infections, has been implicated in loss of smell function. Whether their efficacy in preventing such infections, which are the most common cause of anosmia and hyposmia, outweighs their potential detriment to smell function requires study. Dysgeusia occurs commonly in the context of drugs used to treat or minimize symptoms of cancer, with a weighted prevalence of 56–76% depending on the type of cancer treatment. Attempts to prevent taste problems from such drugs using prophylactic zinc sulfate or amifostine have proven to be minimally beneficial.
Although antiepileptic medications are occasionally used to treat smell or taste disturbances, the use of topiramate has been reported to result in a reversible loss of an ability to detect and recognize tastes and odors during treatment. As with olfaction, a number of systemic disorders can affect taste. These include chronic renal failure, end-stage liver disease, vitamin and mineral deficiencies, diabetes mellitus, and hypothyroidism (to name a few). In diabetes, there appears to be a progressive loss of taste beginning with glucose and then extending to other sweeteners, salty stimuli, and then all stimuli. Psychiatric conditions can be associated with chemosensory alterations (e.g., depression, schizophrenia, bulimia). A recent review of tactile, gustatory, and olfactory hallucinations demonstrated that no one type of hallucinatory experience is pathognomonic to any given diagnosis. Pregnancy proves to be a unique condition with regard to taste function. There appears to be an increase in dislike and intensity of bitter tastes during the first trimester that may help to ensure that pregnant women avoid poisons during a critical phase of fetal development. Similarly, a relative increase in the preference for salt and bitter in the second and third trimesters may support the ingestion of much needed electrolytes to expand fluid volume and support a varied diet. In most cases, a careful clinical history will establish the probable etiology of a chemosensory problem, including questions about its nature, onset, duration, and pattern of fluctuations. Sudden loss suggests the possibility of head trauma, ischemia, infection, or a psychiatric condition. Gradual loss can reflect the development of a progressive obstructive lesion. Intermittent loss suggests the likelihood of an inflammatory process. The patient should be asked about potential precipitating events, such as cold or flu infections prior to symptom onset, because these often go underappreciated. 
Information regarding head trauma, smoking habits, drug and alcohol abuse (e.g., intranasal cocaine, chronic alcoholism in the context of Wernicke's and Korsakoff's syndromes), exposures to pesticides and other toxic agents, and medical interventions is also informative. A determination of all the medications that the patient was taking before and at the time of symptom onset is important, because many can cause chemosensory disturbances. Comorbid medical conditions associated with smell impairment, such as renal failure, liver disease, hypothyroidism, diabetes, or dementia, should be assessed. Delayed puberty in association with anosmia (with or without midline craniofacial abnormalities, deafness, and renal anomalies) suggests the possibility of Kallmann's syndrome. Recollection of epistaxis, discharge (clear, purulent, or bloody), nasal obstruction, allergies, and somatic symptoms, including headache or irritation, may have localizing value. Questions related to memory, parkinsonian signs, and seizure activity (e.g., automatisms, blackouts, auras, déjà vu) should be posed. Pending litigation and the possibility of malingering should be considered. Modern forced-choice olfactory tests can detect malingering from improbable responses. Neurologic and otorhinolaryngologic (ORL) examinations, along with appropriate brain and nasosinus imaging, aid in the evaluation of patients with olfactory or gustatory complaints. The neural evaluation should focus on cranial nerve function, with particular attention to possible skull base and intracranial lesions. Visual acuity, field, and optic disc examinations aid in the detection of intracranial mass lesions that induce increased intracranial pressure (papilledema) and optic atrophy, especially when considering Foster Kennedy syndrome. The ORL examination should thoroughly assess the intranasal architecture and mucosal surfaces.
Polyps, masses, and adhesions of the turbinates to the septum may compromise the flow of air to the olfactory receptors, because less than a fifth of the inspired air traverses the olfactory cleft in the unobstructed state. Blood tests may be helpful to identify such conditions as diabetes, infection, heavy metal exposure, nutritional deficiency (e.g., vitamin B6 or B12), allergy, and thyroid, liver, and kidney disease. As with other sensory disorders, quantitative sensory testing is advised. Self-reports of patients can be misleading, and a number of patients who complain of chemosensory dysfunction have normal function for their age and gender. Quantitative smell and taste testing provides valid information for workers' compensation and other legal claims, as well as a way to accurately assess treatment interventions. A number of standardized olfactory and taste tests are commercially available. Most evaluate the ability of patients to detect and identify odors or tastes. For example, the most widely used of these tests, the 40-item University of Pennsylvania Smell Identification Test (UPSIT), uses norms based on nearly 4000 normal subjects. A determination is made of both absolute dysfunction (i.e., mild loss, moderate loss, severe loss, total loss, probable malingering) and relative dysfunction (percentile rank for age and gender). Although electrophysiologic testing (e.g., odor event-related potentials) is available at some smell and taste centers, it requires complex stimulus presentation and recording equipment and rarely provides additional diagnostic information. With the exception of electrogustometers, taste tests have only recently become commercially available. Most use filter paper strips impregnated with tastants, so no stimulus preparation is required. Given the various mechanisms by which olfactory and gustatory disturbance can occur, management of patients tends to be condition specific.
For example, patients with hypothyroidism, diabetes, or infections often benefit from specific treatments to correct the underlying disease process that is adversely influencing chemoreception. For most patients who present primarily with obstructive/transport loss affecting the nasal and paranasal regions (e.g., allergic rhinitis, polyposis, intranasal neoplasms, nasal deviations), medical and/or surgical intervention is often beneficial. Antifungal and antibiotic treatments may reverse taste problems secondary to candidiasis or other oral infections. Chlorhexidine mouthwash mitigates some salty or bitter dysgeusias, conceivably as a result of its strong positive charge. Excessive dryness of the oral mucosa is a problem with many medications and conditions, and artificial saliva (e.g., Xerolube) or oral pilocarpine treatments may prove beneficial. Other methods to improve salivary flow include the use of mints, lozenges, or sugarless gum. Flavor enhancers may make food more palatable (e.g., monosodium glutamate), but caution is advised to avoid overusing ingredients containing sodium or sugar, particularly in circumstances when a patient also has underlying hypertension or diabetes. Medications that induce distortions of taste can often be discontinued and replaced with other types of medications or modes of therapy. As mentioned earlier, pharmacologic agents result in taste disturbances much more frequently than smell disturbances, and over 250 medications have been reported to alter the sense of taste. It is important to note, however, that many drug-related effects are long lasting and not reversed by short-term drug discontinuance. 
PART 2 Cardinal Manifestations and Presentation of Diseases

A recent study of endoscopic sinus surgery in patients with chronic rhinosinusitis and hyposmia revealed that patients with severe olfactory dysfunction prior to the surgery had a more dramatic and sustained improvement over time compared to patients with milder olfactory dysfunction prior to intervention. In the case of intranasal and sinus-related inflammatory conditions, such as those seen with allergy, viral infection, and trauma, the use of intranasal or systemic glucocorticoids may also be helpful. One common approach is to use a tapering course of oral prednisone. The utility of restoring olfaction with either topical or systemic glucocorticoids has been studied. Topical intranasal administration was found to be less effective in general than systemic administration; however, the effects of different nasal administration techniques were not analyzed; for example, intranasal glucocorticoids are more effective if administered in Moffett's position (head in the inverted position, such as over the edge of the bed, with the bridge of the nose perpendicular to the floor). After head trauma, an initial trial of glucocorticoids may help to reduce local edema and the potential deleterious deposition of scar tissue around olfactory fila at the level of the cribriform plate. Treatments are limited for patients with chemosensory loss or primary injury to neural pathways. Nonetheless, spontaneous recovery can occur. In a follow-up study of 542 patients presenting to our center with smell loss from a variety of causes, modest improvement occurred over an average time period of 4 years in about half of the participants. However, only 11% of the anosmic and 23% of the hyposmic patients regained normal age-related function. Interestingly, the amount of dysfunction present at the time of presentation, not etiology, was the best predictor of prognosis. 
Other predictors were age and the duration of dysfunction prior to initial testing. A nonblinded study has reported that patients with hyposmia may benefit from smelling strong odors (e.g., eucalyptol, citronella, eugenol, and phenyl ethyl alcohol) before going to bed and immediately upon awakening each day over the course of several months. The rationale for such an approach comes from animal studies demonstrating that prolonged exposure to odorants can induce increased neural activity within the olfactory bulb. In an uncontrolled study, α-lipoic acid (400 mg/d), an essential cofactor for many enzyme complexes with possible antioxidant effects, was reported to be beneficial in mitigating smell loss following viral infection of the upper respiratory tract; controlled studies are needed to confirm this observation. This agent has also been suggested to be useful in some cases of hypogeusia and BMS. The use of zinc and vitamin A in treating olfactory disturbances is controversial, and there does not appear to be much benefit beyond replenishing established deficiencies. However, zinc has been shown to improve taste function secondary to hepatic deficiencies, and retinoids (bioactive vitamin A derivatives) are known to play an essential role in the survival of olfactory neurons. One protocol in which zinc was infused with chemotherapy treatments suggested a possible protective effect against developing taste impairment. Diseases of the alimentary tract can not only influence chemoreceptive function but also occasionally impair vitamin B12 absorption. This can result in a relative deficiency of vitamin B12, theoretically contributing to olfactory nerve disturbance. Vitamin B2 (riboflavin) and magnesium supplements are reported in the alternative literature to aid in the management of migraine that, in turn, may be associated with smell dysfunction. 
Because vitamin D deficiency is a cofactor in chemotherapy-induced mucocutaneous toxicity and dysgeusia, adding vitamin D3, 1000–2000 units per day, may benefit some patients with smell and taste complaints during or following chemotherapy. A number of medications have reportedly been used with success in ameliorating olfactory symptoms, although strong scientific evidence for efficacy is generally lacking. A report that theophylline improved smell function was uncontrolled and failed to account for the fact that some meaningful improvement occurs without treatment; indeed, the percentage of responders was about the same (∼50%) as that noted by others to show spontaneous improvement over a similar time period. Antiepileptics and some antidepressants (e.g., amitriptyline) have been used to treat dysosmias and smell distortions, particularly following head trauma. Ironically, amitriptyline is also frequently on the list of medications that can ultimately distort smell and taste function, possibly through its anticholinergic effects. A recent study suggests that use of the centrally acting acetylcholinesterase inhibitor donepezil in Alzheimer's disease resulted in improvements on smell identification measures that correlated with overall clinician-based impressions of change in dementia severity scores. Alternative therapies, such as acupuncture, meditation, cognitive-behavioral therapy, and yoga, can help patients manage the uncomfortable experiences associated with chemosensory disturbance and oral pain syndromes and cope with the psychosocial stressors surrounding the impairment. Modification of diet and eating habits is also important. By accentuating the other sensory experiences of a meal, such as food texture, aroma, temperature, and color, one can optimize the overall eating experience for a patient. In some cases, a flavor enhancer like monosodium glutamate (MSG) can be added to foods to increase palatability and encourage intake. 
Proper oral and nasal hygiene and routine dental care are extremely important ways for patients to protect themselves from disorders of the mouth and nose that can ultimately result in chemosensory disturbance. Patients should be warned not to overcompensate for their taste loss by adding excessive amounts of sugar or salt. Smoking cessation and the discontinuance of oral tobacco use are essential in the management of any patient with smell and/or taste disturbance and should be repeatedly emphasized. A major and often overlooked element of therapy comes from chemosensory testing itself. Confirmation or lack of confirmation of loss is beneficial to patients who come to believe, in light of unsupportive family members and medical providers, that they may be “crazy.” In cases where the loss is minor, patients can be informed of the likelihood of a more positive prognosis. Importantly, quantitative testing places the patient's problem into overall perspective. Thus, it is often therapeutic for an older person to know that, while his or her smell function is not what it used to be, it still falls above the average of his or her peer group. Without testing, many such patients are simply told they are getting old and nothing can be done for them, leading in some cases to depression and decreased self-esteem.

Disorders of Hearing
Anil K. Lalwani

Hearing loss is one of the most common sensory disorders in humans and can present at any age. Nearly 10% of the adult population has some hearing loss, and one-third of individuals age >65 years have a hearing loss of sufficient magnitude to require a hearing aid. The function of the external and middle ear is to amplify sound to facilitate conversion of the mechanical energy of the sound wave into an electrical signal by the inner ear hair cells, a process called mechanotransduction (Fig. 43-1). 
Sound waves enter the external auditory canal and set the tympanic membrane (eardrum) in motion, which in turn moves the malleus, incus, and stapes of the middle ear. Movement of the footplate of the stapes causes pressure changes in the fluid-filled inner ear, eliciting a traveling wave in the basilar membrane of the cochlea. The tympanic membrane and the ossicular chain in the middle ear serve as an impedance-matching mechanism, improving the efficiency of energy transfer from air to the fluid-filled inner ear. Stereocilia of the hair cells of the organ of Corti, which rests on the basilar membrane, are in contact with the tectorial membrane and are deformed by the traveling wave. A point of maximal displacement of the basilar membrane is determined by the frequency of the stimulating tone. High-frequency tones cause maximal displacement of the basilar membrane near the base of the cochlea, whereas for low-frequency sounds, the point of maximal displacement is toward the apex of the cochlea. The inner and outer hair cells of the organ of Corti have different innervation patterns, but both are mechanoreceptors. The afferent innervation relates principally to the inner hair cells, and the efferent innervation relates principally to outer hair cells. The motility of the outer hair cells alters the micromechanics of the inner hair cells, creating a cochlear amplifier, which explains the exquisite sensitivity and frequency selectivity of the cochlea. Beginning in the cochlea, the frequency specificity is maintained at each point of the central auditory pathway: dorsal and ventral cochlear nuclei, trapezoid body, superior olivary complex, lateral lemniscus, inferior colliculus, medial geniculate body, and auditory cortex. At low frequencies, individual auditory nerve fibers can respond more or less synchronously with the stimulating tone. 
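The tonotopic mapping described above (high frequencies at the base, low frequencies at the apex) is often approximated by Greenwood's frequency-position function. The sketch below uses the commonly cited constants for the human cochlea; the function and its constants are standard audiological values, not figures given in this chapter.

```python
def greenwood_frequency(x):
    """Approximate characteristic frequency (Hz) at fractional distance x
    along the basilar membrane, from the apex (x = 0) to the base (x = 1).
    Constants A, a, k are commonly cited human values (an assumption here)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map toward the apex, high frequencies toward the base:
# greenwood_frequency(0.0) is roughly 20 Hz, greenwood_frequency(1.0)
# roughly 20 kHz, spanning the range of human hearing.
```

The exponential form of the function reflects the roughly logarithmic spacing of frequency along the cochlea, which is also why audiometric test frequencies (250 to 8000 Hz) are chosen in octave steps.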
At higher frequencies, phase-locking occurs so that neurons alternate in response to particular phases of the cycle of the sound wave. Intensity is encoded by the amount of neural activity in individual neurons, the number of neurons that are active, and the specific neurons that are activated. There is evidence that the right and left ears as well as the central nervous system may process speech asymmetrically. Generally, a sound is processed symmetrically from the peripheral to the central auditory system. However, a “right ear advantage” exists for dichotic listening tasks, in which subjects are asked to report on competing sounds presented to each ear. In most individuals, a perceptual right ear advantage for consonant-vowel syllables, stop consonants, and words also exists. Similarly, whereas central auditory processing for sounds is symmetric with minimal lateral specialization for the most part, speech processing is lateralized. There is specialization of the left auditory cortex for speech recognition and production, and of the right hemisphere for emotional and tonal aspects of speech. Left hemisphere dominance for speech is found in 95–98% of right-handed persons and 70–80% of left-handed persons. Hearing loss can result from disorders of the auricle, external auditory canal, middle ear, inner ear, or central auditory pathways (Fig. 43-2). In general, lesions in the auricle, external auditory canal, or middle ear that impede the transmission of sound from the external environment to the inner ear cause conductive hearing loss, whereas lesions that impair mechanotransduction in the inner ear or transmission of the electrical signal along the eighth nerve to the brain cause sensorineural hearing loss.

Conductive Hearing Loss The external ear, the external auditory canal, and the middle ear apparatus are designed to collect and amplify sound and efficiently transfer the mechanical energy of the sound wave to the fluid-filled cochlea. 
Factors that obstruct the transmission of sound or serve to dampen the acoustical energy result in conductive hearing loss. Conductive hearing loss can occur from obstruction of the external auditory canal by cerumen, debris, and foreign bodies; swelling of the lining of the canal; atresia or neoplasms of the canal; perforations of the tympanic membrane; disruption of the ossicular chain, as occurs with necrosis of the long process of the incus in trauma or infection; otosclerosis; or fluid, scarring, or neoplasms in the middle ear. Rarely, inner ear malformations or pathologies, such as superior semicircular canal dehiscence, lateral semicircular canal dysplasia, incomplete partition of the inner ear, and large vestibular aqueduct, may also be associated with conductive hearing loss. Eustachian tube dysfunction is extremely common in adults and may predispose to acute otitis media (AOM) or serous otitis media (SOM). Trauma, AOM, and chronic otitis media are the usual factors responsible for tympanic membrane perforation. While small perforations often heal spontaneously, larger defects usually require surgical intervention. Tympanoplasty is highly effective (>90%) in the repair of tympanic membrane perforations. Otoscopy is usually sufficient to diagnose AOM, SOM, chronic otitis media, cerumen impaction, tympanic membrane perforation, and eustachian tube dysfunction; tympanometry can be useful to confirm the clinical suspicion of these conditions. Cholesteatoma, a benign tumor composed of stratified squamous epithelium in the middle ear or mastoid, occurs frequently in adults. This is a slowly growing lesion that destroys bone and normal ear tissue. 
Theories of pathogenesis include traumatic immigration and invasion of squamous epithelium through a retraction pocket, implantation of squamous epithelia in the middle ear through a perforation or surgery, and metaplasia following chronic infection and irritation.

FIGURE 43-1 Ear anatomy. A. Drawing of modified coronal section through external ear and temporal bone, with structures of the middle and inner ear demonstrated. B. High-resolution view of inner ear.

On examination, there is often a perforation of the tympanic membrane filled with cheesy white squamous debris. The presence of an aural polyp obscuring the tympanic membrane is highly suggestive of an underlying cholesteatoma. A chronically draining ear that fails to respond to appropriate antibiotic therapy should raise suspicion of a cholesteatoma. Conductive hearing loss secondary to ossicular erosion is common. Surgery is required to remove this destructive process. 
FIGURE 43-2 An algorithm for the approach to hearing loss. AOM, acute otitis media; BAER, brainstem auditory evoked response; CNS, central nervous system; HL, hearing loss; SNHL, sensorineural hearing loss; SOM, serous otitis media; TM, tympanic membrane. *Computed tomography scan of temporal bone. †Magnetic resonance imaging (MRI) scan.

Conductive hearing loss with a normal ear canal and intact tympanic membrane suggests either ossicular pathology or the presence of a “third window” in the inner ear (see below). Fixation of the stapes from otosclerosis is a common cause of low-frequency conductive hearing loss. 
It occurs equally in men and women and is inherited as an autosomal dominant trait with incomplete penetrance; in some cases, it may be a manifestation of osteogenesis imperfecta. Hearing impairment usually presents between the late teens and the forties. In women, the otosclerotic process is accelerated during pregnancy, and the hearing loss is often first noticeable at this time. A hearing aid or a simple outpatient surgical procedure (stapedectomy) can provide adequate auditory rehabilitation. Extension of otosclerosis beyond the stapes footplate to involve the cochlea (cochlear otosclerosis) can lead to mixed or sensorineural hearing loss. Fluoride therapy to prevent hearing loss from cochlear otosclerosis is of uncertain value. Disorders that lead to the formation of a pathologic “third window” in the inner ear can be associated with conductive hearing loss. There are normally two major openings, or windows, that connect the inner ear with the middle ear and serve as conduits for transmission of sound; these are, respectively, the oval and round windows. A third window is formed where the normally hard otic bone surrounding the inner ear is eroded; dissipation of the acoustic energy at the third window is responsible for the “inner ear conductive hearing loss.” The superior semicircular canal dehiscence syndrome resulting from erosion of the otic bone over the superior semicircular canal can present with conductive hearing loss that mimics otosclerosis. A common symptom is vertigo evoked by loud sounds (Tullio phenomenon), by Valsalva maneuvers that change middle ear pressure, or by applying positive pressure on the tragus (the cartilage anterior to the external opening of the ear canal). Patients with this syndrome also complain of being able to hear the movement of their eyes and neck. 
A large jugular bulb or jugular bulb diverticulum can create a “third window” by eroding into the vestibular aqueduct or posterior semicircular canal; the symptoms are similar to those of the superior semicircular canal dehiscence syndrome. Sensorineural Hearing Loss Sensorineural hearing loss results from either damage to the mechanotransduction apparatus of the cochlea or disruption of the electrical conduction pathway from the inner ear to the brain. Thus, injury to hair cells, supporting cells, auditory neurons, or the central auditory pathway can cause sensorineural hearing loss. Damage to the hair cells of the organ of Corti may be caused by intense noise, viral infections, ototoxic drugs (e.g., salicylates, quinine and its synthetic analogues, aminoglycoside antibiotics, loop diuretics such as furosemide and ethacrynic acid, and cancer chemotherapeutic agents such as cisplatin), fractures of the temporal bone, meningitis, cochlear otosclerosis (see above), Ménière’s disease, and aging. Congenital malformations of the inner ear may be the cause of hearing loss in some adults. Genetic predisposition alone or in concert with environmental exposures may also be responsible (see below). Presbycusis (age-associated hearing loss) is the most common cause of sensorineural hearing loss in adults. In the early stages, it is characterized by symmetric, gentle to sharply sloping high-frequency hearing loss (Fig. 43-3). With progression, the hearing loss involves all frequencies. More importantly, the hearing impairment is associated with significant loss in clarity. There is a loss of discrimination for phonemes, recruitment (abnormal growth of loudness), and particular difficulty in understanding speech in noisy environments such as at restaurants and social events. Hearing aids are helpful in enhancing the signal-to-noise ratio by amplifying sounds that are close to the listener. 
Although hearing aids are able to amplify sounds, they cannot restore the clarity of hearing. Thus, amplification with hearing aids may provide only limited rehabilitation once the word recognition score deteriorates below 50%. Cochlear implants are the treatment of choice when hearing aids prove inadequate, even when hearing loss is incomplete (see below).

FIGURE 43-3 Presbycusis or age-related hearing loss. The audiogram shows a moderate to severe downsloping sensorineural hearing loss typical of presbycusis. The loss of high-frequency hearing is associated with a decreased speech discrimination score; consequently, patients complain of lack of clarity of hearing, especially in a noisy background. HL, hearing threshold level; SRT, speech reception threshold.

Ménière's disease is characterized by episodic vertigo, fluctuating sensorineural hearing loss, tinnitus, and aural fullness. Tinnitus and/or deafness may be absent during the initial attacks of vertigo, but it invariably appears as the disease progresses and increases in severity during acute attacks. The annual incidence of Ménière's disease is 0.5–7.5 per 1000; onset is most frequently in the fifth decade of life but may also occur in young adults or the elderly. Histologically, there is distention of the endolymphatic system (endolymphatic hydrops) leading to degeneration of vestibular and cochlear hair cells. This may result from endolymphatic sac dysfunction secondary to infection, trauma, autoimmune disease, inflammatory causes, or tumor; an idiopathic etiology constitutes the largest category and is most accurately referred to as Ménière's disease. Although any pattern of hearing loss can be observed, typically, low-frequency, unilateral sensorineural hearing impairment is present. 
Magnetic resonance imaging (MRI) should be obtained to exclude retrocochlear pathology such as a cerebellopontine angle tumor or demyelinating disorder. Therapy is directed toward the control of vertigo. A 2-g/d low-salt diet is the mainstay of treatment for control of rotatory vertigo. Diuretics, a short course of glucocorticoids, and intratympanic gentamicin may also be useful adjuncts in recalcitrant cases. Surgical therapy of vertigo is reserved for unresponsive cases and includes endolymphatic sac decompression, labyrinthectomy, and vestibular nerve section. Both labyrinthectomy and vestibular nerve section abolish rotatory vertigo in >90% of cases. Unfortunately, there is no effective therapy for hearing loss, tinnitus, or aural fullness from Ménière's disease. Sensorineural hearing loss may also result from any neoplastic, vascular, demyelinating, infectious, or degenerative disease or trauma affecting the central auditory pathways. HIV leads to both peripheral and central auditory system pathology and is associated with sensorineural hearing impairment. Primary diseases of the central nervous system can also present with hearing impairment. Characteristically, a reduction in clarity of hearing and speech comprehension is much greater than the loss of the ability to hear pure tones. Auditory testing is consistent with an auditory neuropathy; normal otoacoustic emissions (OAE) and an abnormal auditory brainstem response (ABR) are typical (see below). Hearing loss can accompany hereditary sensorimotor neuropathies and inherited disorders of myelin. Tumors of the cerebellopontine angle such as vestibular schwannoma and meningioma usually present with asymmetric sensorineural hearing loss with greater deterioration of speech understanding than pure tone hearing. Multiple sclerosis may present with acute unilateral or bilateral hearing loss; typically, pure tone testing remains relatively stable while speech understanding fluctuates. 
Isolated labyrinthine infarction can present with acute hearing loss and vertigo due to a cerebrovascular accident involving the posterior circulation, usually the anterior inferior cerebellar artery; it may also be the heralding sign of impending catastrophic basilar artery infarction (Chap. 446). A finding of conductive and sensory hearing loss in combination is termed mixed hearing loss. Mixed hearing losses are due to pathology of both the middle and inner ear, as can occur in otosclerosis involving the ossicles and the cochlea, head trauma, chronic otitis media, cholesteatoma, middle ear tumors, and some inner ear malformations. Trauma resulting in temporal bone fractures may be associated with conductive, sensorineural, or mixed hearing loss. If the fracture spares the inner ear, there may simply be conductive hearing loss due to rupture of the tympanic membrane or disruption of the ossicular chain. These abnormalities can be surgically corrected. Profound hearing loss and severe vertigo are associated with temporal bone fractures involving the inner ear. A perilymphatic fistula associated with leakage of inner ear fluid into the middle ear can occur and may require surgical repair. An associated facial nerve injury is not uncommon. Computed tomography (CT) is best suited to assess fracture of the traumatized temporal bone, evaluate the ear canal, and determine the integrity of the ossicular chain and the involvement of the inner ear. Cerebrospinal fluid leaks that accompany temporal bone fractures are usually self-limited; the value of prophylactic antibiotics is uncertain. Tinnitus is defined as the perception of a sound when there is no sound in the environment. It may have a buzzing, roaring, or ringing quality and may be pulsatile (synchronous with the heartbeat). Tinnitus is often associated with either a conductive or sensorineural hearing loss. The pathophysiology of tinnitus is not well understood. 
The cause of the tinnitus can usually be determined by finding the cause of the associated hearing loss. Tinnitus may be the first symptom of a serious condition such as a vestibular schwannoma. Pulsatile tinnitus requires evaluation of the vascular system of the head to exclude vascular tumors such as glomus jugulare tumors, aneurysms, dural arteriovenous fistulas, and stenotic arterial lesions; it may also occur with SOM. It is most commonly associated with some abnormality of the jugular bulb such as a large jugular bulb or jugular bulb diverticulum. More than half of childhood hearing impairment is thought to be hereditary; hereditary hearing impairment (HHI) can also manifest later in life. HHI may be classified as either nonsyndromic, when hearing loss is the only clinical abnormality, or syndromic, when hearing loss is associated with anomalies in other organ systems. Nearly two-thirds of HHIs are nonsyndromic, and the remaining one-third are syndromic. Between 70 and 80% of nonsyndromic HHI is inherited in an autosomal recessive manner and designated DFNB; another 15–20% is autosomal dominant (DFNA). Less than 5% is X-linked (DFNX) or maternally inherited via the mitochondria. More than 150 loci harboring genes for nonsyndromic HHI have been mapped, with recessive loci outnumbering dominant; numerous genes have now been identified (Table 43-1). The hearing genes fall into the categories of structural proteins (MYH9, MYO7A, MYO15, TECTA, DIAPH1), transcription factors (POU3F4, POU4F3), ion channels (KCNQ4, SLC26A4), and gap junction proteins (GJB2, GJB3, GJB6). Several of these genes, including GJB2, TECTA, and TMC1, cause both autosomal dominant and recessive forms of nonsyndromic HHI. In general, the hearing loss associated with dominant genes has its onset in adolescence or adulthood, varies in severity, and progresses with age, whereas the hearing loss associated with recessive inheritance is congenital and profound. 
Connexin 26, product of the GJB2 gene, is particularly important because it is responsible for nearly 20% of all cases of childhood deafness; half of genetic deafness in children is GJB2-related. Two frameshift mutations, 35delG and 167delT, account for >50% of the cases; however, screening for these two mutations alone is insufficient, and sequencing of the entire gene is required to diagnose GJB2-related recessive deafness. The 167delT mutation is highly prevalent in Ashkenazi Jews; ~1 in 1765 individuals in this population are homozygous and affected. The hearing loss can also vary among the members of the same family, suggesting that other genes or factors influence the auditory phenotype. In addition to GJB2, several other nonsyndromic genes are associated with hearing loss that progresses with age. The contribution of genetics to presbycusis is also becoming better understood. Sensitivity to aminoglycoside ototoxicity can be maternally transmitted through a mitochondrial mutation. Susceptibility to noise-induced hearing loss may also be genetically determined. There are >400 syndromic forms of hearing loss. These include Usher's syndrome (retinitis pigmentosa and hearing loss), Waardenburg's syndrome (pigmentary abnormality and hearing loss), Pendred's syndrome (thyroid organification defect and hearing loss), Alport's syndrome (renal disease and hearing loss), Jervell and Lange-Nielsen syndrome (prolonged QT interval and hearing loss), neurofibromatosis type 2 (bilateral acoustic schwannoma), and mitochondrial disorders (mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes [MELAS]; myoclonic epilepsy and ragged red fibers [MERRF]; progressive external ophthalmoplegia [PEO]) (Table 43-2). 
APPROACH TO THE PATIENT: Disorders of the Sense of Hearing The goal in the evaluation of a patient with auditory complaints is to determine (1) the nature of the hearing impairment (conductive vs sensorineural vs mixed), (2) the severity of the impairment (mild, moderate, severe, or profound), (3) the anatomy of the impairment (external ear, middle ear, inner ear, or central auditory pathway), and (4) the etiology. The history should elicit characteristics of the hearing loss, including the duration of deafness, unilateral versus bilateral involvement, nature of onset (sudden vs insidious), and rate of progression (rapid vs slow). Symptoms of tinnitus, vertigo, imbalance, aural fullness, otorrhea, headache, facial nerve dysfunction, and head and neck paresthesias should be noted. Information regarding head trauma, exposure to ototoxins, occupational or recreational noise exposure, and family history of hearing impairment may also be important. A sudden onset of unilateral hearing loss, with or without tinnitus, may represent a viral infection of the inner ear, vestibular schwannoma, or a stroke. Patients with unilateral hearing loss (sensory or conductive) usually complain of reduced hearing, poor sound localization, and difficulty hearing clearly with background noise. Gradual progression of a hearing deficit is common with otosclerosis, noise-induced hearing loss, vestibular schwannoma, or Ménière’s disease. Small vestibular schwannomas typically present with asymmetric hearing impairment, tinnitus, and imbalance (rarely vertigo); cranial neuropathy, in particular of the trigeminal or facial nerve, may accompany larger tumors. In addition to hearing loss, Ménière’s disease may be associated with episodic vertigo, tinnitus, and aural fullness. Hearing loss with otorrhea is most likely due to chronic otitis media or cholesteatoma. Examination should include the auricle, external ear canal, and tympanic membrane. 
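The severity grades named above (mild, moderate, severe, profound) are conventionally assigned from a pure-tone average (PTA) of audiometric thresholds. The sketch below uses one common set of frequencies and dB HL cutoffs; these particular values are a widely used convention and an assumption here, not figures specified in this chapter.

```python
def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000)):
    """Mean air-conduction threshold (dB HL) over the speech frequencies.
    The choice of frequencies is one common convention (an assumption)."""
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

def severity(pta_db_hl):
    """Map a pure-tone average to a conventional severity grade.
    Cutoffs follow one widely used audiometric scheme (an assumption)."""
    if pta_db_hl <= 25:
        return "normal"
    if pta_db_hl <= 40:
        return "mild"
    if pta_db_hl <= 55:
        return "moderate"
    if pta_db_hl <= 70:
        return "moderately severe"
    if pta_db_hl <= 90:
        return "severe"
    return "profound"
```

For example, a patient with thresholds of 30, 45, and 60 dB HL at 500, 1000, and 2000 Hz has a PTA of 45 dB HL, which this scheme grades as moderate hearing loss.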
The external ear canal of the elderly is often dry and fragile; it is preferable to clean cerumen with wall-mounted suction or cerumen loops and to avoid irrigation. In examining the eardrum, the topography of the tympanic membrane is more important than the presence or absence of the light reflex. In addition to the pars tensa (the lower two-thirds of the tympanic membrane), the pars flaccida (upper one-third of the tympanic membrane) above the short process of the malleus should also be examined for retraction pockets that may be evidence of chronic eustachian tube dysfunction or cholesteatoma. Insufflation of the ear canal is necessary to assess tympanic membrane mobility and compliance. Careful inspection of the nose, nasopharynx, and upper respiratory tract is indicated. Unilateral serous effusion should prompt a fiberoptic examination of the nasopharynx to exclude neoplasms. Cranial nerves should be evaluated with special attention to facial and trigeminal nerves, which are commonly affected with tumors involving the cerebellopontine angle. 
The Rinne and Weber tuning fork tests, with a 512-Hz tuning fork, are used to screen for hearing loss, differentiate conductive from sensorineural hearing losses, and confirm the findings of audiologic evaluation. The Rinne test compares the ability to hear by air conduction with the ability to hear by bone conduction. The tines of a vibrating tuning fork are held near the opening of the external auditory canal, and then the stem is placed on the mastoid process; for direct contact, it may be placed on teeth or dentures. The patient is asked to indicate whether the tone is louder by air conduction or bone conduction. Normally, and in the presence of sensorineural hearing loss, a tone is heard louder by air conduction than by bone conduction; however, with conductive hearing loss of ≥30 dB (see “Audiologic Assessment,” below), the bone-conduction stimulus is perceived as louder than the air-conduction stimulus. 
For the Weber test, the stem of a vibrating tuning fork is placed on the head in the midline and the patient is asked whether the tone is heard in both ears or better in one ear than in the other. With a unilateral conductive hearing loss, the tone is perceived in the affected ear. With a unilateral sensorineural hearing loss, the tone is perceived in the unaffected ear. A 5-dB difference in hearing between the two ears is required for lateralization.

LABORATORY ASSESSMENT OF HEARING

Audiologic Assessment The minimum audiologic assessment for hearing loss should include the measurement of pure tone air-conduction and bone-conduction thresholds, speech reception threshold, word recognition score, tympanometry, acoustic reflexes, and acoustic-reflex decay. This test battery provides a screening evaluation of the entire auditory system and allows one to determine whether further differentiation of a sensory (cochlear) from a neural (retrocochlear) hearing loss is indicated. Pure tone audiometry assesses hearing acuity for pure tones. The test is administered by an audiologist and is performed in a sound-attenuated chamber. The pure tone stimulus is delivered with an audiometer, an electronic device that allows the presentation of specific frequencies (generally between 250 and 8000 Hz) at specific intensities. Air- and bone-conduction thresholds are established for each ear. Air-conduction thresholds are determined by presenting the stimulus in air with the use of headphones. Bone-conduction thresholds are determined by placing the stem of a vibrating tuning fork or an oscillator of an audiometer in contact with the head. 
In the presence of a hearing loss, broad-spectrum noise is presented to the nontest ear for masking purposes so that responses are based on perception from the ear under test. The responses are measured in decibels. An audiogram is a plot of intensity in decibels of hearing threshold versus frequency. A decibel (dB) is equal to 20 times the logarithm of the ratio of the sound pressure required to achieve threshold in the patient to the sound pressure required to achieve threshold in a normal-hearing person. Therefore, a change of 6 dB represents doubling of sound pressure, and a change of 20 dB represents a tenfold change in sound pressure. Loudness, which depends on the frequency, intensity, and duration of a sound, doubles with approximately each 10-dB increase in sound pressure level. Pitch, on the other hand, does not directly correlate with frequency. The perception of pitch changes slowly in the low and high frequencies. In the middle tones, which are important for human speech, pitch varies more rapidly with changes in frequency. Pure tone audiometry establishes the presence and severity of hearing impairment, unilateral versus bilateral involvement, and the type of hearing loss. Conductive hearing losses with a large mass component, as is often seen in middle ear effusions, produce elevation of thresholds that predominate in the higher frequencies. Conductive hearing losses with a large stiffness component, as in fixation of the footplate of the stapes in early otosclerosis, produce threshold elevations in the lower frequencies. Often, the conductive hearing loss involves all frequencies, suggesting involvement of both stiffness and mass. In general, sensorineural hearing losses such as presbycusis affect higher frequencies more than lower frequencies (Fig. 43-3). An exception is Ménière’s disease, which is characteristically associated with low-frequency sensorineural hearing loss. 
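The decibel arithmetic above can be checked numerically. The sketch below uses hypothetical sound pressures in arbitrary units, and the function name `spl_db` is illustrative rather than any standard API:

```python
import math

def spl_db(p, p_ref):
    """Level in decibels: 20 * log10 of the ratio of two sound pressures."""
    return 20 * math.log10(p / p_ref)

# Doubling the sound pressure raises the level by about 6 dB.
print(round(spl_db(2.0, 1.0), 1))   # 6.0
# A tenfold change in sound pressure corresponds to 20 dB.
print(round(spl_db(10.0, 1.0), 1))  # 20.0
# Loudness roughly doubles per ~10-dB increase; 10 dB corresponds to
# about a 3.16-fold change in sound pressure.
print(round(10 ** (10 / 20), 2))    # 3.16
```

This makes concrete why an audiogram threshold shift of 20 dB reflects a tenfold, not a twentyfold, change in the sound pressure needed to reach threshold.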
Noise-induced hearing loss has an unusual pattern of hearing impairment in which the loss at 4000 Hz is greater than at higher frequencies. Vestibular schwannomas characteristically affect the higher frequencies, but any pattern of hearing loss can be observed. Speech recognition requires greater synchronous neural firing than is necessary for appreciation of pure tones. Speech audiometry tests the clarity with which one hears. The speech reception threshold (SRT) is defined as the intensity at which speech is recognized as a meaningful symbol and is obtained by presenting two-syllable words with an equal accent on each syllable. The intensity at which the patient can repeat 50% of the words correctly is the SRT. Once the SRT is determined, discrimination or word recognition ability is tested by presenting one-syllable words at 25–40 dB above the SRT. The words are phonetically balanced in that the phonemes (speech sounds) occur in the list of words at the same frequency that they occur in ordinary conversational English. An individual with normal hearing or conductive hearing loss can repeat 88–100% of the phonetically balanced words correctly. Patients with a sensorineural hearing loss have variable loss of discrimination. As a general rule, neural lesions produce greater deficits in discrimination than do cochlear lesions. For example, in a patient with mild asymmetric sensorineural hearing loss, a clue to the diagnosis of vestibular schwannoma is the presence of greater than expected deterioration in discrimination ability. Deterioration in discrimination ability at higher intensities above the SRT also suggests a lesion in the eighth nerve or central auditory pathways. Tympanometry measures the impedance of the middle ear to sound and is useful in diagnosis of middle ear effusions. A tympanogram is the graphic representation of change in impedance or compliance as the pressure in the ear canal is changed. 
Normally, the middle ear is most compliant at atmospheric pressure, and the compliance decreases as the pressure is increased or decreased (type A); this pattern is seen with normal hearing or in the presence of sensorineural hearing loss. Compliance that does not change with change in pressure suggests middle ear effusion (type B). With a negative pressure in the middle ear, as with eustachian tube obstruction, the point of maximal compliance occurs with negative pressure in the ear canal (type C). A tympanogram in which no point of maximal compliance can be obtained is most commonly seen with discontinuity of the ossicular chain (type Ad). A reduction in the maximal compliance peak can be seen in otosclerosis (type As). During tympanometry, an intense tone elicits contraction of the stapedius muscle. The change in compliance of the middle ear with contraction of the stapedius muscle can be detected. The presence or absence of this acoustic reflex is important in determining the etiology of hearing loss as well as in the anatomic localization of facial nerve paralysis. The acoustic reflex can help differentiate between conductive hearing loss due to otosclerosis and that caused by an inner ear “third window”: it is absent in otosclerosis and present in inner ear conductive hearing loss. Normal or elevated acoustic reflex thresholds in an individual with sensorineural hearing impairment suggest a cochlear hearing loss. An absent acoustic reflex in the setting of sensorineural hearing loss is not helpful in localizing the site of lesion. Assessment of acoustic reflex decay helps differentiate sensory from neural hearing losses. In neural hearing loss, such as with vestibular schwannoma, the reflex adapts or decays with time. OAEs generated by outer hair cells only can be measured with microphones inserted into the external auditory canal. The emissions may be spontaneous or evoked with sound stimulation. 
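The tympanogram patterns described above can be summarized as a small classifier. The numeric cutoffs below are hypothetical illustrations chosen only to separate the type A/As/Ad/B/C categories, not clinical criteria:

```python
def tympanogram_type(peak_compliance_ml, peak_pressure_dapa):
    """Classify a tympanogram from its peak compliance (mL) and the
    ear-canal pressure (daPa) at which the peak occurs.
    Cutoff values are illustrative only."""
    if peak_compliance_ml is None:
        return "B"   # flat trace, no peak: suggests middle ear effusion
    if peak_pressure_dapa < -100:
        return "C"   # peak at negative pressure: eustachian tube obstruction
    if peak_compliance_ml > 1.5:
        return "Ad"  # abnormally high compliance: ossicular discontinuity
    if peak_compliance_ml < 0.3:
        return "As"  # reduced compliance peak: otosclerosis
    return "A"       # normal middle ear or sensorineural hearing loss

print(tympanogram_type(0.7, -20))   # A
print(tympanogram_type(None, 0))    # B
print(tympanogram_type(0.8, -250))  # C
```

The ordering of the checks mirrors the clinical logic: absence of a peak is decided first, then peak position, then peak height.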
The presence of OAEs indicates that the outer hair cells of the organ of Corti are intact and can be used to assess auditory thresholds and to distinguish sensory from neural hearing losses. Evoked Responses Electrocochleography measures the earliest evoked potentials generated in the cochlea and the auditory nerve. Receptor potentials recorded include the cochlear microphonic, generated by the outer hair cells of the organ of Corti, and the summating potential, generated by the inner hair cells in response to sound. The whole nerve action potential representing the composite firing of the first-order neurons can also be recorded during electrocochleography. Clinically, the test is useful in the diagnosis of Ménière’s disease, where an elevation of the ratio of summating potential to action potential is seen. Brainstem auditory evoked responses (BAERs), also known as auditory brainstem responses (ABRs), are useful in differentiating the site of sensorineural hearing loss. In response to sound, five distinct electrical potentials arising from different stations along the peripheral and central auditory pathway can be identified using computer averaging from scalp surface electrodes. BAERs are valuable in situations in which patients cannot or will not give reliable voluntary thresholds. They are also used to assess the integrity of the auditory nerve and brainstem in various clinical situations, including intraoperative monitoring, and in determination of brain death. The vestibular-evoked myogenic potential (VEMP) test elicits a vestibulocolic reflex whose afferent limb arises from acoustically sensitive cells in the saccule, with signals conducted via the inferior vestibular nerve. VEMP is a biphasic, short-latency response recorded from the tonically contracted sternocleidomastoid muscle in response to loud auditory clicks or tones. 
VEMPs may be diminished or absent in patients with early and late Ménière’s disease, vestibular neuritis, benign paroxysmal positional vertigo, and vestibular schwannoma. On the other hand, the threshold for VEMPs may be lower in cases of superior canal dehiscence, other inner ear dehiscence, and perilymphatic fistula. Imaging Studies The choice of radiologic tests is largely determined by whether the goal is to evaluate the bony anatomy of the external, middle, and inner ear or to image the auditory nerve and brain. Axial and coronal CT of the temporal bone with fine 0.3- to 0.6-mm cuts is ideal for determining the caliber of the external auditory canal, integrity of the ossicular chain, and presence of middle ear or mastoid disease; it can also detect inner ear malformations. CT is also ideal for the detection of bone erosion with chronic otitis media and cholesteatoma. Pöschl reformatting in the plane of the superior semicircular canal is required for the identification of dehiscence or absence of bone over the superior semicircular canal. MRI is superior to CT for imaging of retrocochlear pathology such as vestibular schwannoma, meningioma, other lesions of the cerebellopontine angle, demyelinating lesions of the brainstem, and brain tumors. Both CT and MRI are equally capable of identifying inner ear malformations and assessing cochlear patency for preoperative evaluation of patients for cochlear implantation. In general, conductive hearing losses are amenable to surgical correction, whereas sensorineural hearing losses are usually managed medically. Atresia of the ear canal can be surgically repaired, often with significant improvement in hearing. Tympanic membrane perforations due to chronic otitis media or trauma can be repaired with an outpatient tympanoplasty. Likewise, conductive hearing loss associated with otosclerosis can be treated by stapedectomy, which is successful in >95% of cases. 
Tympanostomy tubes allow the prompt return of normal hearing in individuals with middle ear effusions. Hearing aids are effective and well tolerated in patients with conductive hearing losses. Patients with mild, moderate, and severe sensorineural hearing losses are regularly rehabilitated with hearing aids of varying configuration and strength. Hearing aids have been improved to provide greater fidelity and have been miniaturized. The current generation of hearing aids can be placed entirely within the ear canal, thus reducing any stigma associated with their use. In general, the more severe the hearing impairment, the larger the hearing aid required for auditory rehabilitation. Digital hearing aids lend themselves to individual programming, and multiple and directional microphones at the ear level may be helpful in noisy surroundings. Because all hearing aids amplify noise as well as speech, the only absolute solution to the problem of noise is to place the microphone closer to the speaker than the noise source. This arrangement is not possible with a self-contained, cosmetically acceptable device. A significant limitation of rehabilitation with a hearing aid is that although it is able to enhance detection of sound with amplification, it cannot restore clarity of hearing that is lost with presbycusis. Patients with unilateral deafness have difficulty with sound localization and reduced clarity of hearing in background noise. They may benefit from a CROS (contralateral routing of signal) hearing aid in which a microphone is placed on the hearing-impaired side and the sound is transmitted to the receiver placed on the contralateral ear. The same result may be obtained with a bone-anchored hearing aid (BAHA), in which a hearing aid clamps to a screw integrated into the skull on the hearing-impaired side. Like the CROS hearing aid, the BAHA transfers the acoustic signal to the contralateral hearing ear, but it does so by vibrating the skull. 
Patients with profound deafness on one side and some hearing loss in the better ear are candidates for a BICROS hearing aid; it differs from the CROS hearing aid in that the patient wears a hearing aid, and not simply a receiver, in the better ear. Unfortunately, while CROS and BAHA devices provide benefit, they do not restore hearing in the deaf ear. Only cochlear implants can restore hearing (see below). Increasingly, cochlear implants are being investigated for the treatment of patients with single-sided deafness; early reports show great promise in not only restoring hearing but also improving sound localization and performance in background noise. In many situations, including lectures and the theater, hearing-impaired persons benefit from assistive devices that are based on the principle of having the speaker closer to the microphone than any source of noise. Assistive devices include infrared and frequency-modulated (FM) transmission as well as an electromagnetic loop around the room for transmission to the individual’s hearing aid. Hearing aids with telecoils can also be used with properly equipped telephones in the same way. In the event that the hearing aid provides inadequate rehabilitation, cochlear implants may be appropriate (Fig. 43-4). Criteria for implantation include severe to profound hearing loss with open-set sentence recognition of ≤40% under best aided conditions. Worldwide, more than 300,000 hearing-impaired individuals have received cochlear implants. Cochlear implants are neural prostheses that convert sound energy to electrical energy and can be used to stimulate the auditory division of the eighth nerve directly. In most cases of profound hearing impairment, the auditory hair cells are lost but the ganglionic cells of the auditory division of the eighth nerve are preserved. 
Cochlear implants consist of electrodes that are inserted into the cochlea through the round window, speech processors that extract acoustical elements of speech for conversion to electrical currents, and a means of transmitting the electrical energy through the skin. Patients with implants experience sound that helps with speech reading, allows open-set word recognition, and helps in modulating the person’s own voice. Usually, within the first 3–6 months after implantation, adult patients can understand speech without visual cues. With the current generation of multichannel cochlear implants, nearly 75% of patients are able to converse on the telephone.

FIGURE 43-4 A cochlear implant is composed of an external microphone and speech processor worn on the ear and a receiver implanted underneath the temporalis muscle. The internal receiver is attached to an electrode that is placed surgically in the cochlea.

The U.S. Food and Drug Administration recently approved the first hybrid cochlear implant for the treatment of high-frequency hearing loss. Patients with presbycusis typically have normal low-frequency hearing, while suffering from high-frequency hearing loss associated with loss of clarity that cannot always be adequately rehabilitated with a hearing aid. However, these patients are not candidates for conventional cochlear implants because they have too much residual hearing. The hybrid implant has been specifically designed for this patient population; it has a shorter electrode than a conventional cochlear implant and can be introduced into the cochlea atraumatically, thus preserving low-frequency hearing. Individuals with a hybrid implant use their own natural low-frequency “acoustic” hearing and rely on the implant for providing “electrical” high-frequency hearing. 
Patients who have received the hybrid implant perform better on speech testing in both quiet and noisy backgrounds. For individuals who have had both eighth nerves destroyed by trauma or bilateral vestibular schwannomas (e.g., neurofibromatosis type 2), brainstem auditory implants placed near the cochlear nucleus may provide auditory rehabilitation. Tinnitus often accompanies hearing loss. Like background noise, tinnitus can degrade speech comprehension in individuals with hearing impairment. Therapy for tinnitus is usually directed toward minimizing the appreciation of tinnitus. Relief of the tinnitus may be obtained by masking it with background music. Hearing aids are also helpful in tinnitus suppression, as are tinnitus maskers, devices that present a sound to the affected ear that is more pleasant to listen to than the tinnitus. The use of a tinnitus masker is often followed by several hours of inhibition of the tinnitus. Antidepressants have been shown to be beneficial in helping patients cope with tinnitus. Hard-of-hearing individuals often benefit from a reduction in unnecessary noise in the environment (e.g., radio or television) to enhance the signal-to-noise ratio. Speech comprehension is aided by lip reading; therefore, the impaired listener should be seated so that the face of the speaker is well illuminated and easily seen. Although speech should be in a loud, clear voice, one should be aware that in sensorineural hearing losses in general and in hard-of-hearing elderly in particular, recruitment (abnormal perception of loud sounds) may be troublesome. Above all, optimal communication cannot take place without both parties giving it their full and undivided attention. Conductive hearing losses may be prevented by prompt antibiotic therapy of adequate duration for acute otitis media (AOM) and by ventilation of the middle ear with tympanostomy tubes in middle ear effusions lasting ≥12 weeks. 
Loss of vestibular function and deafness due to aminoglycoside antibiotics can largely be prevented by careful monitoring of serum peak and trough levels. Some 10 million Americans have noise-induced hearing loss, and 20 million are exposed to hazardous noise in their employment. Noise-induced hearing loss can be prevented by avoidance of exposure to loud noise or by regular use of ear plugs or fluid-filled ear muffs to attenuate intense sound. Table 43-3 lists loudness levels for a variety of environmental sounds. High-risk activities for noise-induced hearing loss include use of electrical equipment for wood and metal working and target practice or hunting with small firearms. All internal-combustion and electric engines, including snow and leaf blowers, snowmobiles, outboard motors, and chainsaws, require protection of the user with hearing protectors. Virtually all noise-induced hearing loss is preventable through education, which should begin before the teenage years. Programs for conservation of hearing in the workplace are required by the Occupational Safety and Health Administration (OSHA) whenever the exposure over an 8-h period averages 85 dB. OSHA mandates that workers in such noisy environments have hearing monitoring and protection programs that include a preemployment screen, an annual audiologic assessment, and the mandatory use of hearing protectors. Exposure to loud sounds above 85 dB in the work environment is restricted by OSHA, with halving of allowed exposure time for each increment of 5 dB above this threshold; for example, exposure to 90 dB is permitted for 8 h, 95 dB for 4 h, and 100 dB for 2 h (Table 43-4).

TABLE 43-3 DECIBEL (LOUDNESS) LEVEL OF COMMON ENVIRONMENTAL NOISE
Source                               Decibel (dB)
Weakest sound heard                  0
Whisper                              30
Normal conversation                  55–65
City traffic inside car              85
OSHA monitoring requirement begins   90
Jackhammer                           95
Abbreviation: OSHA, Occupational Safety and Health Administration.

TABLE 43-4
Sound Level (dB)   Duration per Day (h)
90                 8
92                 6
95                 4
97                 3
100                2
102                1.5
105                1
110                0.5
115                ≤0.25
Note: Exposure to impulsive or impact noise should not exceed 140-dB peak sound pressure level.
Source: From https://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=standards&p_id=9735.

Chapter 44 Sore Throat, Earache, and Upper Respiratory Symptoms
Michael A. Rubin, Larry C. Ford, Ralph Gonzales

Infections of the upper respiratory tract (URIs) have a tremendous impact on public health. They are among the most common reasons for visits to primary care providers, and although the illnesses are typically mild, their high incidence and transmission rates place them among the leading causes of time lost from work or school. Even though a minority (~25%) of cases are caused by bacteria, URIs are the leading diagnoses for which antibiotics are prescribed on an outpatient basis in the United States. The enormous consumption of antibiotics for these illnesses has contributed to the rise in antibiotic resistance among common community-acquired pathogens such as Streptococcus pneumoniae—a trend that in itself has an enormous influence on public health. Although most URIs are caused by viruses, distinguishing patients with primary viral infection from those with primary bacterial infection is difficult. Signs and symptoms of bacterial and viral URIs are typically indistinguishable. Until consistent, inexpensive, and rapid testing becomes available and is used widely, acute infections will be diagnosed largely on clinical grounds. The judicious use and potential for misuse of antibiotics in this setting pose definite challenges. Nonspecific URIs are a broadly defined group of disorders that collectively constitute the leading cause of ambulatory care visits in the United States. By definition, nonspecific URIs have no prominent localizing features. 
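Returning to the OSHA noise-exposure rule in the preceding section, the permitted daily duration follows a simple halving formula, T = 8 / 2^((L − 90) / 5) hours. A sketch (the function name is illustrative; Table 43-4 rounds its entries):

```python
def osha_permissible_hours(level_db):
    """Permissible daily exposure: 8 h at 90 dB, halved for every
    5-dB increase in sound level (5-dB exchange rate)."""
    return 8 / 2 ** ((level_db - 90) / 5)

for level in (90, 95, 100, 102, 105, 110, 115):
    print(level, round(osha_permissible_hours(level), 2))
# e.g., 90 -> 8.0 h, 95 -> 4.0 h, 102 -> ~1.52 h (tabulated as 1.5), 115 -> 0.25 h
```

The 5-dB exchange rate means each 5-dB rise in level halves the allowed time, consistent with the 8-h, 4-h, and 2-h examples given for 90, 95, and 100 dB.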
They are identified by a variety of descriptive names, including acute infective rhinitis, acute rhinopharyngitis/nasopharyngitis, acute coryza, and acute nasal catarrh, as well as by the inclusive label common cold. The large assortment of URI classifications reflects the wide variety of causative infectious agents and the varied manifestations of common pathogens. Nearly all nonspecific URIs are caused by viruses spanning multiple virus families and many antigenic types. For instance, there are at least 100 immunotypes of rhinovirus (Chap. 223), the most common cause of URI (~30–40% of cases); other causes include influenza virus (three immunotypes; Chap. 224) as well as parainfluenza virus (four immunotypes), coronavirus (at least three immunotypes), and adenovirus (47 immunotypes) (Chap. 223). Respiratory syncytial virus (RSV), a well-established pathogen in pediatric populations, is also a recognized cause of significant disease in elderly and immunocompromised individuals. A host of additional viruses, including some viruses not typically associated with URIs (e.g., enteroviruses, rubella virus, and varicella-zoster virus), account for a small percentage of cases in adults each year. Although new diagnostic modalities (e.g., nasopharyngeal swab for polymerase chain reaction [PCR]) can assign a viral etiology, there are few specific treatment options, and no pathogen is identified in a substantial proportion of cases. A specific diagnostic workup beyond a clinical diagnosis is generally unnecessary in an otherwise healthy adult. The signs and symptoms of nonspecific URI are similar to those of other URIs but lack a pronounced localization to one particular anatomic location, such as the sinuses, pharynx, or lower airway. Nonspecific URI commonly presents as an acute, mild, and self-limited catarrhal syndrome with a median duration of ~1 week (range, 2–10 days). 
Signs and symptoms are diverse and frequently variable across patients, even when caused by the same virus. The principal signs and symptoms of nonspecific URI include rhinorrhea (with or without purulence), nasal congestion, cough, and sore throat. Other manifestations, such as fever, malaise, sneezing, lymphadenopathy, and hoarseness, are more variable, with fever more common among infants and young children. This varying presentation may reflect differences in host response as well as in infecting organisms; myalgias and fatigue, for example, sometimes are seen with influenza and parainfluenza infections, whereas conjunctivitis may suggest infection with adenovirus or enterovirus. Findings on physical examination are frequently nonspecific and unimpressive. Between 0.5% and 2% of colds are complicated by secondary bacterial infections (e.g., rhinosinusitis, otitis media, and pneumonia), particularly in higher-risk populations such as infants, elderly persons, and chronically ill or immunosuppressed individuals. Secondary bacterial infections usually are associated with a prolonged course of illness, increased severity of illness, and localization of signs and symptoms, often as a rebound after initial clinical improvement (the “double-dip” sign). Purulent secretions from the nares or throat often are misinterpreted as an indication of bacterial sinusitis or pharyngitis. These secretions, however, can be seen in nonspecific URI and, in the absence of other clinical features, are poor predictors of bacterial infection. Antibiotics have no role in the treatment of uncomplicated nonspecific URI, and their misuse facilitates the emergence of antimicrobial resistance; in healthy volunteers, a single course of a commonly prescribed antibiotic like azithromycin can result in macrolide resistance in oral streptococci many months later. 
In the absence of clinical evidence of bacterial infection, treatment remains entirely symptom based, with use of decongestants and nonsteroidal anti-inflammatory drugs. Clinical trials of zinc, vitamin C, echinacea, and other alternative remedies have revealed no consistent benefit in the treatment of nonspecific URI. Rhinosinusitis refers to an inflammatory condition involving the nasal sinuses. Although most cases of sinusitis involve more than one sinus, the maxillary sinus is most commonly involved; next, in order of frequency, are the ethmoid, frontal, and sphenoid sinuses. Each sinus is lined with a respiratory epithelium that produces mucus, which is transported out by ciliary action through the sinus ostium and into the nasal cavity. Normally, mucus does not accumulate in the sinuses, which remain mostly sterile despite their adjacency to the bacterium-filled nasal passages. When the sinus ostia are obstructed or when ciliary clearance is impaired or absent, the secretions can be retained, producing the typical signs and symptoms of sinusitis. As these secretions accumulate with obstruction, they become more susceptible to infection with a variety of pathogens, including viruses, bacteria, and fungi. Sinusitis affects a tremendous proportion of the population, accounts for millions of visits to primary care physicians each year, and is the fifth leading diagnosis for which antibiotics are prescribed. It typically is classified by duration of illness (acute vs. chronic); by etiology (infectious vs. noninfectious); and, when infectious, by the offending pathogen type (viral, bacterial, or fungal). Acute rhinosinusitis—defined as sinusitis of <4 weeks’ duration—constitutes the vast majority of sinusitis cases. Most cases are diagnosed in the ambulatory care setting and occur primarily as a consequence of a preceding viral URI. Differentiating acute bacterial from viral sinusitis on clinical grounds is difficult. 
Therefore, it is perhaps not surprising that antibiotics are prescribed frequently (in 85–98% of all cases) for this condition. Etiology The ostial obstruction in rhinosinusitis can arise from both infectious and noninfectious causes. Noninfectious etiologies include allergic rhinitis (with either mucosal edema or polyp obstruction), barotrauma (e.g., from deep-sea diving or air travel), and exposure to chemical irritants. Obstruction can also occur with nasal and sinus tumors (e.g., squamous cell carcinoma) or granulomatous diseases (e.g., granulomatosis with polyangiitis, rhinoscleroma), and conditions leading to altered mucus content (e.g., cystic fibrosis) can cause sinusitis through impaired mucus clearance. In ICUs, nasotracheal intubation and nasogastric tubes are major risk factors for nosocomial sinusitis. Viral rhinosinusitis is far more common than bacterial sinusitis, although relatively few studies have sampled sinus aspirates for the presence of different viruses. In the studies that have done so, the viruses most commonly isolated—both alone and with bacteria—have been rhinovirus, parainfluenza virus, and influenza virus. Bacterial causes of sinusitis have been better described. Among community-acquired cases, S. pneumoniae and nontypable Haemophilus influenzae are the most common pathogens, accounting for 50–60% of cases. Moraxella catarrhalis causes disease in a significant percentage (20%) of children but a lesser percentage of adults. Other streptococcal species and Staphylococcus aureus cause only a small percentage of cases, although there is increasing concern about methicillin-resistant S. aureus (MRSA) as an emerging cause. It is difficult to assess whether a cultured bacterium represents a true infecting organism, an insufficiently deep sample (which would not be expected to be sterile), or—especially in the case of previous sinus surgeries—a colonizing organism. 
Anaerobes occasionally are found in association with infections of the roots of premolar teeth that spread to the adjacent maxillary sinuses. The role of atypical organisms like Chlamydia pneumoniae and Mycoplasma pneumoniae in the pathogenesis of acute sinusitis is unclear. Nosocomial cases commonly are associated with bacteria prevalent in the hospital environment, including S. aureus, Pseudomonas aeruginosa, Serratia marcescens, Klebsiella pneumoniae, and Enterobacter species. Often, these infections are polymicrobial and can involve organisms that are highly resistant to numerous antibiotics. Fungi also are established causes of sinusitis, although most acute cases are in immunocompromised patients and represent invasive, life-threatening infections. The best-known example is rhinocerebral mucormycosis caused by fungi of the order Mucorales, which includes Rhizopus, Rhizomucor, Mucor, Lichtheimia (formerly Mycocladus, formerly Absidia), and Cunninghamella (Chap. 242). These infections classically occur in diabetic patients with ketoacidosis but can also develop in transplant recipients, patients with hematologic malignancies, and patients receiving chronic glucocorticoid or deferoxamine therapy. Other hyaline molds, such as Aspergillus and Fusarium species, also are occasional causes of this disease. Clinical Manifestations Most cases of acute sinusitis present after or in conjunction with a viral URI, and it can be difficult to discriminate the clinical features of one from the other, with timing becoming important in diagnosis (see below). A large proportion of patients with colds have sinus inflammation, although, as previously stated, true bacterial sinusitis complicates only 0.2–2% of these viral infections. Common presenting symptoms of sinusitis include nasal drainage and congestion, facial pain or pressure, and headache. 
Thick, purulent or discolored nasal discharge is often thought to indicate bacterial sinusitis but also occurs early in viral infections such as the common cold and is not specific to bacterial infection. Other nonspecific manifestations include cough, sneezing, and fever. Tooth pain, most often involving the upper molars, as well as halitosis are occasionally associated with bacterial sinusitis. In acute sinusitis, sinus pain or pressure often localizes to the involved sinus (particularly the maxillary sinus) and can be worse when the patient bends over or is supine. Although rare, manifestations of advanced sphenoid or ethmoid sinus infection can be profound, including severe frontal or retroorbital pain radiating to the occiput, thrombosis of the cavernous sinus, and signs of orbital cellulitis. Acute focal sinusitis is uncommon but should be considered in patients with severe symptoms involving the maxillary sinus and fever, regardless of illness duration. Similarly, patients with advanced frontal sinusitis can present with a condition known as Pott's puffy tumor, with soft tissue swelling and pitting edema over the frontal bone from a communicating subperiosteal abscess. Life-threatening complications of sinusitis include meningitis, epidural abscess, and cerebral abscess. Patients with acute fungal rhinosinusitis (such as mucormycosis; Chap. 242) often present with symptoms related to pressure effects, particularly when the infection has spread to the orbits and cavernous sinus. Signs such as orbital swelling and cellulitis, proptosis, ptosis, and decreased extraocular movement are common, as is retro- or periorbital pain. Nasopharyngeal ulcerations, epistaxis, and headaches are also common, and involvement of cranial nerves V and VII has been described in more advanced cases. Bony erosion may be evident on examination or endoscopy. Often the patient does not appear seriously ill despite the rapidly progressive nature of these infections.
Patients with acute nosocomial sinusitis are often critically ill and thus do not manifest the typical clinical features of sinus disease. This diagnosis should be suspected, however, when hospitalized patients with appropriate risk factors (e.g., nasotracheal intubation) develop fever without another apparent cause. Diagnosis Distinguishing viral from bacterial rhinosinusitis in the ambulatory setting is usually difficult because of the relatively low sensitivity and specificity of the common clinical features. One clinical feature that has been used to help guide diagnostic and therapeutic decision-making is illness duration. Because acute bacterial sinusitis is uncommon in patients whose symptoms have lasted <10 days, expert panels now recommend reserving this diagnosis for patients with “persistent” symptoms (i.e., symptoms lasting >10 days in adults or >10–14 days in children) accompanied by the three cardinal signs of purulent nasal discharge, nasal obstruction, and facial pain (Table 44-1). Even among patients who meet these criteria, only 40–50% have true bacterial sinusitis. The use of CT or sinus radiography is not recommended for acute disease, particularly early in the course of illness (i.e., at <10 days) in light of the high prevalence of similar findings among patients with acute viral rhinosinusitis. In the evaluation of persistent, recurrent, or chronic sinusitis, CT of the sinuses becomes the radiographic study of choice. The clinical history and/or setting often can identify cases of acute anaerobic bacterial sinusitis, acute fungal sinusitis, or sinusitis from noninfectious causes (e.g., allergic rhinosinusitis). 
TABLE 44-1 Treatment of Acute Sinusitis in Adults

Indications: Moderate symptoms (e.g., nasal purulence/congestion or cough) for >10 d; or severe symptoms of any duration, including unilateral/focal facial swelling or tooth pain

Initial therapy: Amoxicillin, 500 mg PO tid; or amoxicillin/clavulanate, 500/125 mg PO tid
Penicillin allergy: Doxycycline, 100 mg PO bid; or clindamycin, 300 mg PO tid
Exposure to antibiotics within 30 d or >30% prevalence of penicillin-resistant Streptococcus pneumoniae: Amoxicillin/clavulanate (extended release), 2000/125 mg PO bid; or an antipneumococcal fluoroquinolone (e.g., moxifloxacin, 400 mg PO daily)
Recent treatment failure: Amoxicillin/clavulanate (extended release), 2000/125 mg PO bid; or an antipneumococcal fluoroquinolone (e.g., moxifloxacin, 400 mg PO daily)

a The duration of therapy is generally 7–10 days (with consideration of a 5-day course), with appropriate follow-up. Severe disease may warrant IV antibiotics and consideration of hospital admission.
b Although the evidence is not as strong, amoxicillin/clavulanate may be considered for initial use, particularly if local rates of penicillin resistance or β-lactamase production are high.

In the case of an immunocompromised patient with acute fungal sinus infection, immediate examination by an otolaryngologist is required. Biopsy specimens from involved areas should be examined by a pathologist for evidence of fungal hyphal elements and tissue invasion. Cases of suspected acute nosocomial sinusitis should be confirmed by sinus CT. Because therapy should target the offending organism, a sinus aspirate for culture and susceptibility testing should be obtained, whenever possible, before the initiation of antimicrobial therapy. Most patients with a clinical diagnosis of acute rhinosinusitis improve without antibiotic therapy.
The preferred initial approach in patients with mild to moderate symptoms of short duration is therapy aimed at symptom relief and facilitation of sinus drainage, such as with oral and topical decongestants, nasal saline lavage, and—at least in patients with a history of chronic sinusitis or allergies—nasal glucocorticoids. Newer studies have cast doubt on the role of antibiotics and nasal glucocorticoids in acute rhinosinusitis. In one notable double-blind, randomized, placebo-controlled trial, neither antibiotics nor topical glucocorticoids had a significant impact on cure in the study population of patients, the majority of whom had had symptoms for <7 days. Similarly, another high-profile randomized trial comparing antibiotics to placebo in patients with acute rhinosinusitis demonstrated no significant improvement in symptoms by the third day of therapy. Still, antibiotic therapy can be considered for adult patients whose condition does not improve after 10 days, and patients with more severe symptoms (regardless of duration) should be treated with antibiotics (Table 44-1). However, watchful waiting remains a viable option in many cases. Empirical antibiotic therapy for adults with community-acquired sinusitis should consist of the narrowest-spectrum agent active against the most common bacterial pathogens, including S. pneumoniae and H. influenzae—e.g., amoxicillin or amoxicillin/clavulanate (with the decision guided by local rates of β-lactamase-producing H. influenzae). No clinical trials support the use of broader-spectrum agents for routine cases of bacterial sinusitis, even in the current era of drug-resistant S. pneumoniae. For those patients who do not respond to initial antimicrobial therapy, sinus aspiration and/or lavage by an otolaryngologist should be considered. Antibiotic prophylaxis to prevent episodes of recurrent acute bacterial sinusitis is not recommended.
Surgical intervention and IV antibiotic administration usually are reserved for patients with severe disease or those with intracranial complications such as abscess and orbital involvement. Immunocompromised patients with acute invasive fungal sinusitis usually require extensive surgical debridement and treatment with IV antifungal agents active against fungal hyphal forms, such as amphotericin B. Specific therapy should be individualized according to the fungal species and its susceptibilities as well as the individual patient's characteristics. Treatment of nosocomial sinusitis should begin with broad-spectrum antibiotics to cover common and often resistant pathogens such as S. aureus and gram-negative bacilli. Therapy then should be tailored to the results of culture and susceptibility testing of sinus aspirates.

CHAPTER 44 Sore Throat, Earache, and Upper Respiratory Symptoms

Chronic sinusitis is characterized by symptoms of sinus inflammation lasting >12 weeks. This illness is most commonly associated with either bacteria or fungi, and clinical cure in most cases is very difficult. Many patients have undergone treatment with repeated courses of antibacterial agents and multiple sinus surgeries, increasing their risk of colonization with antibiotic-resistant pathogens and of surgical complications. These patients often have high rates of morbidity, sometimes over many years. In chronic bacterial sinusitis, infection is thought to be due to the impairment of mucociliary clearance from repeated infections rather than to persistent bacterial infection. The pathogenesis of this condition, however, is poorly understood. Although certain conditions (e.g., cystic fibrosis) can predispose patients to chronic bacterial sinusitis, most patients with chronic rhinosinusitis do not have obvious underlying conditions that result in the obstruction of sinus drainage, the impairment of ciliary action, or immune dysfunction.
Patients experience constant nasal congestion and sinus pressure, with intermittent periods of greater severity, which may persist for years. CT can be helpful in determining the extent of disease, detecting an underlying anatomic defect or obstructing process (e.g., a polyp), and assessing the response to therapy. Management should involve an otolaryngologist to conduct endoscopic examinations and obtain tissue samples for histologic examination and culture. An endoscopy-derived culture not only has a higher yield but also allows direct visualization of abnormal anatomy. Chronic fungal sinusitis is a disease of immunocompetent hosts and is usually noninvasive, although slowly progressive invasive disease is sometimes seen. Noninvasive disease, which typically is associated with hyaline molds such as Aspergillus species and dematiaceous molds such as Curvularia or Bipolaris species, can present as a number of different scenarios. In mild, indolent disease, which usually occurs in the setting of repeated failures of antibacterial therapy, only nonspecific mucosal changes may be seen on sinus CT. Although there is some controversy on this point, endoscopic surgery is usually curative in these cases, with no need for antifungal therapy. Another form of disease presents as long-standing, often unilateral symptoms and opacification of a single sinus on imaging studies as a result of a mycetoma (fungus ball) within the sinus. Treatment for this condition also is surgical, although systemic antifungal therapy may be warranted in the rare case in which bony erosion occurs. A third form of disease, known as allergic fungal sinusitis, is seen in patients with a history of nasal polyposis and asthma, who often have had multiple sinus surgeries. Patients with this condition produce a thick, eosinophil-laden mucus with the consistency of peanut butter that contains sparse fungal hyphae on histologic examination. These patients often present with pansinusitis.
Treatment of chronic bacterial sinusitis can be challenging and consists primarily of repeated culture-guided courses of antibiotics, sometimes for 3–4 weeks or longer at a time; administration of intranasal glucocorticoids; and mechanical irrigation of the sinus with sterile saline solution. When this management approach fails, sinus surgery may be indicated and sometimes provides significant, albeit short-term, alleviation. Treatment of chronic fungal sinusitis consists of surgical removal of impacted mucus. Recurrence, unfortunately, is common.

PART 2 Cardinal Manifestations and Presentation of Diseases

Infections of the ear and associated structures can involve both the middle and the external ear, including the skin, cartilage, periosteum, ear canal, and tympanic and mastoid cavities. Both viruses and bacteria are known causes of these infections, some of which result in significant morbidity if not treated appropriately. Infections involving the structures of the external ear are often difficult to differentiate from noninfectious inflammatory conditions with similar clinical manifestations. Clinicians should consider inflammatory disorders as possible causes of external ear irritation, particularly in the absence of local or regional adenopathy. Aside from the more salient causes of inflammation, such as trauma, insect bite, and overexposure to sunlight or extreme cold, the differential diagnosis should include less common conditions such as autoimmune disorders (e.g., lupus or relapsing polychondritis) and vasculitides (e.g., granulomatosis with polyangiitis). Auricular Cellulitis Auricular cellulitis is an infection of the skin overlying the external ear and typically follows minor local trauma. It presents as the typical signs and symptoms of cellulitis, with tenderness, erythema, swelling, and warmth of the external ear (particularly the lobule) but without apparent involvement of the ear canal or inner structures.
Treatment consists of warm compresses and oral antibiotics such as cephalexin or dicloxacillin that are active against typical skin and soft tissue pathogens (specifically, S. aureus and streptococci). IV antibiotics such as a first-generation cephalosporin (e.g., cefazolin) or a penicillinase-resistant penicillin (e.g., nafcillin) occasionally are needed for more severe cases, with consideration of MRSA if either risk factors or failure of therapy point to this organism. Perichondritis Perichondritis, an infection of the perichondrium of the auricular cartilage, typically follows local trauma (e.g., piercings, burns, or lacerations). Occasionally, when the infection spreads down to the cartilage of the pinna itself, patients may develop chondritis. The infection may closely resemble auricular cellulitis, with erythema, swelling, and extreme tenderness of the pinna, although the lobule is less often involved in perichondritis. The most common pathogens are P. aeruginosa and S. aureus, although other gram-negative and gram-positive organisms occasionally are involved. Treatment consists of systemic antibiotics active against both P. aeruginosa and S. aureus. An antipseudomonal penicillin (e.g., piperacillin) or a combination of a penicillinase-resistant penicillin and an antipseudomonal quinolone (e.g., nafcillin plus ciprofloxacin) is typically used. Incision and drainage may be helpful for culture and for resolution of infection, which often takes weeks. When perichondritis fails to respond to adequate antimicrobial therapy, clinicians should consider a noninfectious inflammatory etiology such as relapsing polychondritis. Otitis Externa The term otitis externa refers to a collection of diseases involving primarily the auditory meatus. Otitis externa usually results from a combination of heat and retained moisture, with desquamation and maceration of the epithelium of the outer ear canal. 
The disease exists in several forms: localized, diffuse, chronic, and invasive. All forms are predominantly bacterial in origin, with P. aeruginosa and S. aureus the most common pathogens. Acute localized otitis externa (furunculosis) can develop in the outer third of the ear canal, where skin overlies cartilage and hair follicles are numerous. As in furunculosis elsewhere on the body, S. aureus is the usual pathogen, and treatment typically consists of an oral antistaphylococcal penicillin (e.g., dicloxacillin or cephalexin), with incision and drainage in cases of abscess formation. Acute diffuse otitis externa is also known as swimmer’s ear, although it can develop in patients who have not recently been swimming. Heat, humidity, and the loss of protective cerumen lead to excessive moisture and elevation of the pH in the ear canal, which in turn lead to skin maceration and irritation. Infection may then follow; the predominant pathogen is P. aeruginosa, although other gram-negative and gram-positive organisms—and rarely yeasts—have been recovered from patients with this condition. The illness often starts with itching and progresses to severe pain, which is usually elicited by manipulation of the pinna or tragus. The onset of pain is generally accompanied by the development of an erythematous, swollen ear canal, often with scant white, clumpy discharge. Treatment consists of cleansing the canal to remove debris and enhance the activity of topical therapeutic agents—usually hypertonic saline or mixtures of alcohol and acetic acid. Inflammation can also be decreased by adding glucocorticoids to the treatment regimen or by using Burow’s solution (aluminum acetate in water). Antibiotics are most effective when given topically. Otic mixtures provide adequate pathogen coverage; these preparations usually combine neomycin with polymyxin, with or without glucocorticoids. 
Systemic antimicrobial agents typically are reserved for severe disease or infections in immunocompromised hosts. Chronic otitis externa is caused primarily by repeated local irritation, most commonly arising from persistent drainage from a chronic middle-ear infection. Other causes of repeated irritation, such as insertion of cotton swabs or other foreign objects into the ear canal, can lead to this condition, as can rare chronic infections such as syphilis, tuberculosis, and leprosy. Chronic otitis externa typically presents as erythematous, scaling dermatitis in which the predominant symptom is pruritus rather than pain; this condition must be differentiated from several others that produce a similar clinical picture, such as atopic dermatitis, seborrheic dermatitis, psoriasis, and dermatomycosis. Therapy consists of identifying and treating or removing the offending process, although successful resolution is frequently difficult. Invasive otitis externa, also known as malignant or necrotizing otitis externa, is an aggressive and potentially life-threatening disease that occurs predominantly in elderly diabetic patients and other immunocompromised persons. The disease begins in the external canal as a soft tissue infection that progresses slowly over weeks to months and often is difficult to distinguish from a severe case of chronic otitis externa because of the presence of purulent otorrhea and an erythematous swollen ear and external canal. Severe, deep-seated otalgia, frequently out of proportion to findings on examination, is often noted and can help differentiate invasive from chronic otitis externa. The characteristic finding on examination is granulation tissue in the posteroinferior wall of the external canal, near the junction of bone and cartilage. If left unchecked, the infection can migrate to the base of the skull (resulting in skull-base osteomyelitis) and onward to the meninges and brain, with a high mortality rate. 
Cranial nerve involvement is seen occasionally, with the facial nerve usually affected first and most often. Thrombosis of the sigmoid sinus can occur if the infection extends to the area. CT, which can reveal osseous erosion of the temporal bone and skull base, can be used to help determine the extent of disease, as can gallium and technetium-99 scintigraphy studies. P. aeruginosa is by far the most common offender, although S. aureus, Staphylococcus epidermidis, Aspergillus, Actinomyces, and some gram-negative bacteria also have been associated with this disease. In all cases, the external ear canal should be cleansed and a biopsy specimen of the granulation tissue within the canal (or of deeper tissues) obtained for culture of the offending organism. IV antibiotic therapy should be given for a prolonged course (6–8 weeks) and directed specifically toward the recovered pathogen. For P. aeruginosa, the regimen typically includes an antipseudomonal penicillin or cephalosporin (e.g., piperacillin or cefepime), often with an aminoglycoside or a fluoroquinolone, the latter of which can even be administered orally given its excellent bioavailability. In addition, antibiotic drops containing an agent active against Pseudomonas (e.g., ciprofloxacin) are usually prescribed and are combined with glucocorticoids to reduce inflammation. Cases of invasive Pseudomonas otitis externa recognized in the early stages can sometimes be treated with oral and otic fluoroquinolones alone, albeit with close follow-up. Extensive surgical debridement, once an important component of the treatment approach, is now rarely indicated. In necrotizing otitis externa, recurrence is documented up to 20% of the time. Aggressive glycemic control in diabetics is important not only for effective treatment but also for prevention of recurrence. The role of hyperbaric oxygen has not been clearly established.
Otitis media is an inflammatory condition of the middle ear that results from dysfunction of the eustachian tube in association with a number of illnesses, including URIs and chronic rhinosinusitis. The inflammatory response in these conditions leads to the development of a sterile transudate within the middle ear and mastoid cavities. Infection may occur if bacteria or viruses from the nasopharynx contaminate this fluid, producing an acute (or sometimes chronic) illness. Acute Otitis Media Acute otitis media results when pathogens from the nasopharynx are introduced into the inflammatory fluid collected in the middle ear (e.g., by nose blowing during a URI). Pathogenic proliferation in this space leads to the development of the typical signs and symptoms of acute middle-ear infection. The diagnosis of acute otitis media requires the demonstration of fluid in the middle ear (with tympanic membrane [TM] immobility) and the accompanying signs or symptoms of local or systemic illness (Table 44-2). ETIOLOGY Acute otitis media typically follows a viral URI. The causative viruses (most commonly RSV, influenza virus, rhinovirus, and enterovirus) can themselves cause subsequent acute otitis media; more often, they predispose the patient to bacterial otitis media. Studies using tympanocentesis have consistently found S. pneumoniae to be the most important bacterial cause, isolated in up to 35% of cases. H. influenzae (nontypable strains) and M. catarrhalis also are common bacterial causes of acute otitis media, and concern about MRSA as an emerging etiologic agent is increasing. Viruses, such as those mentioned above, have been recovered either alone or with bacteria in 17–40% of cases.

TABLE 44-2 (footnotes)
a Duration (unless otherwise specified): 10 days for patients <6 years old and patients with severe disease; 5–7 days (with consideration of observation only in previously healthy individuals with mild disease) for patients ≥6 years old.
b Failure to improve and/or clinical worsening after 48–72 h of observation or treatment.
Abbreviation: TM, tympanic membrane.
Source: American Academy of Pediatrics Subcommittee on Management of Acute Otitis Media, 2004.

CLINICAL MANIFESTATIONS Fluid in the middle ear is typically demonstrated or confirmed with pneumatic otoscopy. In the absence of fluid, the TM moves visibly with the application of positive and negative pressure, but this movement is dampened when fluid is present. With bacterial infection, the TM can also be erythematous, bulging, or retracted and occasionally can perforate spontaneously. The signs and symptoms accompanying infection can be local or systemic, including otalgia, otorrhea, diminished hearing, and fever. Erythema of the TM is often evident but is nonspecific, as it frequently is seen in association with inflammation of the upper respiratory mucosa. Other signs and symptoms occasionally reported include vertigo, nystagmus, and tinnitus. There has been considerable debate on the usefulness of antibiotics for the treatment of acute otitis media. A higher proportion of treated than untreated patients are free of illness 3–5 days after diagnosis. The difficulty of predicting which patients will benefit from antibiotic therapy has led to different approaches. In the Netherlands, for instance, physicians typically manage acute otitis media with initial observation, administering anti-inflammatory agents for aggressive pain management and reserving antibiotics for high-risk patients, patients with complicated disease, or patients whose condition does not improve after 48–72 h.
In contrast, many experts in the United States continue to recommend antibiotic therapy for children <6 months old in light of the higher frequency of secondary complications in this young and functionally immunocompromised population. However, observation without antimicrobial therapy is now the recommended option in the United States for acute otitis media in children >2 years of age and for mild to moderate disease without middle-ear effusion in children 6 months to 2 years of age. Treatment is typically indicated for patients <6 months old; for children 6 months to 2 years old who have middle-ear effusion and signs/symptoms of middle-ear inflammation; for all patients >2 years old who have bilateral disease, TM perforation, immunocompromise, or emesis; and for any patient who has severe symptoms, including a fever ≥39°C or moderate to severe otalgia (Table 44-2). Because most studies of the etiologic agents of acute otitis media consistently document similar pathogen profiles, therapy is generally empirical except in those few cases in which tympanocentesis is warranted—e.g., cases refractory to therapy and cases in patients who are severely ill or immunodeficient. Despite resistance to penicillin and amoxicillin in roughly one-quarter of S. pneumoniae isolates, one-third of H. influenzae isolates, and nearly all M. catarrhalis isolates, outcome studies continue to find that amoxicillin is as successful as any other agent, and it remains the drug of first choice in recommendations from multiple sources (Table 44-2). Therapy for uncomplicated acute otitis media typically is administered for 5–7 days to patients ≥6 years old; longer courses (e.g., 10 days) should be reserved for patients with severe disease, in whom short-course therapy may be inadequate. A switch in regimen is recommended if there is no clinical improvement by the third day of therapy, given the possibility of infection with a β-lactamase-producing strain of H. influenzae or M. catarrhalis or with a strain of penicillin-resistant S. pneumoniae. Decongestants and antihistamines are frequently used as adjunctive agents to reduce congestion and relieve obstruction of the eustachian tube, but clinical trials have yielded no significant evidence of benefit with either class of agents. Recurrent Acute Otitis Media Recurrent acute otitis media (more than three episodes within 6 months or four episodes within 12 months) generally is due to relapse or reinfection, although data indicate that the majority of early recurrences are new infections. In general, the same pathogens responsible for acute otitis media cause recurrent disease; even so, the recommended treatment consists of antibiotics active against β-lactamase-producing organisms. Antibiotic prophylaxis (e.g., with trimethoprim-sulfamethoxazole [TMP-SMX] or amoxicillin) can reduce recurrences in patients with recurrent acute otitis media by an average of one episode per year, but this benefit is small compared with the high likelihood of colonization with antibiotic-resistant pathogens. Other approaches, including placement of tympanostomy tubes, adenoidectomy, and tonsillectomy plus adenoidectomy, are of questionable overall value in light of the relatively small benefit compared with the potential for complications. Serous Otitis Media In serous otitis media (otitis media with effusion), fluid is present in the middle ear for an extended period in the absence of signs and symptoms of infection. In general, acute effusions are self-limited; most resolve in 2–4 weeks. In some cases, however (in particular after an episode of acute otitis media), effusions can persist for months. These chronic effusions are often associated with significant hearing loss in the affected ear. The great majority of cases of otitis media with effusion resolve spontaneously within 3 months without antibiotic therapy.
Antibiotic therapy or myringotomy with insertion of tympanostomy tubes typically is reserved for patients in whom bilateral effusion (1) has persisted for at least 3 months and (2) is associated with significant bilateral hearing loss. With this conservative approach and the application of strict diagnostic criteria for acute otitis media and otitis media with effusion, it is estimated that 6–8 million courses of antibiotics could be avoided each year in the United States. Chronic Otitis Media Chronic suppurative otitis media is characterized by persistent or recurrent purulent otorrhea in the setting of TM perforation. Usually, there is also some degree of conductive hearing loss. This condition can be categorized as active or inactive. Inactive disease is characterized by a central perforation of the TM, which allows drainage of purulent fluid from the middle ear. When the perforation is more peripheral, squamous epithelium from the auditory canal may invade the middle ear through the perforation, forming a mass of keratinaceous debris (cholesteatoma) at the site of invasion. This mass can enlarge and has the potential to erode bone and promote further infection, which can lead to meningitis, brain abscess, or paralysis of cranial nerve VII. Treatment of chronic active otitis media is surgical; mastoidectomy, myringoplasty, and tympanoplasty can be performed as outpatient surgical procedures, with an overall success rate of ~80%. Chronic inactive otitis media is more difficult to cure, usually requiring repeated courses of topical antibiotic drops during periods of drainage. Systemic antibiotics may offer better cure rates, but their role in the treatment of this condition remains unclear. Mastoiditis Acute mastoiditis was relatively common among children before the introduction of antibiotics. Because the mastoid air cells connect with the middle ear, the process of fluid collection and infection is usually the same in the mastoid as in the middle ear. 
Early and frequent treatment of acute otitis media is most likely the reason that the incidence of acute mastoiditis has declined to only 1.2–2.0 cases per 100,000 person-years in countries with high prescribing rates for acute otitis media. In countries such as the Netherlands, where antibiotics are used sparingly for acute otitis media, the incidence rate of acute mastoiditis is roughly twice that in countries like the United States. However, neighboring Denmark has a rate of acute mastoiditis similar to that in the Netherlands but an antibiotic-prescribing rate for acute otitis media more similar to that in the United States. In typical acute mastoiditis, purulent exudate collects in the mastoid air cells (Fig. 44-1), producing pressure that may result in erosion of the surrounding bone and formation of abscess-like cavities that are usually evident on CT. Patients typically present with pain, erythema, and swelling of the mastoid process along with displacement of the pinna, usually in conjunction with the typical signs and symptoms of acute middle-ear infection. Rarely, patients can develop severe complications if the infection tracks under the periosteum of the temporal bone to cause a subperiosteal abscess, erodes through the mastoid tip to cause a deep neck abscess, or extends posteriorly to cause septic thrombosis of the lateral sinus.

Figure 44-1 Acute mastoiditis. Axial CT image shows an acute fluid collection within the mastoid air cells on the left.

Purulent fluid should be cultured whenever possible to help guide antimicrobial therapy. Initial empirical therapy usually is directed against the typical organisms associated with acute otitis media, such as S. pneumoniae, H. influenzae, and M. catarrhalis. Patients with more severe or prolonged courses of illness should be treated for infection with S. aureus and gram-negative bacilli (including Pseudomonas). Broad empirical therapy should be narrowed once culture results become available.
Most patients can be treated conservatively with IV antibiotics; surgery (cortical mastoidectomy) is reserved for complicated cases and those in which conservative treatment has failed. Oropharyngeal infections range from mild, self-limited viral illnesses to serious, life-threatening bacterial infections. The most common presenting symptom is sore throat—one of the most common reasons for ambulatory care visits by both adults and children. Although sore throat is a symptom in many noninfectious illnesses as well, the overwhelming majority of patients with a new sore throat have acute pharyngitis of viral or bacterial etiology. Millions of visits to primary care providers each year are for sore throat; the majority of cases of acute pharyngitis are caused by typical respiratory viruses. The most important source of concern is infection with group A β-hemolytic Streptococcus (S. pyogenes), which is associated with acute glomerulonephritis and acute rheumatic fever. The risk of rheumatic fever can be reduced by timely penicillin therapy. Etiology A wide variety of organisms cause acute pharyngitis. The relative importance of the different pathogens can only be estimated, since a significant proportion of cases (~30%) have no identified cause. Together, respiratory viruses are the most common identifiable cause of acute pharyngitis, with rhinoviruses and coronaviruses accounting for large proportions of cases (~20% and at least 5%, respectively). Influenza virus, parainfluenza virus, and adenovirus also account for a measurable share of cases, with the former two more seasonal and the latter as part of the more clinically severe syndrome of pharyngoconjunctival fever. Other important but less common viral causes include herpes simplex virus (HSV) types 1 and 2, coxsackievirus A, cytomegalovirus (CMV), and Epstein-Barr virus (EBV). Acute HIV infection can present as acute pharyngitis and should be considered in at-risk populations.
Acute bacterial pharyngitis is typically caused by S. pyogenes, which accounts for ~5–15% of all cases of acute pharyngitis in adults; rates vary with the season and with utilization of the health care system. Group A streptococcal pharyngitis is primarily a disease of children 5–15 years of age; it is uncommon among children <3 years old, as is rheumatic fever. Streptococci of groups C and G account for a minority of cases, although these serogroups are nonrheumatogenic. Fusobacterium necrophorum has been increasingly recognized as a cause of pharyngitis in adolescents and young adults and is isolated nearly as often as group A streptococci. This organism is important because of the rare but life-threatening Lemierre’s disease, which is generally associated with F. necrophorum and is usually preceded by pharyngitis (see “Oral Infections,” below). The remaining bacterial causes of acute pharyngitis are seen infrequently (<1% of cases each) but should be considered in appropriate exposure groups because of the severity of illness if left untreated; these etiologic agents include Neisseria gonorrhoeae, Corynebacterium diphtheriae, Corynebacterium ulcerans, Yersinia enterocolitica, and Treponema pallidum (in secondary syphilis). Anaerobic bacteria also can cause acute pharyngitis (Vincent’s angina) and can contribute to more serious polymicrobial infections, such as peritonsillar or retropharyngeal abscesses (see below). Atypical organisms such as M. pneumoniae and C. pneumoniae have been recovered from patients with acute pharyngitis; whether these agents are commensals or causes of acute infection is debatable. Clinical Manifestations Although the signs and symptoms accompanying acute pharyngitis are not reliable predictors of the etiologic agent, the clinical presentation occasionally suggests one etiology over another. 
Acute pharyngitis due to respiratory viruses such as rhinovirus or coronavirus usually is not severe and typically is associated with a constellation of coryzal symptoms better characterized as nonspecific URI. Findings on physical examination are uncommon; fever is rare, and tender cervical adenopathy and pharyngeal exudates are not seen. In contrast, acute pharyngitis from influenza virus can be severe and is much more likely to be associated with fever as well as with myalgias, headache, and cough. The presentation of pharyngoconjunctival fever due to adenovirus infection is similar. Since pharyngeal exudate may be present on examination, this condition can be difficult to differentiate from streptococcal pharyngitis. However, adenoviral pharyngitis is distinguished by the presence of conjunctivitis in one-third to one-half of patients. Acute pharyngitis from primary HSV infection can also mimic streptococcal pharyngitis in some cases, with pharyngeal inflammation and exudate, but the presence of vesicles and shallow ulcers on the palate can help differentiate the two diseases. This HSV syndrome is distinct from pharyngitis caused by coxsackievirus (herpangina), which is associated with small vesicles that develop on the soft palate and uvula and then rupture to form shallow white ulcers. Acute exudative pharyngitis coupled with fever, fatigue, generalized lymphadenopathy, and (on occasion) splenomegaly is characteristic of infectious mononucleosis due to EBV or CMV. Acute primary infection with HIV is frequently associated with fever and acute pharyngitis as well as with myalgias, arthralgias, malaise, and occasionally a nonpruritic maculopapular rash, which may be followed by lymphadenopathy and mucosal ulcerations without exudate. 
The clinical features of acute pharyngitis caused by streptococci of groups A, C, and G are similar, ranging from a relatively mild illness without many accompanying symptoms to clinically severe cases with profound pharyngeal pain, fever, chills, and abdominal pain.

CHAPTER 44 Sore Throat, Earache, and Upper Respiratory Symptoms

A hyperemic pharyngeal membrane with tonsillar hypertrophy and exudate is usually seen, along with tender anterior cervical adenopathy. Coryzal manifestations, including cough, are typically absent; when present, they suggest a viral etiology. Strains of S. pyogenes that generate erythrogenic toxin can also produce scarlet fever characterized by an erythematous rash and strawberry tongue. The other types of acute bacterial pharyngitis (e.g., gonococcal, diphtherial, and yersinial) often present as exudative pharyngitis with or without other clinical features. Their etiologies are often suggested only by the clinical history. Diagnosis The primary goal of diagnostic testing is to separate acute streptococcal pharyngitis from pharyngitis of other etiologies (particularly viral) so that antibiotics can be prescribed more efficiently for patients in whom they may be beneficial. The most appropriate standard for the diagnosis of streptococcal pharyngitis, however, has not been established definitively. Throat swab culture is generally regarded as the most appropriate but cannot distinguish between infection and colonization and requires 24–48 h to yield results that vary with technique and culture conditions. Rapid antigen-detection tests offer good specificity (>90%) but lower sensitivity when implemented in routine practice. The sensitivity has also been shown to vary across the clinical spectrum of disease (65–90%). Several clinical prediction systems (Fig. 44-2) can increase the sensitivity of rapid antigen-detection tests to >90% in controlled settings.
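The trade-off between the reported specificity (>90%) and the variable sensitivity (65–90%) of rapid antigen-detection tests can be made concrete with a standard Bayes' rule calculation of predictive values. The numbers below are illustrative assumptions (a mid-range sensitivity and an arbitrary 10% pretest prevalence), not figures from the text.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics
    via Bayes' rule; all inputs are fractions between 0 and 1."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Example: assumed mid-range sensitivity 0.80, specificity 0.95, and an
# assumed 10% prevalence of streptococcal pharyngitis among tested adults.
ppv, npv = predictive_values(0.80, 0.95, 0.10)
```

Under these assumed inputs the negative predictive value is high, which is consistent with the recommendation (below) that negative rapid tests in adults need not be confirmed by culture.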
PART 2 Cardinal Manifestations and Presentation of Diseases

Figure 44-2 Algorithm for the diagnosis and treatment of acute pharyngitis.
• Symptoms consistent with viral URI? If yes: no streptococcal testing; symptomatic management.
• If no: risk factors for HIV or gonorrhea? If yes: test accordingly.
• If no: group A strep RADT or throat culture.
• Positive: penicillin allergy? If no: penicillin G 1.2 million units IM × 1, or penicillin VK 250 mg orally QID or 500 mg orally BID, or amoxicillin 500 mg orally BID. If yes: cephalexin 500 mg orally BID or TID (only if non-anaphylactic penicillin allergy), or azithromycin† 500 mg orally QD × 5 days, or clindamycin 300 mg orally TID.
• Negative*: symptomatic management.
NOTE: All treatment durations are for 10 days with appropriate follow-up, unless otherwise specified.
*Confirmation of a negative rapid antigen-detection test by a throat culture is not required in adults.
†Macrolides do not treat F. necrophorum, a cause of pharyngitis in young adults (see text).
Abbreviations: URI, upper respiratory infection; RADT, rapid antigen-detection test.

Since the sensitivities achieved in routine clinical practice are often lower, several medical and professional societies continue to recommend that all negative rapid antigen-detection tests in children be confirmed by a throat culture to limit transmission and complications of illness caused by group A streptococci. The Centers for Disease Control and Prevention, the Infectious Diseases Society of America, and the American Academy of Family Physicians do not recommend backup culture when adults have negative results from a highly sensitive rapid antigen-detection test, however, because of the lower prevalence and smaller benefit in this age group. Cultures and rapid diagnostic tests for other causes of acute pharyngitis, such as influenza virus, adenovirus, HSV, EBV, CMV, and M. pneumoniae, are available in many locations and can be used when suspected.
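The branching logic of the Fig. 44-2 algorithm can be summarized in a short decision function. This is a sketch for illustration only: the function and parameter names are invented, and the returned strings condense the regimens shown in the figure rather than serving as prescribing guidance.

```python
def pharyngitis_plan(viral_uri_symptoms, hiv_or_gc_risk,
                     radt_positive, penicillin_allergy):
    """Sketch of the Fig. 44-2 branching logic; returns a summary string."""
    if viral_uri_symptoms:
        # Viral presentation: no streptococcal testing indicated.
        return "no streptococcal testing; symptomatic management"
    if hiv_or_gc_risk:
        return "test accordingly for HIV/gonorrhea"
    # Otherwise perform a group A strep RADT or throat culture.
    if not radt_positive:
        # In adults, a negative RADT does not require culture confirmation.
        return "symptomatic management"
    if penicillin_allergy:
        return ("cephalexin (non-anaphylactic allergy only), "
                "azithromycin x5 days, or clindamycin")
    return "penicillin G IM x1, penicillin VK, or amoxicillin"
```

For example, a patient without coryzal symptoms or risk factors and with a positive RADT falls into the penicillin (or, if allergic, alternative-agent) branch.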
The diagnosis of acute EBV infection depends primarily on the detection of antibodies to the virus with a heterophile agglutination assay (monospot slide test) or enzyme-linked immunosorbent assay. Testing for HIV RNA or antigen (p24) should be performed when acute primary HIV infection is suspected. If other bacterial causes are suspected (particularly N. gonorrhoeae, C. diphtheriae, or Y. enterocolitica), specific cultures should be requested since these organisms may be missed on routine throat swab culture. Antibiotic treatment of pharyngitis due to S. pyogenes confers numerous benefits, including a decrease in the risk of rheumatic fever, the primary focus of treatment. The magnitude of this benefit is fairly small, since rheumatic fever is now a rare disease, even among untreated patients. Nevertheless, when therapy is started within 48 h of illness onset, symptom duration is decreased modestly. An additional benefit of therapy is the potential to reduce the transmission of streptococcal pharyngitis, particularly in areas of overcrowding or close contact. Antibiotic therapy for acute pharyngitis is therefore recommended in cases in which S. pyogenes is confirmed as the etiologic agent by rapid antigen-detection test or throat swab culture. Otherwise, antibiotics should be given in routine cases only when another bacterial cause has been identified. Effective therapy for streptococcal pharyngitis consists of either a single dose of IM benzathine penicillin or a full 10-day course of oral penicillin (Fig. 44-2). Azithromycin can be used in place of penicillin, although resistance to azithromycin among S. pyogenes strains in some parts of the world (particularly Europe) can prohibit the use of this drug. Newer (and more expensive) antibiotics also are active against streptococci but offer no greater efficacy than the agents mentioned above. Testing for cure is unnecessary and may reveal only chronic colonization. 
There is no evidence to support antibiotic treatment of group C or G streptococcal pharyngitis or pharyngitis in which mycoplasmas or chlamydiae have been recovered. Cultures can be of benefit because F. necrophorum, an increasingly common cause of bacterial pharyngitis in young adults, is not covered by macrolide therapy. Long-term penicillin prophylaxis (benzathine penicillin G, 1.2 million units IM every 3–4 weeks; or penicillin VK, 250 mg PO bid) is indicated for patients at risk of recurrent rheumatic fever. Treatment of viral pharyngitis is entirely symptom based except in infection with influenza virus or HSV. For influenza, the armamentarium includes the adamantanes amantadine and rimantadine and the neuraminidase inhibitors oseltamivir and zanamivir. Administration of all these agents needs to be started within 48 h of symptom onset to reduce illness duration meaningfully. Among these agents, only oseltamivir and zanamivir are active against both influenza A and influenza B and therefore can be used when local patterns of infection and antiviral resistance are unknown. Oropharyngeal HSV infection sometimes responds to treatment with antiviral agents such as acyclovir, although these drugs are often reserved for immunosuppressed patients. Complications Although rheumatic fever is the best-known complication of acute streptococcal pharyngitis, the risk of its following acute infection remains quite low. Other complications include acute glomerulonephritis and numerous suppurative conditions, such as peritonsillar abscess (quinsy), otitis media, mastoiditis, sinusitis, bacteremia, and pneumonia—all of which occur at low rates. Although antibiotic treatment of acute streptococcal pharyngitis can prevent the development of rheumatic fever, there is no evidence that it can prevent acute glomerulonephritis.
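The two constraints on influenza antivirals noted above (therapy must begin within 48 h of symptom onset, and only the neuraminidase inhibitors cover both influenza A and B) can be captured in a small helper. The function name, parameter names, and return convention are illustrative assumptions.

```python
def influenza_antiviral_options(hours_since_onset, known_susceptible_influenza_a):
    """Sketch of the selection constraints described in the text:
    start within 48 h of onset; only the neuraminidase inhibitors
    (oseltamivir, zanamivir) cover both influenza A and B."""
    if hours_since_onset > 48:
        # Too late for a meaningful reduction in illness duration.
        return []
    # Neuraminidase inhibitors are active against influenza A and B.
    options = ["oseltamivir", "zanamivir"]
    if known_susceptible_influenza_a:
        # Adamantanes are active only against influenza A
        # (and only against susceptible strains).
        options += ["amantadine", "rimantadine"]
    return options
```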
Some evidence supports antibiotic use to prevent the suppurative complications of streptococcal pharyngitis, particularly peritonsillar abscess, which can also involve oral anaerobes such as Fusobacterium. Abscesses usually are accompanied by severe pharyngeal pain, dysphagia, fever, and dehydration; in addition, medial displacement of the tonsil and lateral displacement of the uvula are often evident on examination. Although early use of IV antibiotics (e.g., clindamycin, penicillin G with metronidazole) may obviate the need for surgical drainage in some cases, treatment typically involves needle aspiration or incision and drainage. Oral Infections Aside from periodontal disease such as gingivitis, infections of the oral cavity most commonly involve HSV or Candida species. In addition to causing painful cold sores on the lips, HSV can infect the tongue and buccal mucosa, causing the formation of irritating vesicles. Although topical antiviral agents (e.g., acyclovir and penciclovir) can be used externally for cold sores, oral or IV acyclovir is often needed for primary infections, extensive oral infections, and infections in immunocompromised patients. Oropharyngeal candidiasis (thrush) is caused by a variety of Candida species, most often C. albicans. Thrush occurs predominantly in neonates, immunocompromised patients (especially those with AIDS), and recipients of prolonged antibiotic or glucocorticoid therapy. In addition to sore throat, patients often report a burning tongue, and physical examination reveals friable white or gray plaques on the gingiva, tongue, and oral mucosa. Treatment, which usually consists of an oral antifungal suspension (nystatin or clotrimazole) or oral fluconazole, is typically successful.
In the uncommon cases of fluconazole-refractory thrush that are seen in some patients with HIV/AIDS, other therapeutic options include oral formulations of itraconazole or voriconazole as well as an IV echinocandin (caspofungin, micafungin, or anidulafungin) or amphotericin B deoxycholate, if needed. In these cases, therapy based on culture and susceptibility test results is ideal. Vincent’s angina, also known as acute necrotizing ulcerative gingivitis or trench mouth, is a unique and dramatic form of gingivitis characterized by painful, inflamed gingiva with ulcerations of the interdental papillae that bleed easily. Since oral anaerobes are the cause, patients typically have halitosis and frequently present with fever, malaise, and lymphadenopathy. Treatment consists of debridement and oral administration of penicillin plus metronidazole, with clindamycin or doxycycline alone as an alternative. Ludwig’s angina is a rapidly progressive, potentially fulminant form of cellulitis that involves the bilateral sublingual and submandibular spaces and that typically originates from an infected or recently extracted tooth, most commonly the lower second and third molars. Improved dental care has reduced the incidence of this disorder substantially. Infection in these areas leads to dysphagia, odynophagia, and “woody” edema in the sublingual region, forcing the tongue up and back with the potential for airway obstruction. Fever, dysarthria, and drooling also may be noted, and patients may speak in a “hot potato” voice. Intubation or tracheostomy may be necessary to secure the airway, as asphyxiation is the most common cause of death. Patients should be monitored closely and treated promptly with IV antibiotics directed against streptococci and oral anaerobes. Recommended agents include ampicillin/sulbactam, clindamycin, or high-dose penicillin plus metronidazole.
Postanginal septicemia (Lemierre’s disease) is a rare anaerobic oropharyngeal infection caused predominantly by F. necrophorum. The illness typically starts as a sore throat (most commonly in adolescents and young adults), which may present as exudative tonsillitis or peritonsillar abscess. Infection of the deep pharyngeal tissue allows organisms to drain into the lateral pharyngeal space, which contains the carotid artery and internal jugular vein. Septic thrombophlebitis of the internal jugular vein can result, with associated pain, dysphagia, and neck swelling and stiffness. Sepsis usually occurs 3–10 days after the onset of sore throat and is often coupled with metastatic infection to the lung and other distant sites. Occasionally, the infection can extend along the carotid sheath and into the posterior mediastinum, resulting in mediastinitis, or it can erode into the carotid artery, with the early sign of repeated small bleeds into the mouth. The mortality rate from these invasive infections can be as high as 50%. Treatment consists of IV antibiotics (clindamycin or ampicillin/sulbactam) and surgical drainage of any purulent collections. The concomitant use of anticoagulants to prevent embolization remains controversial and is sometimes advised, with careful consideration of both the risks and the benefits.

Laryngitis is defined as any inflammatory process involving the larynx and can be caused by a variety of infectious and noninfectious processes. The vast majority of laryngitis cases seen in clinical practice in developed countries are acute. Acute laryngitis is a common syndrome caused predominantly by the same viruses responsible for many other URIs. In fact, most cases of acute laryngitis occur in the setting of a viral URI.
Etiology Nearly all major respiratory viruses have been implicated in acute viral laryngitis, including rhinovirus, influenza virus, parainfluenza virus, adenovirus, coxsackievirus, coronavirus, and RSV. Acute laryngitis can also be associated with acute bacterial respiratory infections such as those caused by group A Streptococcus or C. diphtheriae (although diphtheria has been virtually eliminated in the United States). Another bacterial pathogen thought to play a role (albeit unclear) in the pathogenesis of acute laryngitis is M. catarrhalis, which has been recovered from nasopharyngeal culture in a significant percentage of cases. Chronic laryngitis of infectious etiology is much less common in developed than in developing countries. Laryngitis due to Mycobacterium tuberculosis is often difficult to distinguish from laryngeal cancer, in part because of the frequent absence of signs, symptoms, and radiographic findings typical of pulmonary disease. Histoplasma and Blastomyces may cause laryngitis, often as a complication of systemic infection. Candida species can cause laryngitis as well, often in association with thrush or esophagitis and particularly in immunosuppressed patients. Rare cases of chronic laryngitis are due to Coccidioides and Cryptococcus. Clinical Manifestations Laryngitis is characterized by hoarseness and also can be associated with reduced vocal pitch or aphonia. As acute laryngitis is caused predominantly by respiratory viruses, these symptoms usually occur in association with other symptoms and signs of URI, including rhinorrhea, nasal congestion, cough, and sore throat. Direct laryngoscopy often reveals diffuse laryngeal erythema and edema, along with vascular engorgement of the vocal folds. In addition, chronic disease (e.g., tuberculous laryngitis) often includes mucosal nodules and ulcerations visible on laryngoscopy; these lesions are sometimes mistaken for laryngeal cancer. 
Acute laryngitis is usually treated with humidification and voice rest alone. Antibiotics are not recommended except when group A Streptococcus is cultured, in which case penicillin is the drug of choice. The choice of therapy for chronic laryngitis depends on the pathogen, whose identification usually requires biopsy with culture. Patients with laryngeal tuberculosis are highly contagious because of the large number of organisms that are easily aerosolized. These patients should be managed in the same way as patients with active pulmonary disease. The term croup actually denotes a group of diseases collectively referred to as “croup syndrome,” all of which are acute and predominantly viral respiratory illnesses characterized by marked swelling of the subglottic region of the larynx. Croup primarily affects children <6 years old. For a detailed discussion of this entity, the reader should consult a textbook of pediatric medicine. Acute epiglottitis (supraglottitis) is an acute, rapidly progressive form of cellulitis of the epiglottis and adjacent structures that can result in complete—and potentially fatal—airway obstruction in both children and adults. Before the widespread use of H. influenzae type b (Hib) vaccine, this entity was much more common among children, with a peak incidence at ~3.5 years of age. In some countries, mass vaccination against Hib has reduced the annual incidence of acute epiglottitis in children by >90%; in contrast, the annual incidence in adults has changed little since the introduction of Hib vaccine. Because of the danger of airway obstruction, acute epiglottitis constitutes a medical emergency, particularly in children, and prompt diagnosis and airway protection are of the utmost importance. Etiology After the introduction of the Hib vaccine in the mid-1980s, disease incidence among children in the United States declined dramatically. 
Nevertheless, lack of vaccination or vaccine failure has meant that many pediatric cases seen today are still due to Hib. In adults and (more recently) in children, a variety of other bacterial pathogens have been associated with epiglottitis, the most common being group A Streptococcus. Other pathogens—seen less frequently—include S. pneumoniae, Haemophilus parainfluenzae, and S. aureus (including MRSA). Viruses have not been established as causes of acute epiglottitis. Clinical Manifestations and Diagnosis Epiglottitis typically presents more acutely in young children than in adolescents or adults. On presentation, most children have had symptoms for <24 h, including high fever, severe sore throat, tachycardia, systemic toxicity, and (in many cases) drooling while sitting forward. Symptoms and signs of respiratory obstruction also may be present and may progress rapidly. The somewhat milder illness in adolescents and adults often follows 1–2 days of severe sore throat and is commonly accompanied by dyspnea, drooling, and stridor. Physical examination of patients with acute epiglottitis may reveal moderate or severe respiratory distress, with inspiratory stridor and retractions of the chest wall. These findings diminish as the disease progresses and the patient tires. Conversely, oropharyngeal examination reveals infection that is much less severe than would be predicted from the symptoms—a finding that should alert the clinician to a cause of symptoms and obstruction that lies beyond the tonsils. The diagnosis often is made on clinical grounds, although direct fiberoptic laryngoscopy is frequently performed in a controlled environment (e.g., an operating room) to visualize and culture the typical edematous “cherry-red” epiglottis and facilitate placement of an endotracheal tube.
Direct visualization in an examination room (i.e., with a tongue blade and indirect laryngoscopy) is not recommended because of the risk of immediate laryngospasm and complete airway obstruction. Lateral neck radiographs and laboratory tests can assist in the diagnosis but may delay the critical securing of the airway and cause the patient to be moved or repositioned more than is necessary, thereby increasing the risk of further airway compromise. Neck radiographs typically reveal an enlarged edematous epiglottis (the “thumbprint sign,” Fig. 44-3), usually with a dilated hypopharynx and normal subglottic structures. Laboratory tests characteristically document mild to moderate leukocytosis with a predominance of neutrophils. Blood cultures are positive in a significant proportion of cases. Security of the airway is always of primary concern in acute epiglottitis, even if the diagnosis is only suspected. Mere observation for signs of impending airway obstruction is not routinely recommended, particularly in children. Many adults have been managed with observation only since the illness is perceived to be milder in this age group, but some data suggest that this approach may be risky and probably should be reserved only for adult patients who have yet to develop dyspnea or stridor. Once the airway has been secured and specimens of blood and epiglottis tissue have been obtained for culture, treatment with IV antibiotics should be given to cover the most likely organisms, particularly H. influenzae. Because rates of ampicillin resistance in this organism have risen significantly in recent years, therapy with a β-lactam/β-lactamase inhibitor combination or a second- or third-generation cephalosporin is recommended. Typically, ampicillin/sulbactam, cefuroxime, cefotaxime, or ceftriaxone is given, with clindamycin and TMP-SMX reserved for patients allergic to β-lactams.
Antibiotic therapy should be continued for 7–10 days and should be tailored to the organism recovered in culture. If the household contacts of a patient with H. influenzae epiglottitis include an unvaccinated child under age 4, all members of the household (including the patient) should receive prophylactic rifampin for 4 days to eradicate carriage of H. influenzae.

Figure 44-3 Acute epiglottitis. In this lateral soft tissue radiograph of the neck, the arrow indicates the enlarged edematous epiglottis (the “thumbprint sign”).

Deep neck infections are usually extensions of infection from other primary sites, most often within the pharynx or oral cavity. Many of these infections are life threatening but are difficult to detect at early stages, when they may be more easily managed. Three of the most clinically relevant spaces in the neck are the submandibular (and sublingual) space, the lateral pharyngeal (or parapharyngeal) space, and the retropharyngeal space. These spaces communicate with one another and with other important structures in the head, neck, and thorax, providing pathogens with easy access to areas that include the mediastinum, carotid sheath, skull base, and meninges. Once infection reaches these sensitive areas, mortality rates can be as high as 20–50%. Infection of the submandibular and/or sublingual space typically originates from an infected or recently extracted lower tooth. The result is the severe, life-threatening infection referred to as Ludwig’s angina (see “Oral Infections,” above). Infection of the lateral pharyngeal (or parapharyngeal) space is most often a complication of common infections of the oral cavity and upper respiratory tract, including tonsillitis, peritonsillar abscess, pharyngitis, mastoiditis, and periodontal infection.
This space, situated deep in the lateral wall of the pharynx, contains a number of sensitive structures, including the carotid artery, internal jugular vein, cervical sympathetic chain, and portions of cranial nerves IX through XII; at its distal end, it opens into the posterior mediastinum. Involvement of this space with infection can therefore be rapidly fatal. Examination may reveal some tonsillar displacement, trismus, and neck rigidity, but swelling of the lateral pharyngeal wall can easily be missed. The diagnosis can be confirmed by CT. Treatment consists of airway management, operative drainage of fluid collections, and at least 10 days of IV therapy with an antibiotic active against streptococci and oral anaerobes (e.g., ampicillin/sulbactam). A particularly severe form of this infection involving the components of the carotid sheath (postanginal septicemia, Lemierre’s disease) is described above (see “Oral Infections”). Infection of the retropharyngeal space also can be extremely dangerous, as this space runs posterior to the pharynx from the skull base to the superior mediastinum. Infections in this space are more common among children <5 years old because of the presence of several small retropharyngeal lymph nodes that typically atrophy by age 4 years. Infection is usually a consequence of extension from another site of infection—most commonly, acute pharyngitis. Other sources include otitis media, tonsillitis, dental infections, Ludwig’s angina, and anterior extension of vertebral osteomyelitis. Retropharyngeal space infection also can follow penetrating trauma to the posterior pharynx (e.g., from an endoscopic procedure). Infections are commonly polymicrobial, involving a mixture of aerobes and anaerobes; group A β-hemolytic streptococci and S. aureus are the most common pathogens. M. tuberculosis was a common cause in the past but now is rarely involved in the United States.
Patients with retropharyngeal abscess typically present with sore throat, fever, dysphagia, and neck pain and are often drooling because of difficulty and pain with swallowing. Examination may reveal tender cervical adenopathy, neck swelling, and diffuse erythema and edema of the posterior pharynx as well as a bulge in the posterior pharyngeal wall that may not be obvious on routine inspection. A soft tissue mass is usually demonstrable by lateral neck radiography or CT. Because of the risk of airway obstruction, treatment begins with securing of the airway, followed by a combination of surgical drainage and IV antibiotic administration. Initial empirical therapy should cover streptococci, oral anaerobes, and S. aureus; ampicillin/sulbactam, clindamycin plus ceftriaxone, or meropenem is usually effective. Complications result primarily from extension to other areas (e.g., rupture into the posterior pharynx may lead to aspiration pneumonia and empyema). Extension may also occur to the lateral pharyngeal space and mediastinum, resulting in mediastinitis and pericarditis, or into nearby major blood vessels. All these events are associated with a high mortality rate.

CHAPTER 45
Oral Manifestations of Disease
Samuel C. Durso

As primary care physicians and consultants, internists are often asked to evaluate patients with disease of the oral soft tissues, teeth, and pharynx. Knowledge of the oral milieu and its unique structures is necessary to guide preventive services and recognize oral manifestations of local or systemic disease (Chap. 46e). Furthermore, internists frequently collaborate with dentists in the care of patients who have a variety of medical conditions that affect oral health or who undergo dental procedures that increase their risk of medical complications. Tooth formation begins during the sixth week of embryonic life and continues through 17 years of age.
Teeth start to develop in utero and continue to develop until after the tooth erupts. Normally, all 20 deciduous teeth have erupted by age 3 and have been shed by age 13. Permanent teeth, eventually totaling 32, begin to erupt by age 6 and have completely erupted by age 14, though third molars ("wisdom teeth") may erupt later. The erupted tooth consists of the visible crown covered with enamel and the root submerged below the gum line and covered with bonelike cementum. Dentin, a material that is denser than bone and exquisitely sensitive to pain, forms the majority of the tooth substance, surrounding a core of myxomatous pulp containing the vascular and nerve supply. The tooth is held firmly in the alveolar socket by the periodontium, supporting structures that consist of the gingivae, alveolar bone, cementum, and periodontal ligament. The periodontal ligament tenaciously binds the tooth's cementum to the alveolar bone. Above this ligament is a collar of attached gingiva just below the crown. A few millimeters of unattached or free gingiva (1–3 mm) overlap the base of the crown, forming a shallow sulcus along the gum-tooth margin. Dental Caries, Pulpal and Periapical Disease, and Complications Dental caries usually begin asymptomatically as a destructive infectious process of the enamel. Bacteria—principally Streptococcus mutans—colonize the organic buffering biofilm (plaque) on the tooth surface. If not removed by brushing or by the natural cleansing and antibacterial action of saliva, bacterial acids can demineralize the enamel. Fissures and pits on the occlusal surfaces are the most frequent sites of early decay. Surfaces between the teeth, adjacent to tooth restorations and exposed roots, are also vulnerable, particularly as individuals age. Over time, dental caries extend to the underlying dentin, leading to cavitation of the enamel. Without management, the caries will penetrate to the tooth pulp, producing acute pulpitis.
At this stage, when the pulp infection is limited, the tooth may become sensitive to percussion and to hot or cold, and pain resolves immediately when the irritating stimulus is removed. Should the infection spread throughout the pulp, irreversible pulpitis occurs, leading to pulp necrosis. At this later stage, pain can be severe and has a sharp or throbbing visceral quality that may be worse when the patient lies down. Once pulp necrosis is complete, pain may be constant or intermittent, but cold sensitivity is lost. Treatment of caries involves removal of the softened and infected hard tissue and restoration of the tooth structure with silver amalgam, glass ionomer, composite resin, or gold. Once irreversible pulpitis occurs, root canal therapy becomes necessary; removal of the contents of the pulp chamber and root canals is followed by thorough cleaning and filling with an inert material. Alternatively, the tooth may be extracted. Pulpal infection leads to periapical abscess formation, which can produce pain on chewing. If the infection is mild and chronic, a periapical granuloma or eventually a periapical cyst forms, either of which produces radiolucency at the root apex. When unchecked, a periapical abscess can erode into the alveolar bone, producing osteomyelitis; penetrate and drain through the gingivae, producing a parulis (gumboil); or track along deep fascial planes, producing virulent cellulitis (Ludwig’s angina) involving the submandibular space and floor of the mouth (Chap. 201). Elderly patients, patients with diabetes mellitus, and patients taking glucocorticoids may experience little or no pain or fever as these complications develop. Periodontal Disease Periodontal disease and dental caries are the primary causes of tooth loss. Like dental caries, chronic infection of the gingiva and anchoring structures of the tooth begins with formation of bacterial plaque. The process begins at the gum line. 
Plaque and calculus (calcified plaque) are preventable by appropriate daily oral hygiene, including periodic professional cleaning. Left undisturbed, chronic inflammation can ensue and produce hyperemia of the free and attached gingivae (gingivitis), which then typically bleed with brushing. If this issue is ignored, severe periodontitis can develop, leading to deepening of the physiologic sulcus and destruction of the periodontal ligament. Gingival pockets develop around the teeth. As the periodontium (including the supporting bone) is destroyed, the teeth loosen. A role for chronic inflammation due to chronic periodontal disease in promoting coronary heart disease and stroke has been proposed. Epidemiologic studies have demonstrated a moderate but significant association between chronic periodontal inflammation and atherogenesis, though a causal role remains unproven. PART 2 Cardinal Manifestations and Presentation of Diseases Acute and aggressive forms of periodontal disease are less common than the chronic forms described above. However, if the host is stressed or exposed to a new pathogen, rapidly progressive and destructive disease of the periodontal tissue can occur. A virulent example is acute necrotizing ulcerative gingivitis. Stress and poor oral hygiene are risk factors. The presentation includes sudden gingival inflammation, ulceration, bleeding, interdental gingival necrosis, and fetid halitosis. Localized juvenile periodontitis, which is seen in adolescents, is particularly destructive and appears to be associated with impaired neutrophil chemotaxis. AIDS-related periodontitis resembles acute necrotizing ulcerative gingivitis in some patients and a more destructive form of adult chronic periodontitis in others. It may also produce a gangrene-like destructive process of the oral soft tissues and bone that resembles noma, an infectious condition seen in severely malnourished children in developing nations. 
Prevention of Tooth Decay and Periodontal Infection Despite the reduced prevalences of dental caries and periodontal disease in the United States (due in large part to water fluoridation and improved dental care, respectively), both diseases constitute a major public health problem worldwide, particularly in certain groups. The internist should promote preventive dental care and hygiene as part of health maintenance. Populations at high risk for dental caries and periodontal disease include those with hyposalivation and/or xerostomia, diabetics, alcoholics, tobacco users, persons with Down syndrome, and those with gingival hyperplasia. Furthermore, patients lacking access to dental care (e.g., as a result of low socioeconomic status) and patients with a reduced ability to provide self-care (e.g., individuals with disabilities, nursing home residents, and persons with dementia or upper-extremity disability) suffer at a disproportionate rate. It is important to provide counseling regarding regular dental hygiene and professional cleaning, use of fluoride-containing toothpaste, professional fluoride treatments, and (for patients with limited dexterity) use of electric toothbrushes and also to instruct persons caring for those who are not capable of self-care. Cost, fear of dental care, and differences in language and culture create barriers that prevent some people from seeking preventive dental services. Developmental and Systemic Disease Affecting the Teeth and Periodontium In addition to posing cosmetic issues, malocclusion, the most common developmental oral problem, can interfere with mastication unless corrected through orthodontic and surgical techniques. Impacted third molars are common and can become infected or erupt into an insufficient space. Acquired prognathism due to acromegaly may also lead to malocclusion, as may deformity of the maxilla and mandible due to Paget’s disease of the bone. 
Delayed tooth eruption, a receding chin, and a protruding tongue are occasional features of cretinism and hypopituitarism. Congenital syphilis produces tapering, notched (Hutchinson's) incisors and finely nodular (mulberry) molar crowns. Enamel hypoplasia results in crown defects ranging from pits to deep fissures of primary or permanent teeth. Intrauterine infection (syphilis, rubella), vitamin deficiency (A, C, or D), disorders of calcium metabolism (malabsorption, vitamin D–resistant rickets, hypoparathyroidism), prematurity, high fever, and rare inherited defects (amelogenesis imperfecta) are all causes. Tetracycline, given in sufficiently high doses during the first 8 years of life, may produce enamel hypoplasia and discoloration. Exposure to endogenous pigments can discolor developing teeth; etiologies include erythroblastosis fetalis (green or bluish-black), congenital liver disease (green or yellow-brown), and porphyria (red or brown that fluoresces with ultraviolet light). Mottled enamel occurs if excessive fluoride is ingested during development. Worn enamel is seen with age, bruxism, or excessive acid exposure (e.g., chronic gastric reflux or bulimia). Celiac disease is associated with nonspecific enamel defects in children but not in adults. Total or partial tooth loss resulting from periodontitis is seen with cyclic neutropenia, Papillon-Lefèvre syndrome, Chédiak-Higashi syndrome, and leukemia. Rapid focal tooth loosening is most often due to infection, but rarer causes include Langerhans cell histiocytosis, Ewing's sarcoma, osteosarcoma, and Burkitt's lymphoma. Early loss of primary teeth is a feature of hypophosphatasia, a rare congenital error of metabolism. Pregnancy may produce gingivitis and localized pyogenic granulomas. Severe periodontal disease occurs in uncontrolled diabetes mellitus.
Gingival hyperplasia may be caused by phenytoin, calcium channel blockers (e.g., nifedipine), and cyclosporine, though excellent daily oral care can prevent or reduce its occurrence. Idiopathic familial gingival fibromatosis and several syndrome-related disorders cause similar conditions. Discontinuation of the medication may reverse the drug-induced form, though surgery may be needed to control both of the latter entities. Linear gingival erythema is variably seen in patients with advanced HIV infection and probably represents immune deficiency and decreased neutrophil activity. Diffuse or focal gingival swelling may be a feature of early or late acute myelomonocytic leukemia as well as of other lymphoproliferative disorders. A rare but pathognomonic sign of granulomatosis with polyangiitis is a red-purplish, granular gingivitis (strawberry gums). DISEASES OF THE ORAL MUCOSA Infections Most oral mucosal diseases involve microorganisms (Table 45-1). Pigmented Lesions See Table 45-2. Dermatologic Diseases See Tables 45-1, 45-2, and 45-3 and Chaps. 70–74. Diseases of the Tongue See Table 45-4. HIV Disease and AIDS See Tables 45-1, 45-2, 45-3, and 45-5; Chap. 226; and Fig. 218-3. Ulcers Ulceration is the most common oral mucosal lesion. Although there are many causes, the host and the pattern of lesions, including the presence of organ system features, narrow the differential diagnosis (Table 45-1). Most acute ulcers are painful and self-limited. Recurrent aphthous ulcers and herpes simplex account for the majority. Persistent and deep aphthous ulcers can be idiopathic or can accompany HIV/AIDS. Aphthous lesions are often the presenting symptom in Behçet's syndrome (Chap. 387). Similar-appearing, though less painful, lesions may occur in reactive arthritis, and aphthous ulcers are occasionally present during phases of discoid or systemic lupus erythematosus (Chap. 382). Aphthous-like ulcers are seen in Crohn's disease (Chap.
351), but, unlike the common aphthous variety, they may exhibit granulomatous inflammation on histologic examination. Recurrent aphthae are more prevalent in patients with celiac disease and have been reported to remit with elimination of gluten. Of major concern are chronic, relatively painless ulcers and mixed red/white patches (erythroplakia and leukoplakia) of >2 weeks' duration. Squamous cell carcinoma and premalignant dysplasia should be considered early and a diagnostic biopsy performed. This awareness and this procedure are critically important because early-stage malignancy is vastly more treatable than late-stage disease. High-risk sites include the lower lip, floor of the mouth, ventral and lateral tongue, and soft palate–tonsillar pillar complex. Significant risk factors for oral cancer in Western countries include sun exposure (lower lip), tobacco and alcohol use, and human papillomavirus infection. In India and some other Asian countries, smokeless tobacco mixed with betel nut, slaked lime, and spices is a common cause of oral cancer. Rarer causes of chronic oral ulcer, such as tuberculosis, fungal infection, granulomatosis with polyangiitis, and midline granuloma, may look identical to carcinoma. Making the correct diagnosis depends on recognizing other clinical features and performing a biopsy of the lesion. The syphilitic chancre is typically painless and therefore easily missed. Regional lymphadenopathy is invariably present. The syphilitic etiology is confirmed with appropriate bacterial and serologic tests. Disorders of mucosal fragility often produce painful oral ulcers that fail to heal within 2 weeks. Mucous membrane pemphigoid and pemphigus vulgaris are the major acquired disorders. While their clinical features are often distinctive, a biopsy or immunohistochemical examination should be performed to diagnose these entities and to distinguish them from lichen planus and drug reactions.
Hematologic and Nutritional Disease Internists are more likely to encounter patients with acquired, rather than congenital, bleeding disorders. Bleeding should stop 15 min after minor trauma and within an hour after tooth extraction if local pressure is applied. More prolonged bleeding, if not due to continued injury or rupture of a large vessel, should lead to investigation for a clotting abnormality. In addition to bleeding, petechiae and ecchymoses are prone to occur at the vibrating line between the soft and hard palates in patients with platelet dysfunction or thrombocytopenia. All forms of leukemia, but particularly acute myelomonocytic leukemia, can produce gingival bleeding, ulcers, and gingival enlargement. Oral ulcers are a feature of agranulocytosis, and ulcers and mucositis are often severe complications of chemotherapy and radiation therapy for hematologic and other malignancies. Plummer-Vinson syndrome (iron deficiency, angular stomatitis, glossitis, and dysphagia) raises the risk of oral squamous cell cancer and esophageal cancer at the postcricoidal tissue web. Atrophic papillae and a red, burning tongue may occur with pernicious anemia. Deficiencies in B-group vitamins produce many of these same symptoms as well as oral ulceration and cheilosis. Consequences of scurvy include swollen, bleeding gums; ulcers; and loosening of the teeth. Most, but not all, oral pain emanates from inflamed or injured tooth pulp or periodontal tissues. Nonodontogenic causes are often overlooked. In most instances, toothache is predictable and proportional to the stimulus applied, and an identifiable condition (e.g., caries, abscess) is found. Local anesthesia eliminates pain originating from dental or periodontal structures, but not referred pains. The most common nondental source of pain is myofascial pain referred from muscles of mastication, which become tender and ache with increased use. 
Many sufferers exhibit bruxism (grinding of the teeth) secondary to stress and anxiety. Temporomandibular joint disorder is closely related. It affects both sexes, with a higher prevalence among women. Features include pain, limited mandibular movement, and temporomandibular joint sounds. The etiologies are complex; malocclusion does not play the primary role once attributed to it. Osteoarthritis is a common cause of masticatory pain. Anti-inflammatory medication, jaw rest, soft foods, and heat provide relief. The temporomandibular joint is involved in 50% of patients with rheumatoid arthritis, and its involvement is usually a late feature of severe disease. Bilateral preauricular pain, particularly in the morning, limits range of motion. Migrainous neuralgia may be localized to the mouth. Episodes of pain and remission without an identifiable cause and a lack of relief with local anesthesia are important clues. Trigeminal neuralgia (tic douloureux) can involve the entire branch or part of the mandibular or maxillary branch of the fifth cranial nerve and can produce pain in one or a few teeth. Pain may occur spontaneously or may be triggered by touching the lip or gingiva, brushing the teeth, or chewing. Glossopharyngeal neuralgia produces similar acute neuropathic symptoms in the distribution of the ninth cranial nerve. Swallowing, sneezing, coughing, or pressure on the tragus of the ear triggers pain that is felt in the base of the tongue, pharynx, and soft palate and may be referred to the temporomandibular joint. Neuritis involving the maxillary and mandibular divisions of the trigeminal nerve (e.g., maxillary sinusitis, neuroma, and leukemic infiltrate) is distinguished from ordinary toothache by the neuropathic quality of the pain. Occasionally, phantom pain follows tooth extraction. Pain and hyperalgesia behind the ear and on the side of the face in the day or so before facial weakness develops often constitute the earliest symptom of Bell’s palsy. 
Likewise, similar symptoms may precede visible lesions of herpes zoster infecting the seventh nerve (Ramsay Hunt syndrome) or trigeminal nerve. Postherpetic neuralgia may follow either condition. Coronary ischemia may produce pain exclusively in the face and jaw; as in typical angina pectoris, this pain is usually reproducible with increased myocardial demand. Aching in several upper molar or premolar teeth that is unrelieved by anesthetizing the teeth may point to maxillary sinusitis.

Giant cell arteritis is notorious for producing headache, but it may also produce facial pain or sore throat without headache. Jaw and tongue claudication with chewing or talking is relatively common. Tongue infarction is rare. Patients with subacute thyroiditis often experience pain referred to the face or jaw before the tenderness of the thyroid gland and transient hyperthyroidism are appreciated.

"Burning mouth syndrome" (glossodynia) occurs in the absence of an identifiable cause (e.g., vitamin B12 deficiency, iron deficiency, diabetes mellitus, low-grade Candida infection, food sensitivity, or subtle xerostomia) and predominantly affects postmenopausal women. The etiology may be neuropathic. Clonazepam, α-lipoic acid, and cognitive behavioral therapy have benefited some patients. Some cases associated with an angiotensin-converting enzyme inhibitor have remitted when treatment with the drug was discontinued.

TABLE 45-1 VESICULAR, BULLOUS, OR ULCERATIVE LESIONS OF THE ORAL MUCOSA
Primary acute herpetic gingivostomatitis (HSV). Usual location: lip and oral mucosa (buccal, gingival, lingual mucosa). Clinical features: labial vesicles that rupture and crust, and intraoral vesicles that quickly ulcerate; extremely painful; acute gingivitis, fever, malaise, foul odor, and cervical lymphadenopathy; occurs primarily in infants, children, and young adults. Course: heals spontaneously in 10–14 days; unless secondarily infected, lesions lasting >3 weeks are not due to primary HSV infection.
Recurrent herpes simplex (labial and intraoral). Usual location: mucocutaneous junction of lip, perioral skin. Clinical features: eruption of groups of vesicles that may coalesce, then rupture and crust; painful to pressure or spicy foods. Course: lasts ∼1 week, but condition may be prolonged if secondarily infected; if severe, topical or oral antiviral treatment may reduce healing time.
Chickenpox (VZV). Clinical features: skin lesions may be accompanied by small vesicles on oral mucosa that rupture to form shallow ulcers; may coalesce to form large bullous lesions that ulcerate; mucosa may have generalized erythema. Course: heals spontaneously in ∼1 week; if severe, topical or oral antiviral treatment may reduce healing time.
Herpes zoster (VZV reactivation). Usual location: cheek, tongue, gingiva, or palate. Clinical features: unilateral vesicular eruptions and ulceration in linear pattern following sensory distribution of trigeminal nerve or one of its branches. Course: gradual healing without scarring unless secondarily infected; postherpetic neuralgia is common; oral acyclovir, famciclovir, or valacyclovir reduces healing time and postherpetic neuralgia.
Infectious mononucleosis (Epstein-Barr virus). Clinical features: fatigue, sore throat, malaise, fever, and cervical lymphadenopathy; numerous small ulcers usually appear several days before lymphadenopathy; gingival bleeding and multiple petechiae at junction of hard and soft palates. Course: oral lesions disappear during convalescence; no treatment is given, though glucocorticoids are indicated if tonsillar swelling compromises the airway.
Herpangina (coxsackievirus). Usual location: oral mucosa, pharynx, tongue. Clinical features: sudden onset of fever, sore throat, and oropharyngeal vesicles, usually in children <4 years old, during summer months; diffuse pharyngeal congestion and vesicles (1–2 mm), grayish-white surrounded by red areola; vesicles enlarge and ulcerate. Course: incubation period of 2–9 days; fever for 1–4 days; recovery uneventful.
Hand-foot-and-mouth disease. Usual location: oral mucosa, pharynx, palms, and soles. Clinical features: fever, malaise, headache with oropharyngeal vesicles that become painful, shallow ulcers; highly infectious; usually affects children under age 10.
Primary HIV infection. Usual location: gingiva, palate, and pharynx. Clinical features: acute gingivitis and oropharyngeal ulceration, associated with febrile illness resembling mononucleosis and including lymphadenopathy. Course: followed by HIV seroconversion, asymptomatic HIV infection, and usually ultimately by HIV disease.
Acute necrotizing ulcerative gingivitis. Clinical features: painful, bleeding gingiva characterized by necrosis and ulceration of gingival papillae and margins plus lymphadenopathy and foul breath. Course: debridement and diluted (1:3) peroxide lavage provide relief within 24 h; antibiotics in acutely ill patients; relapse may occur.
Primary syphilis (chancre). Clinical features: small papule developing rapidly into a large, painless ulcer with indurated border; unilateral lymphadenopathy; chancre and lymph nodes containing spirochetes; serologic tests positive by third to fourth weeks. Course: healing of chancre in 1–2 months, followed by secondary syphilis in 6–8 weeks.
Secondary syphilis. Clinical features: maculopapular lesions of oral mucosa, 5–10 mm in diameter with central ulceration covered by grayish membrane; eruptions occurring on various mucosal surfaces and skin, accompanied by fever, malaise, and sore throat. Course: lesions may persist from several weeks to a year.
Tertiary syphilis. Clinical features: gummatous infiltration of palate or tongue followed by ulceration and fibrosis; atrophy of tongue papillae produces characteristic bald tongue and glossitis. Course: gumma may destroy palate, causing complete perforation.
Congenital syphilis. Clinical features: gummatous involvement of palate, jaws, and facial bones; Hutchinson's incisors, mulberry molars, glossitis, mucous patches, and fissures at corner of mouth.
Gonorrhea. Clinical features: most pharyngeal infection is asymptomatic; may produce burning or itching sensation; oropharynx and tonsils may be ulcerated and erythematous; saliva viscous and fetid. Course: more difficult to eradicate than urogenital infection, though pharyngitis usually resolves with appropriate antimicrobial treatment.
Tuberculosis. Clinical features: painless, solitary, 1- to 5-cm, irregular ulcer covered with persistent exudate; ulcer has firm undermined border. Course: autoinoculation from pulmonary infection is usual; lesions resolve with appropriate antimicrobial therapy.

TABLE 45-1 VESICULAR, BULLOUS, OR ULCERATIVE LESIONS OF THE ORAL MUCOSA (CONTINUED)
Recurrent aphthous ulcers. Usual location: usually on nonkeratinized oral mucosa (buccal and labial mucosa, floor of mouth, soft palate, lateral and ventral tongue). Clinical features: single or clustered painful ulcers with surrounding erythematous border; lesions may be 1–2 mm in diameter in crops (herpetiform), 1–5 mm (minor), or 5–15 mm (major). Course: lesions heal in 1–2 weeks but may recur monthly or several times a year; protective barrier with benzocaine and topical glucocorticoids relieves symptoms; systemic glucocorticoids may be needed in severe cases.
Behçet's syndrome. Usual location: oral mucosa, eyes, genitalia, gut, and CNS. Clinical features: multiple aphthous ulcers in mouth; inflammatory ocular changes, ulcerative lesions on genitalia; inflammatory bowel disease and CNS disease. Course: oral lesions often first manifestation; persist several weeks and heal without scarring.
Traumatic ulcers. Usual location: anywhere on oral mucosa; dentures frequently responsible for ulcers in vestibule. Clinical features: localized, discrete ulcerated lesions with red border; produced by accidental biting of mucosa, penetration by foreign object, or chronic irritation by dentures. Course: lesions usually heal in 7–10 days when irritant is removed, unless secondarily infected.
Squamous cell carcinoma. Usual location: any area of mouth, most commonly on lower lip, lateral borders of tongue, and floor of mouth. Clinical features: red, white, or red and white ulcer with elevated or indurated border; failure to heal; pain not prominent in early lesions. Course: invades and destroys underlying tissues; frequently metastasizes to regional lymph nodes.
Acute myeloid leukemia (usually monocytic). Usual location: gingiva. Clinical features: gingival swelling and superficial ulceration followed by hyperplasia of gingiva with extensive necrosis and hemorrhage; deep ulcers may occur elsewhere on mucosa, complicated by secondary infection. Course: usually responds to systemic treatment of leukemia; occasionally requires local irradiation.
Lymphoma. Usual location: gingiva, tongue, palate, and tonsillar area. Clinical features: elevated, ulcerated area that may proliferate rapidly, giving appearance of traumatic inflammation. Course: fatal if untreated; may indicate underlying HIV infection.
Chemical or thermal burns. Usual location: any area in mouth. Clinical features: white slough due to contact with corrosive agents (e.g., aspirin, hot cheese) applied locally; removal of slough leaves raw, painful surface. Course: lesion heals in several weeks if not secondarily infected.
aSee Table 45-3. Abbreviations: CNS, central nervous system; EM, erythema multiforme; HSV, herpes simplex virus; VZV, varicella-zoster virus.

TABLE 45-3 WHITE LESIONS OF ORAL MUCOSA
Lichen planus. Usual location: buccal mucosa, tongue, gingiva, and lips; skin. Clinical features: striae, white plaques, red areas, ulcers in mouth; purplish papules on skin; may be asymptomatic, sore, or painful; lichenoid drug reactions may look similar. Course: protracted; responds to topical glucocorticoids.
White sponge nevus. Usual location: oral mucosa, vagina, anal mucosa. Clinical features: painless white thickening of epithelium; adolescence/early adulthood onset; familial.
Smoker's leukoplakia and smokeless tobacco lesions. Usual location: any area of oral mucosa, sometimes related to location of habit. Clinical features: white patch that may become firm, rough, or red-fissured and ulcerated; may become sore and painful but is usually painless. Course: may or may not resolve with cessation of habit; 2% of patients develop squamous cell carcinoma; early biopsy essential.
Erythroplakia with or without white patches. Usual location: floor of mouth commonly affected in men; tongue and buccal mucosa. Clinical features: velvety, reddish plaque; occasionally mixed with white patches or smooth red areas. Course: high risk of squamous cell cancer; early biopsy essential.
Candidiasis. Clinical features and course: pseudomembranous type ("thrush")—creamy white curdlike patches that reveal a raw, bleeding surface when scraped; found in sick infants, debilitated elderly patients receiving high-dose glucocorticoids or broad-spectrum antibiotics, and patients with AIDS; responds favorably to antifungal therapy and correction of predisposing causes where possible. Erythematous type—flat, red, sometimes sore areas in same groups of patients; course same as for pseudomembranous type. Candidal leukoplakia—nonremovable white thickening of epithelium due to Candida; responds to prolonged antifungal therapy. Angular cheilitis—sore fissures at corner of mouth; responds to topical antifungal therapy.
Hairy leukoplakia. Usual location: tongue, rarely elsewhere. Clinical features: white areas ranging from small and flat to extensive accentuation of vertical folds; found in HIV carriers (all risk groups for AIDS). Course: due to Epstein-Barr virus; responds to high-dose acyclovir but recurs; rarely causes discomfort unless secondarily infected with Candida.
Warts (human papillomavirus). Usual location: anywhere on skin and oral mucosa. Clinical features: single or multiple papillary lesions with thick, white, keratinized surfaces containing many pointed projections; cauliflower lesions covered with normal-colored mucosa or multiple pink or pale bumps (focal epithelial hyperplasia). Course: lesions grow rapidly and spread; squamous cell carcinoma must be ruled out with biopsy; excision or laser therapy; may regress in HIV-infected patients receiving antiretroviral therapy.

TABLE 45-4 ALTERATIONS OF THE TONGUE (Type of Change; Clinical Features)
Macroglossia: enlarged tongue that may be part of a syndrome found in developmental conditions such as Down syndrome, Simpson-Golabi-Behmel syndrome, or Beckwith-Wiedemann syndrome; may be due to tumor (hemangioma or lymphangioma), metabolic disease (e.g., primary amyloidosis), or endocrine disturbance (e.g., acromegaly or cretinism); may occur when all teeth are removed.
Fissured ("scrotal") tongue: dorsal surface and sides of tongue covered by painless fissures.
Median rhomboid glossitis: congenital abnormality with ovoid, denuded area in median posterior portion of tongue; may be associated with candidiasis and may respond to antifungal treatment.
"Geographic" tongue (benign migratory glossitis): asymptomatic inflammatory condition of tongue, with rapid loss and regrowth of filiform papillae leading to appearance of denuded red patches "wandering" across surface.
Hairy tongue: elongation of filiform papillae of medial dorsal surface area due to failure of keratin layer of papillae to desquamate normally; brownish-black coloration may be due to staining by tobacco, food, or chromogenic organisms.
"Strawberry" and "raspberry" tongue: appearance of tongue during scarlet fever due to hypertrophy of fungiform papillae as well as changes in filiform papillae.
"Bald" tongue: atrophy may be associated with xerostomia, pernicious anemia, iron-deficiency anemia, pellagra, or syphilis; may be accompanied by painful burning sensation; may be an expression of erythematous candidiasis and respond to antifungal treatment.

TABLE 45-5 ORAL LESIONS ASSOCIATED WITH HIV INFECTION
Papules, nodules: candidiasis (hyperplastic and pseudomembranous)a.
Ulcers: recurrent aphthous ulcersa; angular cheilitis; squamous cell carcinoma; acute necrotizing ulcerative gingivitisa; necrotizing ulcerative periodontitisa; necrotizing ulcerative stomatitis; non-Hodgkin's lymphomaa; viral infection (herpes simplex, herpes zoster, cytomegalovirus); fungal infection (histoplasmosis, cryptococcosis, candidiasis, geotrichosis, aspergillosis); bacterial infection (Escherichia coli, Enterobacter cloacae, Klebsiella pneumoniae, Pseudomonas aeruginosa).
Pigmentation: zidovudine pigmentation (skin, nails, and occasionally oral mucosa); Addison's disease.
Miscellaneous: linear gingival erythemaa.
aStrongly associated with HIV infection.

DISEASES OF THE SALIVARY GLANDS Saliva is essential to oral health. Its absence leads to dental caries, periodontal disease, and difficulties in wearing dental prostheses, masticating, and speaking. Its major components, water and mucin, serve as a cleansing solvent and lubricating fluid. In addition, saliva contains antimicrobial factors (e.g., lysozyme, lactoperoxidase, secretory IgA), epidermal growth factor, minerals, and buffering systems. The major salivary glands secrete intermittently in response to autonomic stimulation, which is high during a meal but low otherwise. Hundreds of minor glands in the lips and cheeks secrete mucus continuously throughout the day and night.
Consequently, oral function becomes impaired when salivary function is reduced. The sensation of a dry mouth (xerostomia) is perceived when salivary flow is reduced by 50%. The most common etiology is medication, especially drugs with anticholinergic properties but also alpha and beta blockers, calcium channel blockers, and diuretics. Other causes include Sjögren's syndrome, chronic parotitis, salivary duct obstruction, diabetes mellitus, HIV/AIDS, and radiation therapy that includes the salivary glands in the field (e.g., for Hodgkin's disease and for head and neck cancer). Management involves the elimination or limitation of drying medications, preventive dental care, and supplementation with oral liquid or salivary substitutes. Sugarless mints or chewing gum may stimulate salivary secretion if dysfunction is mild. When sufficient exocrine tissue remains, pilocarpine or cevimeline has been shown to increase secretions. Commercial saliva substitutes or gels relieve dryness. Fluoride supplementation is critical to prevent caries. Sialolithiasis presents most often as painful swelling but in some instances as only swelling or only pain. Conservative therapy consists of local heat, massage, and hydration. Promotion of salivary secretion with mints or lemon drops may flush out small stones. Antibiotic treatment is necessary when bacterial infection is suspected. In adults, acute bacterial parotitis is typically unilateral and most commonly affects postoperative, dehydrated, and debilitated patients. Staphylococcus aureus (including methicillin-resistant strains) and anaerobic bacteria are the most common pathogens. Chronic bacterial sialadenitis results from lowered salivary secretion and recurrent bacterial infection.
When suspected bacterial infection is not responsive to therapy, the differential diagnosis should be expanded to include benign and malignant neoplasms, lymphoproliferative disorders, Sjögren's syndrome, sarcoidosis, tuberculosis, lymphadenitis, actinomycosis, and granulomatosis with polyangiitis. Bilateral nontender parotid enlargement occurs with diabetes mellitus, cirrhosis, bulimia, HIV/AIDS, and drugs (e.g., iodide, propylthiouracil).

Pleomorphic adenoma comprises two-thirds of all salivary neoplasms. The parotid is the principal salivary gland affected, and the tumor presents as a firm, slow-growing mass. Although this tumor is benign, recurrence is common if resection is incomplete. Malignant tumors such as mucoepidermoid carcinoma, adenoid cystic carcinoma, and adenocarcinoma tend to grow relatively fast, depending upon grade. They may ulcerate and invade nerves, producing numbness and facial paralysis. Surgical resection is the primary treatment. Radiation therapy (particularly neutron-beam therapy) is used when surgery is not feasible and as postresection therapy for certain histologic types with a high risk of recurrence. Malignant salivary gland tumors have a 5-year survival rate of ∼68%.

Routine dental care (e.g., uncomplicated extraction, scaling and cleaning, tooth restoration, and root canal) is remarkably safe. The most common concerns regarding care of dental patients with medical disease are excessive bleeding for patients taking anticoagulants, infection of the heart valves and prosthetic devices from hematogenous seeding by the oral flora, and cardiovascular complications resulting from vasopressors used with local anesthetics during dental treatment. Experience confirms that the risk of any of these complications is very low.
Patients undergoing tooth extraction or alveolar and gingival surgery rarely experience uncontrolled bleeding when warfarin anticoagulation is maintained within the therapeutic range currently recommended for prevention of venous thrombosis, atrial fibrillation, or mechanical heart valve. Embolic complications and death, however, have been reported during subtherapeutic anticoagulation. Therapeutic anticoagulation should be confirmed before and continued through the procedure. Likewise, low-dose aspirin (e.g., 81–325 mg) can safely be continued. For patients taking aspirin and another antiplatelet medication (e.g., clopidogrel), the decision to continue the second antiplatelet medication should be based on individual consideration of the risks of thrombosis and bleeding.

Patients at risk for bacterial endocarditis (Chap. 155) should maintain optimal oral hygiene, including flossing, and have regular professional cleanings. Currently, guidelines recommend that prophylactic antibiotics be restricted to those patients at high risk for bacterial endocarditis who undergo dental and oral procedures involving significant manipulation of gingival or periapical tissue or penetration of the oral mucosa. If unexpected bleeding occurs, antibiotics given within 2 h after the procedure provide effective prophylaxis.

Hematogenous bacterial seeding from oral infection can undoubtedly produce late prosthetic-joint infection and therefore requires removal of the infected tissue (e.g., drainage, extraction, root canal) and appropriate antibiotic therapy. However, evidence that late prosthetic-joint infection follows routine dental procedures is lacking. For this reason, antibiotic prophylaxis is not recommended before dental surgery for patients with orthopedic pins, screws, and plates.
Antibiotic prophylaxis is recommended for patients within the first 2 years after joint replacement who have inflammatory arthropathies, immunosuppression, type 1 diabetes mellitus, previous prosthetic-joint infection, hemophilia, or malnourishment. Concern often arises regarding the use of vasoconstrictors to treat patients with hypertension and heart disease. Vasoconstrictors enhance the depth and duration of local anesthesia, thus reducing the anesthetic dose and potential toxicity. If intravascular injection is avoided, 2% lidocaine with 1:100,000 epinephrine (limited to a total of 0.036 mg of epinephrine) can be used safely in patients with controlled hypertension and stable coronary heart disease, arrhythmia, or congestive heart failure. Precautions should be taken with patients taking tricyclic antidepressants and nonselective beta blockers because these drugs may potentiate the effect of epinephrine. Elective dental treatments should be postponed for at least 1 month and preferably for 6 months after myocardial infarction, after which the risk of reinfarction is low provided the patient is medically stable (e.g., stable rhythm, stable angina, and no heart failure). Patients who have suffered a stroke should have elective dental care deferred for 6 months. In both situations, effective stress reduction requires good pain control, including the use of the minimal amount of vasoconstrictor necessary to provide good hemostasis and local anesthesia. Bisphosphonate therapy is associated with osteonecrosis of the jaw. However, the risk with oral bisphosphonate therapy is very low. Most patients affected have received high-dose aminobisphosphonate therapy for multiple myeloma or metastatic breast cancer and have undergone tooth extraction or dental surgery. Intraoral lesions, of which two-thirds are painful, appear as exposed yellow-white hard bone involving the mandible or maxilla. Screening tests for determining risk of osteonecrosis are unreliable. 
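The epinephrine ceiling cited above translates directly into cartridge counts. A minimal arithmetic sketch, assuming the common 1.8-mL dental anesthetic cartridge (a volume the text does not state):

```python
# Illustrative dose arithmetic only; the 1.8-mL cartridge volume is an
# assumption, not taken from the text.
def epinephrine_mg(dilution_denominator: float, volume_ml: float) -> float:
    """Epinephrine content of a solution labeled 1:<denominator> (1 g per N mL)."""
    mg_per_ml = 1000.0 / dilution_denominator  # e.g., 1:100,000 -> 0.01 mg/mL
    return mg_per_ml * volume_ml

CARTRIDGE_ML = 1.8   # assumed standard dental cartridge volume
LIMIT_MG = 0.036     # ceiling cited in the text for cardiac patients

per_cartridge = epinephrine_mg(100_000, CARTRIDGE_ML)  # 0.018 mg per cartridge
max_cartridges = LIMIT_MG / per_cartridge              # about 2 cartridges
print(per_cartridge, max_cartridges)
```

So the 0.036-mg limit corresponds to roughly two such cartridges of 2% lidocaine with 1:100,000 epinephrine, under the stated cartridge-size assumption.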
Patients slated for aminobisphosphonate therapy should receive preventive dental care that reduces the risk of infection and the need for future dentoalveolar surgery.

Halitosis typically emanates from the oral cavity or nasal passages. Volatile sulfur compounds resulting from bacterial decay of food and cellular debris account for the malodor. Periodontal disease, caries, acute forms of gingivitis, poorly fitting dentures, oral abscess, and tongue coating are common causes. Treatment includes correcting poor hygiene, treating infection, and tongue brushing. Hyposalivation can produce and exacerbate halitosis. Pockets of decay in the tonsillar crypts, esophageal diverticulum, esophageal stasis (e.g., achalasia, stricture), sinusitis, and lung abscess account for some instances. A few systemic diseases produce distinctive odors: renal failure (ammoniacal), hepatic failure (fishy), and ketoacidosis (fruity). Helicobacter pylori gastritis can also produce ammoniacal breath. If a patient presents because of concern about halitosis but no odor is detectable, then pseudohalitosis or halitophobia must be considered.

Part 2: Cardinal Manifestations and Presentation of Diseases

While tooth loss and dental disease are not normal consequences of aging, a complex array of structural and functional changes that occur with age can affect oral health. Subtle changes in tooth structure (e.g., diminished pulp space and volume, sclerosis of dentinal tubules, and altered proportions of nerve and vascular pulp content) result in the elimination or diminution of pain sensitivity and a reduction in the reparative capacity of the teeth. In addition, age-associated fatty replacement of salivary acini may reduce physiologic reserve, thus increasing the risk of hyposalivation. In healthy older adults, there is minimal, if any, reduction in salivary flow. Poor oral hygiene often results when general health fails or when patients lose manual dexterity and upper-extremity flexibility.
This situation is particularly common among frail older adults and nursing home residents and must be emphasized because regular oral cleaning and dental care reduce the incidence of pneumonia and oral disease as well as the mortality risk in this population. Other risk factors for dental decay include limited lifetime fluoride exposure. Without assiduous care, decay can become quite advanced yet remain asymptomatic. Consequently, much of a tooth—or the entire tooth—can be destroyed before the patient is aware of the process. Periodontal disease, a leading cause of tooth loss, is indicated by loss of alveolar bone height. More than 90% of the U.S. population has some degree of periodontal disease by age 50. Healthy adults who have not had significant alveolar bone loss by the sixth decade of life do not typically experience significant worsening with advancing age.

Complete edentulousness with advanced age, though less common than in previous decades, still affects <50% of the U.S. population ≥85 years of age. Speech, mastication, and facial contours are dramatically affected. Edentulousness may also exacerbate obstructive sleep apnea, particularly in asymptomatic individuals who wear dentures. Dentures can improve verbal articulation and restore diminished facial contours. Mastication can also be restored; however, patients expecting dentures to facilitate oral intake are often disappointed. Accommodation to dentures requires a period of adjustment. Pain can result from friction or traumatic lesions produced by loose dentures. Poor fit and poor oral hygiene may permit the development of candidiasis. This fungal infection may be either asymptomatic or painful and is suggested by erythematous smooth or granular tissue conforming to an area covered by the appliance. Individuals with dentures and no natural teeth need regular (annual) professional oral examinations.

Chapter 46e Atlas of Oral Manifestations of Disease
Samuel C. Durso, Janet A.
Yellowitz

The health status of the oral cavity is linked to cardiovascular disease, diabetes, and other systemic illnesses. Thus, examining the oral cavity for signs of disease is a key part of the physical exam. This chapter presents numerous outstanding clinical photographs illustrating many of the conditions discussed in Chap. 45, Oral Manifestations of Disease. Conditions affecting the teeth, periodontal tissues, and oral mucosa are all represented.

Figure 46e-1 Gingival overgrowth secondary to calcium channel blocker use.
Figure 46e-2 Oral lichen planus.
Figure 46e-3 Erosive lichen planus.
Figure 46e-4 Stevens-Johnson syndrome—reaction to nevirapine.
Figure 46e-5 Erythematous candidiasis under a denture (i.e., the patient should be treated for this fungal infection).
Figure 46e-6 Severe periodontitis.
Figure 46e-7 Angular cheilitis.
Figure 46e-8 Sublingual leukoplakia.
Figure 46e-9 A. Epulis (gingival hypertrophy) under denture. B. Epulis fissuratum.
Figure 46e-10 Traumatic lesion inside of cheek.
Figure 46e-11 Oral leukoplakia, subtype homogenous leukoplakia.
Figure 46e-12 Oral carcinoma.
Figure 46e-13 Healthy mouth.
Figure 46e-14 Geographic tongue.
Figure 46e-15 Moderate gingivitis.
Figure 46e-16 Gingival recession.
Figure 46e-17 Heavy calculus and gingival inflammation.
Figure 46e-18 Severe gingival inflammation and heavy calculus.
Figure 46e-19 Root cavity in presence of severe periodontal disease.
Figure 46e-20 Ulcer on lateral border of tongue—potential carcinoma.
Figure 46e-21 Osteonecrosis.
Figure 46e-22 Severe periodontal disease, missing tooth, very mobile teeth.
Figure 46e-23 Salivary stone.
Figure 46e-24 A. Calculus. B. Teeth cleaned.
Figure 46e-25 Traumatic ulcer.
Figure 46e-26 Fissured tongue.
Figure 46e-27 White coated tongue—likely candidiasis.

Dr. Jane Atkinson was a co-author of this chapter in the 17th edition. Some of the materials have been carried over into this edition.

Section 5: Alterations in Circulatory and Respiratory Functions

Chapter 47e Dyspnea
Richard M. Schwartzstein

DYSPNEA
The American Thoracic Society defines dyspnea as a "subjective experience of breathing discomfort that consists of qualitatively distinct sensations that vary in intensity. The experience derives from interactions among multiple physiological, psychological, social, and environmental factors and may induce secondary physiological and behavioral responses." Dyspnea, a symptom, can be perceived only by the person experiencing it and must be distinguished from the signs of increased work of breathing.

MECHANISMS OF DYSPNEA
Respiratory sensations are the consequence of interactions between the efferent, or outgoing, motor output from the brain to the ventilatory muscles (feed-forward) and the afferent, or incoming, sensory input from receptors throughout the body (feedback), as well as the integrative processing of this information that we infer must be occurring in the brain (Fig. 47e-1). In contrast to painful sensations, which can often be attributed to the stimulation of a single nerve ending, dyspnea sensations are more commonly viewed as holistic, more akin to hunger or thirst. A given disease state may lead to dyspnea by one or more mechanisms, some of which may be operative under some circumstances (e.g., exercise) but not others (e.g., a change in position).

Figure 47e-1 Algorithm for the inputs in dyspnea production. Hypothetical model for integration of sensory inputs in the production of dyspnea. Afferent information from the receptors throughout the respiratory system projects directly to the sensory cortex to contribute to primary qualitative sensory experiences and to provide feedback on the action of the ventilatory pump.
Afferents also project to the areas of the brain responsible for control of ventilation. The motor cortex, responding to input from the control centers, sends neural messages to the ventilatory muscles and a corollary discharge to the sensory cortex (feed-forward with respect to the instructions sent to the muscles). If the feed-forward and feedback messages do not match, an error signal is generated and the intensity of dyspnea increases. An increasing body of data supports the contribution of affective inputs to the ultimate perception of unpleasant respiratory sensations. (Adapted from MA Gillette, RM Schwartzstein, in SH Ahmedzai, MF Muer [eds]. Supportive Care in Respiratory Disease. Oxford, UK, Oxford University Press, 2005.)

Motor Efferents
Disorders of the ventilatory pump—most commonly, increased airway resistance or stiffness (decreased compliance) of the respiratory system—are associated with increased work of breathing or the sense of an increased effort to breathe. When the muscles are weak or fatigued, greater effort is required, even though the mechanics of the system are normal. The increased neural output from the motor cortex is sensed via a corollary discharge, a neural signal that is sent to the sensory cortex at the same time that motor output is directed to the ventilatory muscles.

Sensory Afferents
Chemoreceptors in the carotid bodies and medulla are activated by hypoxemia, acute hypercapnia, and acidemia. Stimulation of these receptors, and of others that lead to an increase in ventilation, produces a sensation of "air hunger." Mechanoreceptors in the lungs, when stimulated by bronchospasm, lead to a sensation of chest tightness. J-receptors, which are sensitive to interstitial edema, and pulmonary vascular receptors, which are activated by acute changes in pulmonary artery pressure, appear to contribute to air hunger.
Hyperinflation is associated with the sensation of increased work of breathing, an inability to get a deep breath, or an unsatisfying breath. Metaboreceptors, which are located in skeletal muscle, are believed to be activated by changes in the local biochemical milieu of the tissue active during exercise and, when stimulated, contribute to breathing discomfort.

Integration: Efferent-Reafferent Mismatch
A discrepancy or mismatch between the feed-forward message to the ventilatory muscles and the feedback from receptors that monitor the response of the ventilatory pump increases the intensity of dyspnea. This mismatch is particularly important when there is a mechanical derangement of the ventilatory pump, as in asthma or chronic obstructive pulmonary disease (COPD).

Contribution of Emotional or Affective Factors to Dyspnea
Acute anxiety or fear may increase the severity of dyspnea either by altering the interpretation of sensory data or by leading to patterns of breathing that heighten physiologic abnormalities in the respiratory system. In patients with expiratory flow limitation, for example, the increased respiratory rate that accompanies acute anxiety leads to hyperinflation, increased work and effort of breathing, and the sense of an unsatisfying breath.

ASSESSING DYSPNEA
Quality of Sensation Like pain assessment, dyspnea assessment begins with a determination of the quality of the patient's discomfort (Table 47e-1).
Dyspnea questionnaires or lists of phrases commonly used by patients assist those who have difficulty describing their breathing sensations.

Table 47e-1 Association of Qualitative Descriptors, Clinical Characteristics, and Pathophysiologic Mechanisms of Shortness of Breath

Descriptor: Chest tightness or constriction. Pathophysiology: bronchoconstriction, interstitial edema. Examples: asthma, CHF.
Descriptor: Increased work or effort of breathing. Pathophysiology: airway obstruction, neuromuscular disease. Examples: COPD, asthma, neuromuscular disease, chest wall disease.
Descriptor: "Air hunger," need to breathe, urge to breathe. Pathophysiology: increased drive to breathe. Examples: CHF, PE, COPD, asthma, pulmonary fibrosis.
Descriptor: Inability to get a deep breath, unsatisfying breath. Examples: moderate to severe asthma and COPD, pulmonary fibrosis, chest wall disease.
Descriptor: Heavy breathing, rapid breathing, breathing more. Examples: sedentary status in a healthy individual or a patient with cardiopulmonary disease.

Abbreviations: CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; PE, pulmonary embolism.

Sensory Intensity A modified Borg scale or visual analogue scale can be utilized to measure dyspnea at rest, immediately following exercise, or on recall of a reproducible physical task, such as climbing the stairs at home. An alternative approach is to gain a sense of the patient's disability by inquiring about what activities are possible. These methods indirectly assess dyspnea and may be affected by nonrespiratory factors, such as leg arthritis or weakness. The Baseline Dyspnea Index and the Chronic Respiratory Disease Questionnaire are commonly used tools for this purpose.

Affective Dimension For a sensation to be reported as a symptom, it must be perceived as unpleasant and interpreted as abnormal. Laboratory studies have demonstrated that air hunger evokes a stronger affective response than does increased effort or work of breathing. Some therapies for dyspnea, such as pulmonary rehabilitation, may reduce breathing discomfort, in part, by altering this dimension.
Dyspnea most often results from deviations from normal function in the cardiovascular and respiratory systems. These deviations produce breathlessness as a consequence of increased drive to breathe; increased effort or work of breathing; and/or stimulation of receptors in the heart, lungs, or vascular system. Most diseases of the respiratory system are associated with alterations in the mechanical properties of the lungs and/or chest wall, and some stimulate pulmonary receptors. In contrast, disorders of the cardiovascular system more commonly lead to dyspnea by causing gas-exchange abnormalities or stimulating pulmonary and/or vascular receptors (Table 47e-2).

Respiratory System Dyspnea • DISEASES OF THE AIRWAYS Asthma and COPD, the most common obstructive lung diseases, are characterized by expiratory airflow obstruction, which typically leads to dynamic hyperinflation of the lungs and chest wall. Patients with moderate to severe disease have both increased resistive and elastic loads (a term that relates to the stiffness of the system) on the ventilatory muscles and experience increased work of breathing. Patients with acute bronchoconstriction also report a sense of tightness, which can exist even when lung function is still within the normal range. These patients are commonly tachypneic; this condition leads to hyperinflation and reduced respiratory system compliance and also limits tidal volume. Both the chest tightness and the tachypnea are probably due to stimulation of pulmonary receptors. Both asthma and COPD may lead to hypoxemia and hypercapnia from ventilation-perfusion (V/Q) mismatch (and diffusion limitation during exercise with emphysema); hypoxemia is much more common than hypercapnia as a consequence of the different ways in which oxygen and carbon dioxide bind to hemoglobin.
DISEASES OF THE CHEST WALL Conditions that stiffen the chest wall, such as kyphoscoliosis, or that weaken ventilatory muscles, such as myasthenia gravis or the Guillain-Barré syndrome, are also associated with an increased effort to breathe. Large pleural effusions may contribute to dyspnea, both by increasing the work of breathing and by stimulating pulmonary receptors if there is associated atelectasis.

DISEASES OF THE LUNG PARENCHYMA Interstitial lung diseases, which may arise from infections, occupational exposures, or autoimmune disorders, are associated with increased stiffness (decreased compliance) of the lungs and increased work of breathing. In addition, V/Q mismatch and the destruction and/or thickening of the alveolar-capillary interface may lead to hypoxemia and an increased drive to breathe. Stimulation of pulmonary receptors may further enhance the hyperventilation characteristic of mild to moderate interstitial disease.

Cardiovascular System Dyspnea • DISEASES OF THE LEFT HEART Diseases of the myocardium resulting from coronary artery disease and nonischemic cardiomyopathies cause a greater left-ventricular end-diastolic volume and an elevation of the left-ventricular end-diastolic as well as pulmonary capillary pressures. These elevated pressures lead to interstitial edema and stimulation of pulmonary receptors, thereby causing dyspnea; hypoxemia due to V/Q mismatch may also contribute to breathlessness. Diastolic dysfunction, characterized by a very stiff left ventricle, may lead to severe dyspnea with relatively mild degrees of physical activity, particularly if it is associated with mitral regurgitation.

DISEASES OF THE PULMONARY VASCULATURE Pulmonary thromboembolic disease and primary diseases of the pulmonary circulation (primary pulmonary hypertension, pulmonary vasculitis) cause dyspnea via increased pulmonary-artery pressure and stimulation of pulmonary receptors.
Hyperventilation is common, and hypoxemia may be present. However, in most cases, use of supplemental oxygen has only a minimal impact on the severity of dyspnea and hyperventilation.

DISEASES OF THE PERICARDIUM Constrictive pericarditis and cardiac tamponade are both associated with increased intracardiac and pulmonary vascular pressures, which are the likely cause of dyspnea in these conditions. To the extent that cardiac output is limited (at rest or with exercise), metaboreceptors may be stimulated; if cardiac output is compromised to the degree that lactic acidosis develops, chemoreceptors will also be activated.

Dyspnea with Normal Respiratory and Cardiovascular Systems Mild to moderate anemia is associated with breathing discomfort during exercise. This symptom is thought to be related to stimulation of metaboreceptors; oxygen saturation is normal in patients with anemia. The breathlessness associated with obesity is probably due to multiple mechanisms, including high cardiac output and impaired ventilatory pump function (decreased compliance of the chest wall). Cardiovascular deconditioning (poor fitness) is characterized by the early development of anaerobic metabolism and the stimulation of chemoreceptors and metaboreceptors. Dyspnea that is medically unexplained has been associated with increased sensitivity to the unpleasantness of acute hypercapnia.

Footnotes to Table 47e-2: aHypoxemia and hypercapnia are not always present in these conditions. When hypoxemia is present, dyspnea usually persists, albeit at a reduced intensity, with correction of hypoxemia by the administration of supplemental oxygen. Abbreviations: COPD, chronic obstructive pulmonary disease; CPE, cardiogenic pulmonary edema; ILD, interstitial lung disease; NCPE, noncardiogenic pulmonary edema; PVD, pulmonary vascular disease.

APPROACH TO THE PATIENT: Dyspnea (see Fig.
47e-2) The patient should be asked to describe in his/her own words what the discomfort feels like as well as the effect of position, infections, and environmental stimuli on the dyspnea. Orthopnea is a common indicator of congestive heart failure (CHF), mechanical impairment of the diaphragm associated with obesity, or asthma triggered by esophageal reflux. Nocturnal dyspnea suggests CHF or asthma. Acute, intermittent episodes of dyspnea are more likely to reflect episodes of myocardial ischemia, bronchospasm, or pulmonary embolism, while chronic persistent dyspnea is typical of COPD, interstitial lung disease, and chronic thromboembolic disease. Information on risk factors for occupational lung disease and for coronary artery disease should be elicited. Left atrial myxoma or hepatopulmonary syndrome should be considered when the patient complains of platypnea—i.e., dyspnea in the upright position with relief in the supine position. The physical examination should begin during the interview of the patient. Inability of the patient to speak in full sentences before stopping to get a deep breath suggests a condition that leads to stimulation of the controller or impairment of the ventilatory pump with reduced vital capacity. Evidence of increased work of breathing (supraclavicular retractions; use of accessory muscles of ventilation; and the tripod position, characterized by sitting with the hands braced on the knees) is indicative of increased airway resistance or stiffness of the lungs and the chest wall. When measuring the vital signs, the physician should accurately assess the respiratory rate and measure the pulsus paradoxus (Chap. 288); if the systolic pressure decreases by >10 mmHg, the presence of COPD, acute asthma, or pericardial disease should be considered. During the general examination, signs of anemia (pale conjunctivae), cyanosis, and cirrhosis (spider angiomata, gynecomastia) should be sought. 
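The pulsus paradoxus criterion quoted above (a >10-mmHg inspiratory fall in systolic pressure) reduces to simple arithmetic. A minimal sketch; the function name and default threshold are mine, not the text's:

```python
def pulsus_paradoxus(sbp_expiration_mmhg: float,
                     sbp_inspiration_mmhg: float,
                     threshold_mmhg: float = 10.0) -> bool:
    """True when the inspiratory fall in systolic blood pressure exceeds the
    threshold, which per the text should prompt consideration of COPD,
    acute asthma, or pericardial disease."""
    return (sbp_expiration_mmhg - sbp_inspiration_mmhg) > threshold_mmhg

print(pulsus_paradoxus(128, 112))  # 16-mmHg fall -> True
print(pulsus_paradoxus(120, 114))  # 6-mmHg fall -> False
```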
Examination of the chest should focus on symmetry of movement; percussion (dullness is indicative of pleural effusion; hyperresonance is a sign of emphysema); and auscultation (wheezes, rhonchi, prolonged expiratory phase, and diminished breath sounds are clues to disorders of the airways; rales suggest interstitial edema or fibrosis). The cardiac examination should focus on signs of elevated right heart pressures (jugular venous distention, edema, accentuated pulmonic component to the second heart sound); left ventricular dysfunction (S3 and S4 gallops); and valvular disease (murmurs). When examining the abdomen with the patient in the supine position, the physician should note whether there is paradoxical movement of the abdomen: inward motion during inspiration is a sign of diaphragmatic weakness, and rounding of the abdomen during exhalation is suggestive of pulmonary edema. Clubbing of the digits may be an indication of interstitial pulmonary fibrosis, and joint swelling or deformation as well as changes consistent with Raynaud's disease may be indicative of a collagen-vascular process that can be associated with pulmonary disease.

Figure 47e-2 Algorithm for the evaluation of the patient with dyspnea.
History: quality of sensation, timing, positional disposition; persistent vs intermittent.
Physical exam. General appearance: able to speak in full sentences? accessory muscles? color? Vital signs: tachypnea? pulsus paradoxus? oximetry evidence of desaturation? Chest: wheezes, rales, rhonchi, diminished breath sounds? hyperinflated? Cardiac exam: JVP elevated? precordial impulse? gallop? murmur? Extremities: edema? cyanosis?
At this point, the diagnosis may be evident; if not, proceed to further evaluation.
Chest radiograph: assess cardiac size and evidence of CHF; assess for hyperinflation; assess for pneumonia, interstitial lung disease, and pleural effusions.
If the diagnosis is still uncertain, obtain a cardiopulmonary exercise test.
JVP, jugular venous pulse; CHF, congestive heart failure; ECG, electrocardiogram; CT, computed tomography. (Adapted from RM Schwartzstein, D Feller-Kopman, in E Braunwald, L Goldman [eds]. Primary Cardiology, 2nd ed. Philadelphia, WB Saunders, 2003.) Patients with exertional dyspnea should be asked to walk under observation in order to reproduce the symptoms. The patient should be examined during and at the end of exercise for new findings that were not present at rest and for changes in oxygen saturation. After the history elicitation and the physical examination, a chest radiograph should be obtained. The lung volumes should be assessed: hyperinflation indicates obstructive lung disease, whereas low lung volumes suggest interstitial edema or fibrosis, diaphragmatic dysfunction, or impaired chest wall motion. The pulmonary parenchyma should be examined for evidence of interstitial disease and emphysema. Prominent pulmonary vasculature in the upper zones indicates pulmonary venous hypertension, while enlarged central pulmonary arteries suggest pulmonary arterial hypertension. An enlarged cardiac silhouette suggests dilated cardiomyopathy or valvular disease. Bilateral pleural effusions are typical of CHF and some forms of collagen-vascular disease. Unilateral effusions raise the specter of carcinoma and pulmonary embolism but may also occur in heart failure. CT of the chest is generally reserved for further evaluation of the lung parenchyma (interstitial lung disease) and possible pulmonary embolism. Laboratory studies should include electrocardiography to seek evidence of ventricular hypertrophy and prior myocardial infarction. Echocardiography is indicated when systolic dysfunction, pulmonary hypertension, or valvular heart disease is suspected. 
Bronchoprovocation testing is useful in patients with intermittent symptoms suggestive of asthma but normal physical examination and lung function; up to one-third of patients with the clinical diagnosis of asthma do not have reactive airways disease when formally tested. Measurement of brain natriuretic peptide levels in serum is increasingly used to assess for CHF in patients presenting with acute dyspnea but may be elevated in the presence of right ventricular strain as well. If a patient has evidence of both pulmonary and cardiac disease, a cardiopulmonary exercise test should be carried out to determine which system is responsible for the exercise limitation. If, at peak exercise, the patient achieves predicted maximal ventilation, demonstrates an increase in dead space or hypoxemia, or develops bronchospasm, the respiratory system is probably the cause of the problem. Alternatively, if the heart rate is >85% of the predicted maximum, if the anaerobic threshold occurs early, if the blood pressure becomes excessively high or decreases during exercise, if the O2 pulse (O2 consumption/heart rate, an indicator of stroke volume) falls, or if there are ischemic changes on the electrocardiogram, an abnormality of the cardiovascular system is likely the explanation for the breathing discomfort.

The first goal is to correct the underlying problem responsible for the symptom. If this is not possible, an effort is made to lessen the intensity of the symptom and its effect on the patient's quality of life. Supplemental O2 should be administered if the resting O2 saturation is ≤89% or if the patient's saturation drops to these levels with activity. For patients with COPD, pulmonary rehabilitation programs have demonstrated positive effects on dyspnea, exercise capacity, and rates of hospitalization. Studies of anxiolytics and antidepressants have not documented consistent benefit.
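The cardiopulmonary exercise-test criteria described above amount to two rule sets, one pointing to respiratory limitation and one to cardiovascular limitation. The sketch below encodes them exactly as stated; all structure (the field names, the dataclass, the boolean aggregation) is mine, and real CPET interpretation is integrative rather than a checklist:

```python
from dataclasses import dataclass

@dataclass
class CPETFindings:
    # Findings at peak exercise; field names are illustrative, not standard.
    reached_predicted_max_ventilation: bool
    increased_dead_space_or_hypoxemia: bool
    bronchospasm: bool
    heart_rate_pct_predicted_max: float  # e.g., 92.0 means 92% of predicted max
    early_anaerobic_threshold: bool
    abnormal_bp_response: bool           # excessively high, or falls with exercise
    falling_o2_pulse: bool               # O2 consumption / heart rate declines
    ischemic_ecg_changes: bool

def respiratory_limitation(f: CPETFindings) -> bool:
    """Respiratory system probably the cause, per the criteria in the text."""
    return (f.reached_predicted_max_ventilation
            or f.increased_dead_space_or_hypoxemia
            or f.bronchospasm)

def cardiovascular_limitation(f: CPETFindings) -> bool:
    """Cardiovascular abnormality likely, per the criteria in the text."""
    return (f.heart_rate_pct_predicted_max > 85.0
            or f.early_anaerobic_threshold
            or f.abnormal_bp_response
            or f.falling_o2_pulse
            or f.ischemic_ecg_changes)

example = CPETFindings(False, False, False, 92.0, True, False, False, False)
print(respiratory_limitation(example), cardiovascular_limitation(example))
```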
Experimental interventions—e.g., cold air on the face, chest wall vibration, and inhaled furosemide—aimed at modulating the afferent information from receptors throughout the respiratory system are being studied. Morphine has been shown to reduce dyspnea out of proportion to the change in ventilation in laboratory models.

The extent to which fluid accumulates in the interstitium of the lung depends on the balance of hydrostatic and oncotic forces within the pulmonary capillaries and in the surrounding tissue. Hydrostatic pressure favors movement of fluid from the capillary into the interstitium. The oncotic pressure, which is determined by the protein concentration in the blood, favors movement of fluid into the vessel. Levels of albumin, the primary protein in the plasma, may be low in conditions such as cirrhosis and nephrotic syndrome. While hypoalbuminemia favors movement of fluid into the tissue for any given hydrostatic pressure in the capillary, it is usually not sufficient by itself to cause interstitial edema. In a healthy individual, the tight junctions of the capillary endothelium are impermeable to proteins, and the lymphatics in the tissue carry away the small amounts of protein that may leak out; together, these factors result in an oncotic force that maintains fluid in the capillary. Disruption of the endothelial barrier, however, allows protein to escape the capillary bed and enhances the movement of fluid into the tissue of the lung.

(See also Chap. 326) Cardiac abnormalities that lead to an increase in pulmonary venous pressure shift the balance of forces between the capillary and the interstitium. Hydrostatic pressure is increased and fluid exits the capillary at an increased rate, resulting in interstitial and, in more severe cases, alveolar edema. The development of pleural effusions may further compromise respiratory system function and contribute to breathing discomfort. Early signs of pulmonary edema include exertional dyspnea and orthopnea.
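The balance of hydrostatic and oncotic forces described in this paragraph is conventionally summarized by the Starling relationship. The equation below is standard capillary-fluid-exchange physiology rather than a formula given in the text:

```latex
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```

Here Jv is the net fluid flux out of the capillary, Pc and Pi are the capillary and interstitial hydrostatic pressures, πc and πi the corresponding oncotic pressures, Kf a filtration coefficient, and σ the reflection coefficient for protein. With intact endothelial tight junctions, σ approaches 1 and the oncotic gradient holds fluid in the vessel; disruption of the endothelial barrier lowers σ, consistent with the enhanced movement of fluid into the lung tissue described above.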
Chest radiographs show peribronchial thickening, prominent vascular markings in the upper lung zones, and Kerley B lines. As the pulmonary edema worsens, alveoli fill with fluid; the chest radiograph shows patchy alveolar filling, typically in a perihilar distribution, which then progresses to diffuse alveolar infiltrates. Increasing airway edema is associated with rhonchi and wheezes.

In noncardiogenic pulmonary edema, lung water increases due to damage of the pulmonary capillary lining with consequent leakage of proteins and other macromolecules into the tissue; fluid follows the protein as oncotic forces are shifted from the vessel to the surrounding lung tissue. This process is associated with dysfunction of the surfactant lining the alveoli, increased surface forces, and a propensity for the alveoli to collapse at low lung volumes. Physiologically, noncardiogenic pulmonary edema is characterized by intrapulmonary shunt with hypoxemia and decreased pulmonary compliance leading to lower functional residual capacity. On pathologic examination, hyaline membranes are evident in the alveoli, and inflammation leading to pulmonary fibrosis may be seen. Clinically, the picture ranges from mild dyspnea to respiratory failure. Auscultation of the lungs may be relatively normal despite chest radiographs that show diffuse alveolar infiltrates. CT scans demonstrate that the distribution of alveolar edema is more heterogeneous than was once thought. Although normal intracardiac pressures are considered by many to be part of the definition of noncardiogenic pulmonary edema, the pathology of the process, as described above, is distinctly different, and a combination of cardiogenic and noncardiogenic pulmonary edema is observed in some patients. It is useful to categorize the causes of noncardiogenic pulmonary edema in terms of whether the injury to the lung is likely to result from direct, indirect, or pulmonary vascular causes (Table 47e-3).
Direct injuries are mediated via the airways (e.g., aspiration) or as the consequence of blunt chest trauma. Indirect injury is the consequence of mediators that reach the lung via the bloodstream. The third category includes conditions that may result from acute changes in pulmonary vascular pressures, possibly due to sudden autonomic discharge (in the case of neurogenic and high-altitude pulmonary edema) or sudden swings of pleural pressure as well as transient damage to the pulmonary capillaries (in the case of reexpansion pulmonary edema).

TABLE 47e-3 (excerpt) Causes of Noncardiogenic Pulmonary Edema
Direct Injury to Lung: chest trauma, pulmonary contusion; aspiration; smoke inhalation; pneumonia; oxygen toxicity; pulmonary embolism, reperfusion
Hematogenous Injury to Lung: sepsis; pancreatitis; nonthoracic trauma; leukoagglutination reactions; multiple transfusions; intravenous drug use (e.g., heroin); cardiopulmonary bypass

The history is essential for assessing the likelihood of underlying cardiac disease as well as for identification of one of the conditions associated with noncardiogenic pulmonary edema. The physical examination in cardiogenic pulmonary edema is notable for evidence of increased intracardiac pressures (S3 gallop, elevated jugular venous pulse, peripheral edema) and rales and/or wheezes on auscultation of the chest. In contrast, the physical examination in noncardiogenic pulmonary edema is dominated by the findings of the precipitating condition; pulmonary findings may be relatively normal in the early stages. The chest radiograph in cardiogenic pulmonary edema typically shows an enlarged cardiac silhouette, vascular redistribution, interstitial thickening, and perihilar alveolar infiltrates; pleural effusions are common. In noncardiogenic pulmonary edema, heart size is normal, alveolar infiltrates are distributed more uniformly throughout the lungs, and pleural effusions are uncommon.
Finally, the hypoxemia of cardiogenic pulmonary edema is due largely to V/Q mismatch and responds to the administration of supplemental oxygen. In contrast, hypoxemia in noncardiogenic pulmonary edema is due primarily to intrapulmonary shunting and typically persists despite high concentrations of inhaled oxygen.

Chapter 48 Cough and Hemoptysis
Patricia A. Kritek, Christopher H. Fanta

COUGH
Cough performs an essential protective function for human airways and lungs. Without an effective cough reflex, we are at risk for retained airway secretions and aspirated material predisposing to infection, atelectasis, and respiratory compromise. At the other extreme, excessive coughing can be exhausting; can be complicated by emesis, syncope, muscular pain, or rib fractures; and can aggravate abdominal or inguinal hernias and urinary incontinence. Cough is often a clue to the presence of respiratory disease. In many instances, cough is an expected and accepted manifestation of disease, as in acute respiratory tract infection. However, persistent cough in the absence of other respiratory symptoms commonly causes patients to seek medical attention.

Spontaneous cough is triggered by stimulation of sensory nerve endings that are thought to be primarily rapidly adapting receptors and C fibers. Both chemical (e.g., capsaicin) and mechanical (e.g., particulates in air pollution) stimuli may initiate the cough reflex. A cationic ion channel—the type 1 vanilloid receptor—found on rapidly adapting receptors and C fibers is the receptor for capsaicin, and its expression is increased in patients with chronic cough. Afferent nerve endings richly innervate the pharynx, larynx, and airways to the level of the terminal bronchioles and extend into the lung parenchyma. They may also be located in the external auditory meatus (the auricular branch of the vagus nerve, or the Arnold nerve) and in the esophagus.
Sensory signals travel via the vagus and superior laryngeal nerves to a region of the brainstem in the nucleus tractus solitarius vaguely identified as the “cough center.” The cough reflex involves a highly orchestrated series of involuntary muscular actions, with the potential for input from cortical pathways as well. The vocal cords adduct, leading to transient upper-airway occlusion. Expiratory muscles contract, generating positive intrathoracic pressures as high as 300 mmHg. With sudden release of the laryngeal contraction, rapid expiratory flows are generated, exceeding the normal “envelope” of maximal expiratory flow seen on the flow-volume curve (Fig. 48-1).

FIGURE 48-1 Flow-volume curve shows spikes of high expiratory flow achieved with cough. FEV1, forced expiratory volume in 1 s.

Bronchial smooth-muscle contraction together with dynamic compression of airways narrows airway lumens and maximizes the velocity of exhalation. The kinetic energy available to dislodge mucus from the inside of airway walls is directly proportional to the square of the velocity of expiratory airflow. A deep breath preceding a cough optimizes the function of the expiratory muscles; a series of repetitive coughs at successively lower lung volumes sweeps the point of maximal expiratory velocity progressively further into the lung periphery.

Weak or ineffective cough compromises the ability to clear lower respiratory tract infections, predisposing to more serious infections and their sequelae. Weakness, paralysis, or pain of the expiratory (abdominal and intercostal) muscles is foremost on the list of causes of impaired cough (Table 48-1). Cough strength is generally assessed qualitatively; peak expiratory flow or maximal expiratory pressure at the mouth can be used as a surrogate marker for cough strength.
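The velocity-squared dependence stated above can be made concrete with a minimal numerical sketch. The values and function are purely illustrative; only the proportionality to the square of expiratory airflow velocity comes from the text.

```python
# Numerical illustration of the statement above: the kinetic energy available
# to dislodge mucus from the airway wall scales with the square of expiratory
# airflow velocity (per unit mass, E = v**2 / 2). Velocities are arbitrary.

def kinetic_energy_ratio(v, v_ref):
    """Ratio of kinetic energies at two expiratory airflow velocities."""
    return (v / v_ref) ** 2

# Doubling expiratory velocity quadruples the energy available to shear
# mucus off the airway wall:
print(kinetic_energy_ratio(10.0, 5.0))  # 4.0
```

This is why the dynamic airway compression described above, which maximizes exhalation velocity, matters disproportionately for mucus clearance.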
A variety of assistive devices and techniques have been developed to improve cough strength, running the gamut from simple (splinting of the abdominal muscles with a tightly held pillow to reduce postoperative pain while coughing) to complex (a mechanical cough-assist device supplied via face mask or tracheal tube that applies a cycle of positive pressure followed rapidly by negative pressure). Cough may fail to clear secretions despite a preserved ability to generate normal expiratory velocities; such failure may be due to either abnormal airway secretions (e.g., bronchiectasis due to cystic fibrosis) or structural abnormalities of the airways (e.g., tracheomalacia with expiratory collapse during cough).

TABLE 48-1 Causes of Impaired Cough: Central respiratory depression (e.g., anesthesia, sedation, or coma); …

The cough of chronic bronchitis in long-term cigarette smokers rarely leads the patient to seek medical advice. It lasts for only seconds to a few minutes, is productive of benign-appearing mucoid sputum, and generally does not cause discomfort. Cough may occur in the context of other respiratory symptoms that together point to a diagnosis; for example, cough accompanied by wheezing, shortness of breath, and chest tightness after exposure to a cat or other sources of allergens suggests asthma. At times, however, cough is the dominant or sole symptom of disease, and it may be of sufficient duration and severity that relief is sought. The duration of cough is a clue to its etiology. Acute cough (<3 weeks) is most commonly due to a respiratory tract infection, aspiration, or inhalation of noxious chemicals or smoke. Subacute cough (3–8 weeks in duration) is a common residuum of tracheobronchitis, as in pertussis or “postviral tussive syndrome.” Chronic cough (>8 weeks) may be caused by a wide variety of cardiopulmonary diseases, including those of inflammatory, infectious, neoplastic, and cardiovascular etiologies.
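The duration cutoffs above define a simple triage rule. The sketch below is illustrative only: the 3- and 8-week thresholds come from the text, while the function itself and its labels are assumptions.

```python
# Hedged sketch of the duration-based classification of cough described above.
# Only the 3- and 8-week cutoffs are from the text; the function is illustrative.

def classify_cough_duration(weeks):
    """Classify a cough by duration in weeks."""
    if weeks < 3:
        return "acute"      # most often infection, aspiration, or inhaled irritants
    if weeks <= 8:
        return "subacute"   # common residuum of tracheobronchitis (e.g., postviral)
    return "chronic"        # broad cardiopulmonary differential; merits workup

for w in (2, 5, 12):
    print(w, classify_cough_duration(w))
# 2 acute
# 5 subacute
# 12 chronic
```

The boundary cases matter clinically: a cough entering its ninth week moves from the self-limited subacute category into the chronic category, where the broader differential discussed below applies.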
When initial assessment with chest examination and radiography is normal, cough-variant asthma, gastroesophageal reflux, nasopharyngeal drainage, and medications (angiotensin-converting enzyme [ACE] inhibitors) are the most common causes of chronic cough. Details as to the sound, the time of occurrence during the day, and the pattern of coughing infrequently provide useful etiologic clues. Regardless of cause, cough often worsens upon first lying down at night, with talking, or with the hyperpnea of exercise; it frequently improves with sleep. An exception may involve the cough that occurs only with certain allergic exposures or exercise in cold air, as in asthma. Useful historical questions include what circumstances surround the onset of cough, what makes the cough better or worse, and whether or not the cough produces sputum. The physical examination seeks clues suggesting the presence of cardiopulmonary disease, including findings such as wheezing or crackles on chest examination. Examination of the auditory canals and tympanic membranes (for irritation of the latter resulting in stimulation of Arnold’s nerve), the nasal passageways (for rhinitis or polyps), and the nails (for clubbing) may also provide etiologic clues. Because cough can be a manifestation of a systemic disease such as sarcoidosis or vasculitis, a thorough general examination is equally important. In virtually all instances, evaluation of chronic cough merits a chest radiograph. The list of diseases that can cause persistent cough without other symptoms and without detectable abnormalities on physical examination is long. It includes serious illnesses such as sarcoidosis or Hodgkin’s disease in young adults, lung cancer in older patients, and (worldwide) pulmonary tuberculosis. An abnormal chest film prompts an evaluation aimed at explaining the cough. In a patient with chronic productive cough, examination of expectorated sputum is warranted. 
Purulent-appearing sputum should be sent for routine bacterial culture and, in certain circumstances, mycobacterial culture as well. Cytologic examination of mucoid sputum may be useful to assess for malignancy and to distinguish neutrophilic from eosinophilic bronchitis. Expectoration of blood—whether streaks of blood, blood mixed with airway secretions, or pure blood—deserves a special approach to assessment and management (see “Hemoptysis,” below). It is commonly held that (alone or in combination) the use of an ACE inhibitor; postnasal drainage; gastroesophageal reflux; and asthma account for more than 90% of cases of chronic cough with a normal or noncontributory chest radiograph. However, clinical experience does not support this contention, and strict adherence to this concept discourages the search for alternative explanations by both clinicians and researchers. ACE inhibitor–induced cough occurs in 5–30% of patients taking these agents and is not dose dependent. ACE metabolizes bradykinin and other tachykinins, such as substance P. The mechanism of ACE inhibitor–associated cough may involve sensitization of sensory nerve endings due to accumulation of bradykinin. In support of this hypothesis, polymorphisms in the neurokinin-2 receptor gene are associated with ACE inhibitor–induced cough. Any patient with chronic unexplained cough who is taking an ACE inhibitor should have a trial period off the medication, regardless of the timing of the onset of cough relative to the initiation of ACE inhibitor therapy. In most instances, a safe alternative is available; angiotensin-receptor blockers do not cause cough. Failure to observe a decrease in cough after 1 month off medication argues strongly against this etiology.
Postnasal drainage of any etiology can cause cough as a response to stimulation of sensory receptors of the cough-reflex pathway in the hypopharynx or aspiration of draining secretions into the trachea. Clues suggesting this etiology include postnasal drip, frequent throat clearing, and sneezing and rhinorrhea. On speculum examination of the nose, excess mucoid or purulent secretions, inflamed and edematous nasal mucosa, and/or polyps may be seen; in addition, secretions or a cobblestoned appearance of the mucosa along the posterior pharyngeal wall may be noted. Unfortunately, there is no means by which to quantitate postnasal drainage. In many instances, this diagnosis must rely on subjective information provided by the patient. This assessment must also be counterbalanced by the fact that many people who have chronic postnasal drainage do not experience cough. Linking gastroesophageal reflux to chronic cough poses similar challenges. It is thought that reflux of gastric contents into the lower esophagus may trigger cough via reflex pathways initiated in the esophageal mucosa. Reflux to the level of the pharynx (laryngopharyngeal reflux), with consequent aspiration of gastric contents, causes a chemical bronchitis and possibly pneumonitis that can elicit cough for days afterward. Retrosternal burning after meals or on recumbency, frequent eructation, hoarseness, and throat pain may be indicative of gastroesophageal reflux. Nevertheless, reflux may also elicit minimal or no symptoms. Glottic inflammation detected on laryngoscopy may be a manifestation of recurrent reflux to the level of the throat, but it is a nonspecific finding. Quantification of the frequency and level of reflux requires a somewhat invasive procedure to measure esophageal pH directly (either nasopharyngeal placement of a catheter with a pH probe into the esophagus for 24 h or endoscopic placement of a radio-transmitter capsule into the esophagus). 
The precise interpretation of test results that permits an etiologic linking of reflux events and cough remains debated. Again, assigning the cause of cough to gastroesophageal reflux must be weighed against the observation that many people with symptomatic reflux do not experience chronic cough. Cough alone as a manifestation of asthma is common among children but not among adults. Cough due to asthma in the absence of wheezing, shortness of breath, and chest tightness is referred to as “cough-variant asthma.” A history suggestive of cough-variant asthma ties the onset of cough to exposure to typical triggers for asthma and the resolution of cough to discontinuation of exposure. Objective testing can establish the diagnosis of asthma (airflow obstruction on spirometry that varies over time or reverses in response to a bronchodilator) or exclude it with certainty (a negative response to a bronchoprovocation challenge—e.g., with methacholine). In a patient capable of taking reliable measurements, home expiratory peak flow monitoring can be a cost-effective method to support or discount a diagnosis of asthma. Chronic eosinophilic bronchitis causes chronic cough with a normal chest radiograph. This condition is characterized by sputum eosinophilia in excess of 3% without airflow obstruction or bronchial hyperresponsiveness and is successfully treated with inhaled glucocorticoids. Treatment of chronic cough in a patient with a normal chest radiograph is often empirical and is targeted at the most likely cause(s) of cough as determined by history, physical examination, and possibly pulmonary-function testing. Therapy for postnasal drainage depends on the presumed etiology (infection, allergy, or vasomotor rhinitis) and may include systemic antihistamines; antibiotics; nasal saline irrigation; and nasal pump sprays with glucocorticoids, antihistamines, or anticholinergics.
Antacids, histamine type 2 (H2) receptor antagonists, and proton-pump inhibitors are used to neutralize or decrease the production of gastric acid in gastroesophageal reflux disease; dietary changes, elevation of the head and torso during sleep, and medications to improve gastric emptying are additional therapeutic measures. Cough-variant asthma typically responds well to inhaled glucocorticoids and intermittent use of inhaled β-agonist bronchodilators. Patients who fail to respond to treatment targeting the common causes of chronic cough or who have had these causes excluded by appropriate diagnostic testing should undergo chest CT. Diseases causing cough that may be missed on chest x-ray include tumors, early interstitial lung disease, bronchiectasis, and atypical mycobacterial pulmonary infection. On the other hand, patients with chronic cough who have normal findings on chest examination, lung function testing, oxygenation assessment, and chest CT can be reassured as to the absence of serious pulmonary pathology. Chronic idiopathic cough, also called cough hypersensitivity syndrome, is distressingly common. It is often experienced as a tickle or sensitivity in the throat, occurs more often in women, and is typically “dry” or at most productive of scant amounts of mucoid sputum. It can be exhausting, interfere with work, and cause social embarrassment. Once serious underlying cardiopulmonary pathology has been excluded, an attempt at cough suppression is appropriate. Most effective are narcotic cough suppressants, such as codeine or hydrocodone, which are thought to act in the “cough center” in the brainstem. The tendency of narcotic cough suppressants to cause drowsiness and constipation and their potential for addictive dependence limit their appeal for long-term use. Dextromethorphan is an over-the-counter, centrally acting cough suppressant with fewer side effects and less efficacy than the narcotic cough suppressants.
Dextromethorphan is thought to have a different site of action than narcotic cough suppressants and can be used in combination with them if necessary. Benzonatate is thought to inhibit neural activity of sensory nerves in the cough-reflex pathway. It is generally free of side effects; however, its effectiveness in suppressing cough is variable and unpredictable. Case series have reported benefit from off-label use of gabapentin or amitriptyline for chronic idiopathic cough. Novel cough suppressants without the limitations of currently available agents are greatly needed. Approaches that are being explored include the development of neurokinin receptor antagonists, type 1 vanilloid receptor antagonists, and novel opioid and opioid-like receptor agonists.

HEMOPTYSIS
Hemoptysis, the expectoration of blood from the respiratory tract, can arise at any location from the alveoli to the glottis. It is important to distinguish hemoptysis from epistaxis (bleeding from the nasopharynx) and hematemesis (bleeding from the upper gastrointestinal tract). Hemoptysis can range from the expectoration of blood-tinged sputum to that of life-threatening large volumes of bright red blood. For most patients, any degree of hemoptysis can cause anxiety and often prompts medical evaluation. While precise epidemiologic data are lacking, the most common etiology of hemoptysis is infection of the medium-sized airways. In the United States, the cause is usually viral or bacterial bronchitis. Hemoptysis can arise in the setting of acute bronchitis or during an exacerbation of chronic bronchitis. Worldwide, the most common cause of hemoptysis is infection with Mycobacterium tuberculosis, presumably because of the high prevalence of tuberculosis and its predilection for cavity formation. While these are the most common causes, the differential diagnosis for hemoptysis is extensive, and a step-wise approach to evaluation is appropriate.
One way to approach the source of hemoptysis is to search systematically for potential sites of bleeding from the alveolus to the mouth. Diffuse bleeding in the alveolar space, often referred to as diffuse alveolar hemorrhage (DAH), may present as hemoptysis. Causes of DAH can be inflammatory or noninflammatory. Inflammatory DAH is due to small-vessel vasculitis/capillaritis from a variety of diseases, including granulomatosis with polyangiitis and microscopic polyangiitis. Similarly, systemic autoimmune diseases such as systemic lupus erythematosus can manifest as pulmonary capillaritis. Antibodies to the alveolar basement membrane, as are seen in Goodpasture’s disease, can also result in alveolar hemorrhage. In the early period after bone marrow transplantation, patients can develop a form of inflammatory DAH that can be catastrophic and life-threatening. The exact pathophysiology of this process is not well understood, but DAH should be suspected in patients with sudden-onset dyspnea and hypoxemia in the first 100 days after bone marrow transplantation. Alveoli can also bleed due to direct inhalational injury, including thermal injury from fires, inhalation of illicit substances (e.g., cocaine), and inhalation of toxic chemicals. If alveoli are irritated from any process, patients with thrombocytopenia, coagulopathy, or antiplatelet or anticoagulant use will be at increased risk of hemoptysis. Bleeding in hemoptysis most commonly arises from the small- to medium-sized airways. Irritation and injury of the bronchial mucosa can lead to small-volume bleeding. More significant hemoptysis can result from the proximity of the bronchial artery and vein to the airway, with these vessels and the bronchus running together in what is often referred to as the bronchovascular bundle. In the smaller airways, these blood vessels are close to the airspace, and lesser degrees of inflammation or injury can therefore result in their rupture into the airways.
While alveolar hemorrhage arises from capillaries that are part of the low-pressure pulmonary circulation, bronchial bleeding generally originates from bronchial arteries, which are under systemic pressure and thus are predisposed to larger-volume bleeding. Any infection of the airways can result in hemoptysis, although acute bronchitis is most commonly caused by viral infection. In patients with a history of chronic bronchitis, bacterial superinfection with organisms such as Streptococcus pneumoniae, Haemophilus influenzae, or Moraxella catarrhalis can also result in hemoptysis. Patients with bronchiectasis (a permanent dilation of the airways with loss of mucosal integrity) are particularly prone to hemoptysis due to chronic inflammation and anatomic abnormalities that bring the bronchial arteries closer to the mucosal surface. One common presentation of patients with advanced cystic fibrosis—the prototypical bronchiectatic lung disease—is hemoptysis, which can be life-threatening. Pneumonias of any sort can cause hemoptysis. Tuberculous infection, which can lead to bronchiectasis or cavitary pneumonia, is a very common cause of hemoptysis worldwide. Patients may present with a chronic cough productive of blood-streaked sputum or with larger-volume bleeding. Rasmussen’s aneurysm (the dilation of a pulmonary artery in a cavity formed by previous tuberculous infection) remains a source of massive, life-threatening hemoptysis in the developing world. Community-acquired pneumonia and lung abscess can also result in bleeding. Once again, if the infection results in cavitation, there is a greater likelihood of bleeding due to erosion into blood vessels. Infections with Staphylococcus aureus and gram-negative rods (e.g., Klebsiella pneumoniae) are especially likely to cause necrotizing lung infections and thus to be associated with hemoptysis. 
While not common in North America, pulmonary paragonimiasis (i.e., infection with the lung fluke Paragonimus westermani) often presents as fever, cough, and hemoptysis. This infection is a public health issue in Southeast Asia and China and is frequently confused with active tuberculosis, in which the clinical picture can be similar. Paragonimiasis should be considered in recent immigrants from endemic areas who have new or recurrent hemoptysis. In addition, pulmonary paragonimiasis has been reported secondary to ingestion of crayfish or small crabs in the United States. Other causes of airway irritation resulting in hemoptysis include inhalation of toxic chemicals, thermal injury, and direct trauma from suctioning of the airways (particularly in intubated patients). All of these etiologies should be considered in light of the individual patient’s history and exposures. Perhaps the most feared cause of hemoptysis is bronchogenic lung cancer, although hemoptysis is a presenting symptom in only ∼10% of patients. Cancers arising in the proximal airways are much more likely to cause hemoptysis, but any malignancy in the chest can do so. Because both squamous cell carcinomas and small-cell carcinomas more commonly arise in or adjacent to the proximal airways and are large at presentation, they are more often a cause of hemoptysis. These cancers can present with large-volume and life-threatening hemoptysis because of erosion into the hilar vessels. Carcinoid tumors, which are found almost exclusively as endobronchial lesions with friable mucosa, can also present with hemoptysis. In addition to cancers arising in the lung, metastatic disease in the pulmonary parenchyma can bleed. Malignancies that commonly metastasize to the lungs include renal cell, breast, colon, testicular, and thyroid cancers as well as melanoma.
While hemoptysis is not a common manifestation of pulmonary metastases, the combination of multiple pulmonary nodules and hemoptysis should raise suspicion of this etiology. Finally, disease of the pulmonary vasculature can cause hemoptysis. Perhaps most frequently, congestive heart failure with transmission of elevated left atrial pressures can lead to rupture of small alveolar capillaries. These patients rarely present with bright red blood but more commonly have pink, frothy sputum or blood-tinged secretions. Patients with a focal jet of mitral regurgitation can present with an upper-lobe opacity on chest radiography together with hemoptysis. This finding is thought to be due to focal increases in pulmonary capillary pressure due to the regurgitant jet. Pulmonary arteriovenous malformations are prone to bleeding. Pulmonary embolism can also lead to the development of hemoptysis, which is generally associated with pulmonary infarction. Pulmonary arterial hypertension from other causes rarely results in hemoptysis. As with most signs of possible illness, the initial step in the evaluation of hemoptysis is a thorough history and physical examination (Fig. 48-2). As already mentioned, initial questioning should focus on ascertaining whether the bleeding is truly from the respiratory tract and not the nasopharynx or gastrointestinal tract; bleeding from the latter sources requires different approaches to evaluation and treatment. History and Physical Examination The specific characteristics of hemoptysis may be helpful in determining an etiology, such as whether the expectorated material consists of blood-tinged, purulent secretions; pink, frothy sputum; or pure blood. Information on specific triggers of the bleeding (e.g., recent inhalation exposures) as well as any previous episodes of hemoptysis should be elicited during history-taking.
Monthly hemoptysis in a woman suggests catamenial hemoptysis from pulmonary endometriosis. Moreover, the volume of blood expectorated is important not only in determining the cause but also in gauging the urgency for further diagnostic and therapeutic maneuvers. Patients rarely exsanguinate from hemoptysis but can effectively “drown” in aspirated blood. Large-volume hemoptysis, referred to as massive hemoptysis, is variably defined as hemoptysis of >200–600 mL in 24 h. Massive hemoptysis should be considered a medical emergency. All patients should be asked about current or former cigarette smoking; this behavior predisposes to chronic bronchitis and increases the likelihood of bronchogenic cancer. Practitioners should inquire about symptoms and signs suggestive of respiratory tract infection (including fever, chills, and dyspnea), recent inhalation exposures, recent use of illicit substances, and risk factors for venous thromboembolism. A medical history of malignancy or treatment thereof, rheumatologic disease, vascular disease, or underlying lung disease (e.g., bronchiectasis) may be relevant to the cause of hemoptysis. Because many causes of DAH can be part of a pulmonary-renal syndrome, specific inquiry into a history of renal insufficiency is important. The physical examination begins with an assessment of vital signs and oxygen saturation to gauge whether there is evidence of life-threatening bleeding. Tachycardia, hypotension, and decreased oxygen saturation mandate a more expedited evaluation of hemoptysis. A specific focus on respiratory and cardiac examinations is important; these examinations should include inspection of the nares, auscultation of the lungs and heart, assessment of the lower extremities for symmetric or asymmetric edema, and evaluation for jugular venous distention. Clubbing of the digits may suggest underlying lung diseases such as bronchogenic carcinoma or bronchiectasis, which predispose to hemoptysis. 
Similarly, mucocutaneous telangiectasias should raise suspicion of pulmonary arteriovenous malformations.

Diagnostic Evaluation For most patients, the next step in the evaluation of hemoptysis should be a standard chest radiograph. If a source of bleeding is not identified on plain film, CT of the chest should be performed. CT allows better delineation of bronchiectasis, alveolar filling, cavitary infiltrates, and masses than does chest radiography. The practitioner should consider a CT protocol to assess for pulmonary embolism if the history or examination suggests venous thromboembolism as a cause of bleeding.

FIGURE 48-2 Decision tree for the evaluation of hemoptysis. The algorithm begins with a history and physical examination and quantification of the amount of bleeding (mild, moderate, or massive) after other sources (oropharynx, gastrointestinal tract) have been ruled out. Mild bleeding without risk factors (smoking, age >40) is managed by treating the underlying disease (usually infection), with CT and then bronchoscopy if unrevealing; risk factors, recurrent bleeding, or moderate hemoptysis prompts CXR, CBC, coagulation studies, UA, creatinine, CT scanning, and bronchoscopy; massive hemoptysis requires securing the airway and treating the underlying disease, with embolization or resection for persistent bleeding. CBC, complete blood count; CT, computed tomography; CXR, chest x-ray; UA, urinalysis.

Laboratory studies should include a complete blood count to assess both the hematocrit and the platelet count, as well as coagulation studies. Renal function should be evaluated and urinalysis conducted because of the possibility of pulmonary-renal syndromes presenting with hemoptysis. The documentation of acute renal insufficiency or the detection of red blood cells or their casts on urinalysis should elevate suspicion of small-vessel vasculitis, and studies such as antineutrophil cytoplasmic antibody, antiglomerular basement membrane antibody, and antinuclear antibody should be considered.
If a patient is producing sputum, Gram's and acid-fast staining as well as culture should be undertaken. If all of these studies are unrevealing, bronchoscopy should be considered. In any patient with a history of cigarette smoking, airway inspection should be part of the evaluation of new-onset hemoptysis, as endobronchial lesions are not reliably visualized on CT.

For the most part, the treatment of hemoptysis varies with its etiology. However, large-volume, life-threatening hemoptysis generally requires immediate intervention regardless of the cause. The first step is to establish a patent airway, usually by endotracheal intubation and subsequent mechanical ventilation. As large-volume hemoptysis usually arises from an airway lesion, it is ideal to identify the site of bleeding by either chest imaging or bronchoscopy (more commonly rigid rather than flexible). The goals are then to isolate the bleeding to one lung and to prevent the preserved airspaces in the other lung from filling with blood, which would further impair gas exchange. Patients should be placed with the bleeding lung in a dependent position (i.e., bleeding side down), and, if possible, a dual-lumen endotracheal tube or an airway blocker should be placed in the proximal airway of the bleeding lung. These interventions generally require the assistance of anesthesiologists, interventional pulmonologists, or thoracic surgeons. If the bleeding does not stop with treatment of the underlying cause and the passage of time, severe hemoptysis from bronchial arteries can be treated with angiographic embolization of the responsible bronchial artery. This intervention should be entertained only in the most severe and life-threatening cases of hemoptysis because of the risk of unintentional spinal-artery embolization and consequent paraplegia. Endobronchial lesions can be treated with a variety of bronchoscopically directed interventions, including cauterization and laser therapy.
In extreme circumstances, surgical resection of the affected region of the lung is considered. Most cases of hemoptysis resolve with treatment of the infection or inflammatory process or with removal of the offending stimulus.

HYPOXIA
The fundamental purpose of the cardiorespiratory system is to deliver O2 and nutrients to cells and to remove CO2 and other metabolic products from them. Proper maintenance of this function depends not only on intact cardiovascular and respiratory systems, but also on an adequate number of red blood cells and hemoglobin and a supply of inspired gas containing adequate O2.

Decreased O2 availability to cells results in an inhibition of oxidative phosphorylation and increased anaerobic glycolysis. This switch from aerobic to anaerobic metabolism, the Pasteur effect, maintains some, albeit reduced, adenosine 5′-triphosphate (ATP) production. In severe hypoxia, when ATP production is inadequate to meet the energy requirements of ionic and osmotic equilibrium, cell membrane depolarization leads to uncontrolled Ca2+ influx and activation of Ca2+-dependent phospholipases and proteases. These events, in turn, cause cell swelling, activation of apoptotic pathways, and, ultimately, cell death. The adaptations to hypoxia are mediated, in part, by the upregulation of genes encoding a variety of proteins, including glycolytic enzymes, such as phosphoglycerate kinase and phosphofructokinase, as well as the glucose transporters Glut-1 and Glut-2; and by growth factors, such as vascular endothelial growth factor (VEGF) and erythropoietin, which enhance erythrocyte production. The hypoxia-induced increase in expression of these key proteins is governed by the hypoxia-sensitive transcription factor, hypoxia-inducible factor-1 (HIF-1).

During hypoxia, systemic arterioles dilate, at least in part, by opening of KATP channels in vascular smooth-muscle cells due to the hypoxia-induced reduction in ATP concentration.
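The metabolic penalty of the Pasteur effect described above can be made concrete with standard stoichiometry; the ATP yields below are conventional textbook approximations, not figures from this chapter:

```latex
% Approximate ATP yield per molecule of glucose (standard values)
\text{Aerobic:}\quad \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \qquad (\approx 30\text{--}32\ \mathrm{ATP})
\text{Anaerobic:}\quad \mathrm{C_6H_{12}O_6} \longrightarrow 2\ \text{lactate} + 2\,\mathrm{H^+} \qquad (2\ \mathrm{ATP})
```

Anaerobic glycolysis thus preserves only roughly 6–7% of the aerobic ATP yield per glucose, which is why glycolytic flux must rise sharply, and why lactic acid accumulates, when oxidative phosphorylation is inhibited.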
By contrast, in pulmonary vascular smooth-muscle cells, inhibition of K+ channels causes depolarization, which, in turn, activates voltage-gated Ca2+ channels, raising the cytosolic [Ca2+] and causing smooth-muscle cell contraction. Hypoxia-induced pulmonary arterial constriction shunts blood away from poorly ventilated portions toward better ventilated portions of the lung; however, it also increases pulmonary vascular resistance and right ventricular afterload.

Effects on the Central Nervous System Changes in the central nervous system (CNS), particularly the higher centers, are especially important consequences of hypoxia. Acute hypoxia causes impaired judgment, motor incoordination, and a clinical picture resembling acute alcohol intoxication. High-altitude illness is characterized by headache secondary to cerebral vasodilation, gastrointestinal symptoms, dizziness, insomnia, fatigue, or somnolence. Pulmonary arterial and sometimes venous constriction causes capillary leakage and high-altitude pulmonary edema (HAPE) (Chap. 47e), which intensifies hypoxia, further promoting vasoconstriction. Rarely, high-altitude cerebral edema (HACE) develops, which is manifest by severe headache and papilledema and can cause coma. As hypoxia becomes more severe, the regulatory centers of the brainstem are affected, and death usually results from respiratory failure.

Effects on the Cardiovascular System Acute hypoxia stimulates the chemoreceptor reflex arc to induce venoconstriction and systemic arterial vasodilation. These acute changes are accompanied by transiently increased myocardial contractility, which is followed by depressed myocardial contractility with prolonged hypoxia.

CAUSES OF HYPOXIA
Respiratory Hypoxia When hypoxia occurs from respiratory failure, Pao2 declines, and when respiratory failure is persistent, the hemoglobin-oxygen (Hb-O2) dissociation curve (see Fig. 127-2) is displaced to the right, with greater quantities of O2 released at any level of tissue Po2.
Arterial hypoxemia, i.e., a reduction of O2 saturation of arterial blood (Sao2), and consequent cyanosis are likely to be more marked when such depression of Pao2 results from pulmonary disease than when the depression occurs as the result of a decline in the fraction of oxygen in inspired air (Fio2). In this latter situation, Paco2 falls secondary to anoxia-induced hyperventilation and the Hb-O2 dissociation curve is displaced to the left, limiting the decline in Sao2 at any level of Pao2. The most common cause of respiratory hypoxia is ventilation-perfusion mismatch resulting from perfusion of poorly ventilated alveoli. Respiratory hypoxemia may also be caused by hypoventilation, in which case it is associated with an elevation of Paco2 (Chap. 306e). These two forms of respiratory hypoxia are usually correctable by inspiring 100% O2 for several minutes. A third cause of respiratory hypoxia is shunting of blood across the lung from the pulmonary arterial to the venous bed (intrapulmonary right-to-left shunting) by perfusion of nonventilated portions of the lung, as in pulmonary atelectasis, or through pulmonary arteriovenous connections. The low Pao2 in this situation is only partially corrected by an Fio2 of 100%.

Hypoxia Secondary to High Altitude As one ascends rapidly to 3000 m (~10,000 ft), the reduction in the partial pressure of O2 in inspired air leads to a decrease in alveolar Po2 to approximately 60 mmHg, and a condition termed high-altitude illness develops (see above). At higher altitudes, arterial saturation declines rapidly and symptoms become more serious; at 5000 m, unacclimated individuals usually cease to be able to function normally owing to the changes in CNS function described above.
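The alveolar Po2 figure cited above can be checked with the alveolar gas equation. The values used here — barometric pressure at 3000 m (~526 mmHg), water-vapor pressure (47 mmHg at body temperature), respiratory quotient (R ≈ 0.8), and a hyperventilation-lowered Paco2 (~30 mmHg) — are standard assumed figures, not numbers given in the text:

```latex
P_{\mathrm{A_{O_2}}} = F_{\mathrm{I_{O_2}}}\,(P_{\mathrm{B}} - P_{\mathrm{H_2O}}) - \frac{P_{\mathrm{a_{CO_2}}}}{R}
\approx 0.21\,(526 - 47) - \frac{30}{0.8} \approx 101 - 38 \approx 63\ \mathrm{mmHg}
```

This agrees with the ~60-mmHg alveolar Po2 cited above. Note that Fio2 itself remains 0.21 at altitude; it is the barometric pressure term that falls.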
Hypoxia Secondary to Right-to-Left Extrapulmonary Shunting From a physiologic viewpoint, this cause of hypoxia resembles intrapulmonary right-to-left shunting but is caused by congenital cardiac malformations, such as tetralogy of Fallot, transposition of the great arteries, and Eisenmenger's syndrome (Chap. 282). As in pulmonary right-to-left shunting, the Pao2 cannot be restored to normal with inspiration of 100% O2.

Anemic Hypoxia A reduction in hemoglobin concentration of the blood is accompanied by a corresponding decline in the O2-carrying capacity of the blood. Although the Pao2 is normal in anemic hypoxia, the absolute quantity of O2 transported per unit volume of blood is diminished. As the anemic blood passes through the capillaries and the usual quantity of O2 is removed from it, the Po2 and saturation in the venous blood decline to a greater extent than normal.

Carbon Monoxide (CO) Intoxication (See also Chap. 472e) Hemoglobin that binds with CO (carboxyhemoglobin, COHb) is unavailable for O2 transport. In addition, the presence of COHb shifts the Hb-O2 dissociation curve to the left (see Fig. 127-2) so that O2 is unloaded only at lower tensions, further contributing to tissue hypoxia.

Circulatory Hypoxia As in anemic hypoxia, the Pao2 is usually normal, but venous and tissue Po2 values are reduced as a consequence of reduced tissue perfusion and greater tissue O2 extraction. This pathophysiology leads to an increased arterial–mixed venous O2 difference (a-v O2 difference), or gradient. Generalized circulatory hypoxia occurs in heart failure (Chap. 279) and in most forms of shock (Chap. 324).

Specific Organ Hypoxia Localized circulatory hypoxia may occur as a result of decreased perfusion secondary to arterial obstruction, as in localized atherosclerosis in any vascular bed, or as a consequence of vasoconstriction, as observed in Raynaud's phenomenon (Chap. 302).
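The anemic and CO cases above both reduce O2 delivery despite a normal Pao2, which the standard arterial O2-content relationship makes explicit. The binding capacity of 1.34 mL O2 per gram of hemoglobin and the solubility coefficient of 0.0031 mL O2/dL per mmHg are conventional values, not figures from this chapter:

```latex
C_{\mathrm{a_{O_2}}} = 1.34 \times [\mathrm{Hb}] \times S_{\mathrm{a_{O_2}}} + 0.0031 \times P_{\mathrm{a_{O_2}}}
```

For example, at Sao2 = 0.97 and Pao2 = 100 mmHg, halving [Hb] from 15 to 7.5 g/dL cuts Cao2 from about 19.8 to about 10.1 mL O2/dL even though Pao2 is untouched; COHb acts similarly by removing hemoglobin from the effective [Hb] term.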
Localized hypoxia may also result from venous obstruction and the resultant expansion of interstitial fluid causing arteriolar compression and, thereby, reduction of arterial inflow. Edema, which increases the distance through which O2 must diffuse before it reaches cells, can also cause localized hypoxia. In an attempt to maintain adequate perfusion to more vital organs in patients with reduced cardiac output secondary to heart failure or hypovolemic shock, vasoconstriction may reduce perfusion in the limbs and skin, causing hypoxia of these regions.

Increased O2 Requirements If the O2 consumption of tissues is elevated without a corresponding increase in perfusion, tissue hypoxia ensues and the Po2 in venous blood declines. Ordinarily, the clinical picture of patients with hypoxia due to an elevated metabolic rate, as in fever or thyrotoxicosis, is quite different from that in other types of hypoxia: the skin is warm and flushed owing to increased cutaneous blood flow that dissipates the excessive heat produced, and cyanosis is usually absent. Exercise is a classic example of increased tissue O2 requirements. These increased demands are normally met by several mechanisms operating simultaneously: (1) an increase in the cardiac output and ventilation and, thus, O2 delivery to the tissues; (2) a preferential shift in blood flow to the exercising muscles by changing vascular resistances in the circulatory beds of exercising tissues, directly and/or reflexly; (3) an increase in O2 extraction from the delivered blood and a widening of the arteriovenous O2 difference; and (4) a reduction in the pH of the tissues and capillary blood, shifting the Hb-O2 curve to the right (see Fig. 127-2) and unloading more O2 from hemoglobin. If the capacity of these mechanisms is exceeded, then hypoxia, especially of the exercising muscles, will result.

Improper Oxygen Utilization Cyanide (Chap.
473e) and several other similarly acting poisons cause cellular hypoxia. The tissues are unable to use O2, and, as a consequence, the venous blood tends to have a high O2 tension. This condition has been termed histotoxic hypoxia. An important component of the respiratory response to hypoxia originates in special chemosensitive cells in the carotid and aortic bodies and in the respiratory center in the brainstem. The stimulation of these cells by hypoxia increases ventilation, with a loss of CO2, and can lead to respiratory alkalosis. When combined with the metabolic acidosis resulting from the production of lactic acid, the serum bicarbonate level declines (Chap. 66). With the reduction of Pao2, cerebrovascular resistance decreases and cerebral blood flow increases in an attempt to maintain O2 delivery to the brain. However, when the reduction of Pao2 is accompanied by hyperventilation and a reduction of Paco2, cerebrovascular resistance rises, cerebral blood flow falls, and tissue hypoxia intensifies. The diffuse, systemic vasodilation that occurs in generalized hypoxia increases the cardiac output. In patients with underlying heart disease, the requirements of peripheral tissues for an increase of cardiac output with hypoxia may precipitate congestive heart failure. In patients with ischemic heart disease, a reduced Pao2 may intensify myocardial ischemia and further impair left ventricular function. One of the important compensatory mechanisms for chronic hypoxia is an increase in the hemoglobin concentration and in the number of red blood cells in the circulating blood, i.e., the development of polycythemia secondary to erythropoietin production (Chap. 131). In persons with chronic hypoxemia secondary to prolonged residence at a high altitude (>13,000 ft, 4200 m), a condition termed chronic mountain sickness develops. 
This disorder is characterized by a blunted respiratory drive, reduced ventilation, erythrocytosis, cyanosis, weakness, right ventricular enlargement secondary to pulmonary hypertension, and even stupor.

CYANOSIS
Cyanosis refers to a bluish color of the skin and mucous membranes resulting from an increased quantity of reduced hemoglobin (i.e., deoxygenated hemoglobin) or of hemoglobin derivatives (e.g., methemoglobin or sulfhemoglobin) in the small blood vessels of those tissues. It is usually most marked in the lips, nail beds, ears, and malar eminences. Cyanosis, especially if developed recently, is more commonly detected by a family member than by the patient. The florid skin characteristic of polycythemia vera (Chap. 131) must be distinguished from the true cyanosis discussed here. A cherry-colored flush, rather than cyanosis, is caused by COHb (Chap. 473e).

The degree of cyanosis is modified by the color of the cutaneous pigment and the thickness of the skin, as well as by the state of the cutaneous capillaries. The accurate clinical detection of the presence and degree of cyanosis is difficult, as proved by oximetric studies. In some instances, central cyanosis can be detected reliably when the Sao2 has fallen to 85%; in others, particularly in dark-skinned persons, it may not be detected until it has declined to 75%. In the latter case, examination of the mucous membranes in the oral cavity and the conjunctivae rather than examination of the skin is more helpful in the detection of cyanosis.

The increase in the quantity of reduced hemoglobin in the mucocutaneous vessels that produces cyanosis may be brought about either by an increase in the quantity of venous blood as a result of dilation of the venules (including precapillary venules) or by a reduction in the Sao2 in the capillary blood. In general, cyanosis becomes apparent when the concentration of reduced hemoglobin in capillary blood exceeds 40 g/L (4 g/dL).
It is the absolute, rather than the relative, quantity of reduced hemoglobin that is important in producing cyanosis. Thus, in a patient with severe anemia, the relative quantity of reduced hemoglobin in the venous blood may be very large when considered in relation to the total quantity of hemoglobin in the blood. However, since the concentration of the latter is markedly reduced, the absolute quantity of reduced hemoglobin may still be low, and, therefore, patients with severe anemia and even marked arterial desaturation may not display cyanosis. Conversely, the higher the total hemoglobin content, the greater the tendency toward cyanosis; thus, patients with marked polycythemia tend to be cyanotic at higher levels of Sao2 than patients with normal hematocrit values. Likewise, local passive congestion, which causes an increase in the total quantity of reduced hemoglobin in the vessels in a given area, may cause cyanosis. Cyanosis is also observed when nonfunctional hemoglobin, such as methemoglobin (hereditary or acquired) or sulfhemoglobin (Chap. 127), is present in blood.

Cyanosis may be subdivided into central and peripheral types. In central cyanosis, the Sao2 is reduced or an abnormal hemoglobin derivative is present, and the mucous membranes and skin are both affected. Peripheral cyanosis is due to a slowing of blood flow and abnormally great extraction of O2 from normally saturated arterial blood; it results from vasoconstriction and diminished peripheral blood flow, such as occurs in cold exposure, shock, congestive failure, and peripheral vascular disease. Often in these conditions, the mucous membranes of the oral cavity or those beneath the tongue may be spared. Clinical differentiation between central and peripheral cyanosis may not always be simple, and in conditions such as cardiogenic shock with pulmonary edema, there may be a mixture of both types.
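The absolute-quantity rule above can be made quantitative by applying the 40 g/L capillary reduced-hemoglobin threshold from the text at illustrative total-hemoglobin values; the specific saturations computed here are arithmetic examples, not figures from the text:

```latex
\text{Cyanosis threshold:}\quad [\mathrm{Hb_{reduced}}] > 40\ \mathrm{g/L}
\text{Normal, } [\mathrm{Hb}] = 150\ \mathrm{g/L}:\quad S_{\mathrm{O_2}} < \tfrac{150-40}{150} \approx 73\%
\text{Severe anemia, } [\mathrm{Hb}] = 60\ \mathrm{g/L}:\quad S_{\mathrm{O_2}} < \tfrac{60-40}{60} \approx 33\%
\text{Polycythemia, } [\mathrm{Hb}] = 200\ \mathrm{g/L}:\quad S_{\mathrm{O_2}} < \tfrac{200-40}{200} = 80\%
```

(Saturations here refer to mean capillary blood.) Hence the severely anemic patient may remain acyanotic despite profound desaturation, whereas the polycythemic patient appears cyanotic at a saturation most patients would tolerate without visible change.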
DIFFERENTIAL DIAGNOSIS
Central Cyanosis (Table 49-1) Decreased Sao2 results from a marked reduction in the Pao2. This reduction may be brought about by a decline in the Fio2 without sufficient compensatory alveolar hyperventilation to maintain alveolar Po2. Cyanosis usually becomes manifest in an ascent to an altitude of 4000 m (13,000 ft). Seriously impaired pulmonary function, through perfusion of unventilated or poorly ventilated areas of the lung or alveolar hypoventilation, is a common cause of central cyanosis (Chap. 306e).

TABLE 49-1 Causes of Cyanosis
Central cyanosis
  Inhomogeneity in pulmonary ventilation and perfusion (perfusion of hypoventilated alveoli)
  Impaired oxygen diffusion
  Anatomic shunts
    Certain types of congenital heart disease
    Pulmonary arteriovenous fistulas
    Multiple small intrapulmonary shunts
  Hemoglobin with low affinity for oxygen
  Hemoglobin abnormalities
    Methemoglobinemia—hereditary, acquired
    Sulfhemoglobinemia—acquired
    Carboxyhemoglobinemia (not true cyanosis)
Peripheral cyanosis
  Reduced cardiac output
  Cold exposure
  Redistribution of blood flow from extremities
  Arterial obstruction
  Venous obstruction

This condition may occur acutely, as in extensive pneumonia or pulmonary edema, or chronically, with chronic pulmonary diseases (e.g., emphysema). In the latter situation, secondary polycythemia is generally present and clubbing of the fingers (see below) may occur. Another cause of reduced Sao2 is shunting of systemic venous blood into the arterial circuit. Certain forms of congenital heart disease are associated with cyanosis on this basis (see above and Chap. 282).

Pulmonary arteriovenous fistulae may be congenital or acquired, solitary or multiple, microscopic or massive. The severity of cyanosis produced by these fistulae depends on their size and number. They occur with some frequency in hereditary hemorrhagic telangiectasia.
Sao2 reduction and cyanosis may also occur in some patients with cirrhosis, presumably as a consequence of pulmonary arteriovenous fistulae or portal vein–pulmonary vein anastomoses. In patients with cardiac or pulmonary right-to-left shunts, the presence and severity of cyanosis depend on the size of the shunt relative to the systemic flow as well as on the Hb-O2 saturation of the venous blood. With increased extraction of O2 from the blood by the exercising muscles, the venous blood returning to the right side of the heart is more unsaturated than at rest, and shunting of this blood intensifies the cyanosis. Secondary polycythemia occurs frequently in patients in this setting and contributes to the cyanosis. Cyanosis can be caused by small quantities of circulating methemoglobin (Hb Fe3+) and by even smaller quantities of sulfhemoglobin (Chap. 127); both of these hemoglobin derivatives impair oxygen delivery to the tissues. Although they are uncommon causes of cyanosis, these abnormal hemoglobin species should be sought by spectroscopy when cyanosis is not readily explained by malfunction of the circulatory or respiratory systems. Generally, digital clubbing does not occur with them. Peripheral Cyanosis Probably the most common cause of peripheral cyanosis is the normal vasoconstriction resulting from exposure to cold air or water. When cardiac output is reduced, cutaneous vasoconstriction occurs as a compensatory mechanism so that blood is diverted from the skin to more vital areas such as the CNS and heart, and cyanosis of the extremities may result even though the arterial blood is normally saturated. Arterial obstruction to an extremity, as with an embolus, or arteriolar constriction, as in cold-induced vasospasm (Raynaud’s phenomenon) (Chap. 302), generally results in pallor and coldness, and there may be associated cyanosis. 
Venous obstruction, as in thrombophlebitis or deep venous thrombosis, dilates the subpapillary venous plexuses and thereby intensifies cyanosis.

APPROACH TO THE PATIENT: Cyanosis
Certain features are important in arriving at the cause of cyanosis:
1. It is important to ascertain the time of onset of cyanosis. Cyanosis present since birth or infancy is usually due to congenital heart disease.
2. Central and peripheral cyanosis must be differentiated. Evidence of disorders of the respiratory or cardiovascular systems is helpful. Massage or gentle warming of a cyanotic extremity will increase peripheral blood flow and abolish peripheral, but not central, cyanosis.
3. The presence or absence of clubbing of the digits (see below) should be ascertained. The combination of cyanosis and clubbing is frequent in patients with congenital heart disease and right-to-left shunting and is seen occasionally in patients with pulmonary disease, such as lung abscess or pulmonary arteriovenous fistulae. In contrast, peripheral cyanosis or acutely developing central cyanosis is not associated with clubbed digits.
4. Pao2 and Sao2 should be determined, and, in patients with cyanosis in whom the mechanism is obscure, spectroscopic examination of the blood should be performed to look for abnormal types of hemoglobin (critical in the differential diagnosis of cyanosis).

CLUBBING
The selective bulbous enlargement of the distal segments of the fingers and toes due to proliferation of connective tissue, particularly on the dorsal surface, is termed clubbing; there is also increased sponginess of the soft tissue at the base of the clubbed nail.
Clubbing may be hereditary, idiopathic, or acquired and associated with a variety of disorders, including cyanotic congenital heart disease (see above), infective endocarditis, and a variety of pulmonary conditions (among them primary and metastatic lung cancer, bronchiectasis, asbestosis, sarcoidosis, lung abscess, cystic fibrosis, tuberculosis, and mesothelioma), as well as with some gastrointestinal diseases (including inflammatory bowel disease and hepatic cirrhosis). In some instances, it is occupational, e.g., in jackhammer operators.

Clubbing in patients with primary and metastatic lung cancer, mesothelioma, bronchiectasis, or hepatic cirrhosis may be associated with hypertrophic osteoarthropathy. In this condition, the subperiosteal formation of new bone in the distal diaphyses of the long bones of the extremities causes pain and symmetric arthritis-like changes in the shoulders, knees, ankles, wrists, and elbows. The diagnosis of hypertrophic osteoarthropathy may be confirmed by bone radiograph or magnetic resonance imaging (MRI). Although the mechanism of clubbing is unclear, it appears to be secondary to humoral substances that cause dilation of the vessels of the distal digits as well as growth factors released from platelet precursors in the digital circulation. In certain circumstances, clubbing is reversible, such as following lung transplantation for cystic fibrosis.

Chapter 50 Edema
Eugene Braunwald, Joseph Loscalzo

About one-third of total-body water is confined to the extracellular space. Approximately 75% of the latter is interstitial fluid, and the remainder is the plasma. The forces that regulate the disposition of fluid between these two components of the extracellular compartment frequently are referred to as the Starling forces.
The hydrostatic pressure within the capillaries and the colloid oncotic pressure in the interstitial fluid tend to promote movement of fluid from the vascular to the extravascular space. By contrast, the colloid oncotic pressure contributed by plasma proteins and the hydrostatic pressure within the interstitial fluid promote the movement of fluid into the vascular compartment. As a consequence, there is movement of water and diffusible solutes from the vascular space at the arteriolar end of the capillaries. Fluid is returned from the interstitial space into the vascular system at the venous end of the capillaries and by way of the lymphatics. These movements are usually balanced so that there is a steady state in the sizes of the intravascular and interstitial compartments, yet a large exchange between them occurs. However, if the capillary hydrostatic pressure is increased and/or the plasma oncotic pressure is reduced, a further net movement of fluid from the intravascular to the interstitial spaces will take place.

Edema is defined as a clinically apparent increase in the interstitial fluid volume, which develops when Starling forces are altered so that there is increased flow of fluid from the vascular system into the interstitium. Edema due to an increase in capillary pressure may result from an elevation of venous pressure caused by obstruction to venous and/or lymphatic drainage. An increase in capillary pressure may be generalized, as occurs in heart failure, or it may be localized to one extremity when venous pressure is elevated due to unilateral thrombophlebitis (see below). The Starling forces also may be imbalanced when the colloid oncotic pressure of the plasma is reduced owing to any factor that may induce hypoalbuminemia, as when large quantities of protein are lost in the urine such as in the nephrotic syndrome (see below), or when synthesis is reduced in a severe catabolic state.
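The balance of forces described above is conventionally summarized in the Starling relationship; the symbols below follow standard physiologic convention rather than any notation used in this chapter:

```latex
J_v = K_f\left[\,(P_c - P_i) - \sigma\,(\pi_c - \pi_i)\,\right]
```

Here J_v is the net filtration rate, K_f the filtration coefficient of the capillary wall, P_c and P_i the capillary and interstitial hydrostatic pressures, π_c and π_i the plasma and interstitial colloid oncotic pressures, and σ the reflection coefficient for protein. A rise in P_c (venous obstruction, heart failure), a fall in π_c (hypoalbuminemia), or a fall in σ (increased permeability from endothelial damage) each drives J_v positive, i.e., toward net filtration into the interstitium.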
Edema may also result from damage to the capillary endothelium, which increases its permeability and permits the transfer of proteins into the interstitial compartment. Injury to the capillary wall can result from drugs (see below), viral or bacterial agents, and thermal or mechanical trauma. Increased capillary permeability also may be a consequence of a hypersensitivity reaction and of immune injury. Damage to the capillary endothelium is presumably responsible for inflammatory edema, which is usually nonpitting, localized, and accompanied by other signs of inflammation—i.e., erythema, heat, and tenderness. In many forms of edema, despite the increase in extracellular fluid volume, the effective arterial blood volume, a parameter that represents the filling of the arterial tree and that effectively perfuses the tissues, is reduced. Underfilling of the arterial tree may be caused by a reduction of cardiac output and/or systemic vascular resistance, by pooling of blood in the splanchnic veins (as in cirrhosis), and by hypoalbuminemia (Fig. 50-1A). As a consequence of underfilling, a series of physiologic responses designed to restore the effective arterial volume to normal are set into motion. A key element of these responses is the renal retention of sodium and, therefore, water, thereby restoring effective arterial volume, but sometimes also leading to or intensifying edema. The diminished renal blood flow characteristic of states in which the effective arterial blood volume is reduced is translated by the renal juxtaglomerular cells (specialized myoepithelial cells surrounding the afferent arteriole) into a signal for increased renin release. Renin is an enzyme with a molecular mass of about 40,000 Da that acts on its substrate, angiotensinogen, an α2-globulin synthesized by the liver, to release angiotensin I, a decapeptide, which in turn is converted to angiotensin II (AII), an octapeptide. 
AII has generalized vasoconstrictor properties, particularly on the renal efferent arterioles. This action reduces the hydrostatic pressure in the peritubular capillaries, whereas the increased filtration fraction raises the colloid osmotic pressure in these vessels, thereby enhancing salt and water reabsorption in the proximal tubule as well as in the ascending limb of the loop of Henle.

The renin-angiotensin-aldosterone system (RAAS) operates as both a hormonal and a paracrine system. Its activation causes sodium and water retention and thereby contributes to edema formation. Blockade of the conversion of angiotensin I to AII and blockade of the AII receptor enhance sodium and water excretion and reduce many forms of edema. AII that enters the systemic circulation stimulates the production of aldosterone by the zona glomerulosa of the adrenal cortex. Aldosterone in turn enhances sodium reabsorption (and potassium excretion) by the collecting tubule, further favoring edema formation. In patients with heart failure, not only is aldosterone secretion elevated but the biologic half-life of the hormone is prolonged secondary to the depression of hepatic blood flow, which reduces its hepatic catabolism and increases further the plasma level of the hormone. Blockade of the action of aldosterone by spironolactone or eplerenone (aldosterone antagonists) or by amiloride (a blocker of epithelial sodium channels) often induces a moderate diuresis in edematous states.

Arginine Vasopressin (See also Chap. 404) The secretion of arginine vasopressin (AVP) occurs in response to increased intracellular osmolar concentration, and, by stimulating V2 receptors, AVP increases the reabsorption of free water in the distal tubules and collecting ducts of the kidneys, thereby increasing total-body water. Circulating AVP is elevated in many patients with heart failure secondary to a nonosmotic stimulus associated with decreased effective arterial volume and reduced compliance of the left atrium.
Such patients fail to show the normal reduction of AVP with a reduction of osmolality, contributing to edema formation and hyponatremia.

FIGURE 50-1 Clinical conditions in which a decrease in cardiac output (A) and systemic vascular resistance (B) cause arterial underfilling with resulting neurohumoral activation and renal sodium and water retention. Conditions in panel A include low-output heart failure, pericardial tamponade, and constrictive pericarditis; those in panel B include sepsis, cirrhosis, high-output cardiac failure, arteriovenous fistula, arterial vasodilators, and pregnancy. In addition to activating the neurohumoral axis, adrenergic stimulation causes renal vasoconstriction and enhances sodium and fluid transport by the proximal tubule epithelium. RAAS, renin-angiotensin-aldosterone system; SNS, sympathetic nervous system. (Modified from RW Schrier: Ann Intern Med 113:155, 1990.)

Endothelin This potent peptide vasoconstrictor is released by endothelial cells. Its concentration in the plasma is elevated in patients with severe heart failure and contributes to renal vasoconstriction, sodium retention, and edema.

Atrial distention causes release into the circulation of atrial natriuretic peptide (ANP), a polypeptide; a high-molecular-weight precursor of ANP is stored in secretory granules within atrial myocytes.
The closely related brain natriuretic peptide (pre-prohormone BNP) is stored primarily in ventricular myocytes and is released when ventricular diastolic pressure rises. Released ANP and BNP (which is derived from its precursor) bind to the natriuretic receptor-A, which causes: (1) excretion of sodium and water by augmenting glomerular filtration rate, inhibiting sodium reabsorption in the proximal tubule, and inhibiting release of renin and aldosterone; and (2) dilation of arterioles and venules by antagonizing the vasoconstrictor actions of AII, AVP, and sympathetic stimulation. Thus, elevated levels of natriuretic peptides have the capacity to oppose sodium retention in hypervolemic and edematous states. Although circulating levels of ANP and BNP are elevated in heart failure and in cirrhosis with ascites, the natriuretic peptides are not sufficiently potent to prevent edema formation. Indeed, in edematous states, resistance to the actions of natriuretic peptides may be increased, further reducing their effectiveness. Further discussion of the control of sodium and water balance is found in Chap. 64e.

A weight gain of several kilograms usually precedes overt manifestations of generalized edema, and a similar weight loss from diuresis can be induced in a slightly edematous patient before "dry weight" is achieved. Anasarca refers to gross, generalized edema. Ascites (Chap. 59) and hydrothorax refer to accumulation of excess fluid in the peritoneal and pleural cavities, respectively, and are considered special forms of edema. Edema is recognized by the persistence of an indentation of the skin after pressure; this is known as "pitting" edema. In its more subtle form, edema may be detected by noting that after the stethoscope is removed from the chest wall, the rim of the bell leaves an indentation on the skin of the chest for a few minutes.
Edema may be present when the ring on a finger fits more snugly than in the past or when a patient complains of difficulty putting on shoes, particularly in the evening. Edema may also be recognized by puffiness of the face, which is most readily apparent in the periorbital areas. The differences among the major causes of generalized edema are shown in Table 50-1. Cardiac, renal, hepatic, and nutritional disorders account for the majority of cases of generalized edema. Consequently, the differential diagnosis of generalized edema should be directed toward identifying or excluding these conditions.

Heart Failure (See also Chap. 279) In heart failure, the impaired systolic emptying of the ventricle(s) and/or the impairment of ventricular relaxation promotes an accumulation of blood in the venous circulation at the expense of the effective arterial volume. In addition, the heightened tone of the sympathetic nervous system causes renal vasoconstriction and reduction of glomerular filtration. In mild heart failure, a small increment of total blood volume may repair the deficit of effective arterial volume through the operation of Starling's law of the heart, in which an increase in ventricular diastolic volume promotes a more forceful contraction and may thereby maintain the cardiac output. However, if the cardiac disorder is more severe, sodium and water retention continue, and the increment in blood volume accumulates in the venous circulation, raising venous pressure and causing edema (Fig. 50-1).

PART 2 Cardinal Manifestations and Presentation of Diseases

TABLE 50-1 Principal Causes of Generalized Edema: History, Physical Examination, and Laboratory Findings
Cardiac — Physical examination: elevated jugular venous pressure, ventricular (S3) gallop; occasionally with displaced or dyskinetic apical pulse; peripheral cyanosis, cool extremities, small pulse pressure when severe.
Hepatic — Physical examination: frequently associated with ascites; jugular venous pressure normal or low; blood pressure lower than in renal or cardiac disease; one or more additional signs of chronic liver disease (jaundice, palmar erythema, Dupuytren's contracture, spider angiomata, male gynecomastia; asterixis and other signs of encephalopathy) may be present. Laboratory findings: if severe, reductions in serum albumin, cholesterol, other hepatic proteins (transferrin, fibrinogen); liver enzymes elevated, depending on the cause and acuity of liver injury; tendency toward hypokalemia, respiratory alkalosis; macrocytosis from folate deficiency.
Renal (CRF) — History: usually chronic; may be associated with uremic signs and symptoms, including decreased appetite, altered (metallic or fishy) taste, altered sleep pattern, difficulty concentrating, restless legs, or myoclonus; dyspnea can be present but is generally less prominent than in heart failure. Physical examination: elevated blood pressure; hypertensive retinopathy; nitrogenous fetor; pericardial friction rub in advanced cases with uremia. Laboratory findings: elevation of serum creatinine and cystatin C; albuminuria; hyperkalemia, metabolic acidosis, hyperphosphatemia, hypocalcemia, anemia (usually normocytic).
Renal (NS) — Laboratory findings: proteinuria (≥3.5 g/d); hypoalbuminemia; hypercholesterolemia; microscopic hematuria.
Abbreviations: CRF, chronic renal failure; NS, nephrotic syndrome.
Source: Modified from GM Chertow: Approach to the patient with edema, in Primary Cardiology, 2nd ed, E Braunwald, L Goldman (eds). Philadelphia, Saunders, 2003, pp 117–128.
The presence of heart disease, as manifested by cardiac enlargement and/or ventricular hypertrophy, together with evidence of cardiac failure, such as dyspnea, basilar rales, venous distention, and hepatomegaly, usually indicates that edema results from heart failure. Noninvasive tests such as echocardiography may be helpful in establishing the diagnosis of heart disease. The edema of heart failure typically occurs in the dependent portions of the body.

Edema of Renal Disease (See also Chap. 338) The edema that occurs during the acute phase of glomerulonephritis is characteristically associated with hematuria, proteinuria, and hypertension. Although some evidence supports the view that the fluid retention is due to increased capillary permeability, in most instances the edema results from primary retention of sodium and water by the kidneys owing to renal insufficiency. This state differs from most forms of heart failure in that it is characterized by a normal (or sometimes even increased) cardiac output. Patients with edema due to acute renal failure commonly have arterial hypertension as well as pulmonary congestion on chest roentgenogram, often without considerable cardiac enlargement, but they may not develop orthopnea. Patients with chronic renal failure may also develop edema due to primary renal retention of sodium and water.

Nephrotic Syndrome and Other Hypoalbuminemic States The primary alteration in the nephrotic syndrome is a diminished colloid oncotic pressure due to losses of large quantities (≥3.5 g/d) of protein into the urine. With severe hypoalbuminemia (<35 g/L) and the consequent reduced colloid osmotic pressure, the sodium and water that are retained cannot be restrained within the vascular compartment, and total and effective arterial blood volumes decline. This process initiates the edema-forming sequence of events described above, including activation of the RAAS.
The nephrotic syndrome may occur during the course of a variety of kidney diseases, which include glomerulonephritis, diabetic glomerulosclerosis, and hypersensitivity reactions. The edema is diffuse, symmetric, and most prominent in the dependent areas; as a consequence, periorbital edema is most prominent in the morning.

Hepatic Cirrhosis (See also Chap. 365) This condition is characterized in part by hepatic venous outflow blockade, which in turn expands the splanchnic blood volume and increases hepatic lymph formation. Intrahepatic hypertension acts as a stimulus for renal sodium retention and causes a reduction of effective arterial blood volume. These alterations are frequently complicated by hypoalbuminemia secondary to reduced hepatic synthesis of albumin, as well as by peripheral arterial vasodilation. These effects reduce the effective arterial blood volume further, leading to activation of the RAAS and renal sympathetic nerves and to release of AVP, endothelin, and other sodium- and water-retaining mechanisms (Fig. 50-1B). The concentration of circulating aldosterone often is elevated by the failure of the liver to metabolize this hormone. Initially, the excess interstitial fluid is localized preferentially proximal (upstream) to the congested portal venous system and obstructed hepatic lymphatics, i.e., in the peritoneal cavity (causing ascites, Chap. 59). In later stages, particularly when there is severe hypoalbuminemia, peripheral edema may develop. A sizable accumulation of ascitic fluid may increase intraabdominal pressure and impede venous return from the lower extremities, thereby contributing to edema of the lower extremities. The excess production of prostaglandins (PGE2 and PGI2) in cirrhosis attenuates renal sodium retention. When the synthesis of these substances is inhibited by nonsteroidal anti-inflammatory drugs (NSAIDs), renal function may deteriorate, and this may increase sodium retention further.
Drug-Induced Edema A large number of widely used drugs can cause edema (Table 50-2). Mechanisms include renal vasoconstriction (NSAIDs and cyclosporine), arteriolar dilation (vasodilators), augmented renal sodium reabsorption (steroid hormones), and capillary damage.

Edema of Nutritional Origin A diet grossly deficient in protein over a prolonged period may produce hypoproteinemia and edema. The latter may be intensified by the development of beriberi heart disease, which also is of nutritional origin, in which multiple peripheral arteriovenous fistulae result in reduced effective systemic perfusion and effective arterial blood volume, thereby enhancing edema formation (Chap. 96e) (Fig. 50-1B). Edema may actually become intensified when famished subjects are first provided with an adequate diet. The ingestion of more food may increase the quantity of sodium ingested, which is then retained along with water. So-called refeeding edema also may be linked to increased release of insulin, which directly increases tubular sodium reabsorption. In addition to hypoalbuminemia, hypokalemia and caloric deficits may be involved in the edema of starvation.

When there is local venous (or lymphatic) obstruction, the hydrostatic pressure in the capillary bed upstream (proximal) of the obstruction increases so that an abnormal quantity of fluid is transferred from the vascular to the interstitial space. Since the alternative route (i.e., the lymphatic channels) also may be obstructed or maximally filled, an increased volume of interstitial fluid in the limb develops (i.e., there is trapping of fluid in the interstitium of the extremity).
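The transcapillary fluid shifts described above can be summarized by the standard Starling relationship (a point of reference from general physiology; the equation is not written out explicitly in this chapter):

```latex
% Starling relationship for net capillary filtration (standard physiology)
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```

where Jv is net fluid movement out of the capillary, Kf the filtration coefficient, Pc and Pi the capillary and interstitial hydrostatic pressures, πc and πi the corresponding colloid osmotic (oncotic) pressures, and σ the protein reflection coefficient. Venous or lymphatic obstruction raises Pc, and hypoalbuminemia lowers πc; either change increases Jv and favors edema formation.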
The displacement of large quantities of fluid into a limb may occur at the expense of the blood volume in the remainder of the body, thereby reducing effective arterial blood volume and leading to the retention of NaCl and H2O until the deficit in plasma volume has been corrected. Localized edema due to venous or lymphatic obstruction may be caused by thrombophlebitis, chronic lymphangitis, resection of regional lymph nodes, and filariasis, among other causes. Lymphedema is particularly intractable because restriction of lymphatic flow results in increased protein concentration in the interstitial fluid, a circumstance that aggravates fluid retention.

Other Causes of Edema These causes include hypothyroidism (myxedema) and hyperthyroidism (pretibial myxedema secondary to Graves' disease), in which the edema is typically nonpitting and due to deposition of hyaluronic acid and, in Graves' disease, lymphocytic infiltration and inflammation; exogenous hyperadrenocorticism; pregnancy; and administration of estrogens and vasodilators, particularly dihydropyridines such as nifedipine.

The distribution of edema is an important guide to its cause. Edema associated with heart failure tends to be more extensive in the legs and to be accentuated in the evening, a feature also determined largely by posture. When patients with heart failure are confined to bed, edema may be most prominent in the presacral region. Severe heart failure may cause ascites that may be distinguished from the ascites caused by hepatic cirrhosis by the jugular venous pressure, which is usually elevated in heart failure and normal in cirrhosis. Edema resulting from hypoproteinemia, as occurs in the nephrotic syndrome, characteristically is generalized, but it is especially evident in the very soft tissues of the eyelids and face and tends to be most pronounced in the morning owing to the recumbent posture assumed during the night.
Less common causes of facial edema include trichinosis, allergic reactions, and myxedema. Edema limited to one leg or to one or both arms is usually the result of venous and/or lymphatic obstruction. Unilateral paralysis reduces lymphatic and venous drainage on the affected side and may also be responsible for unilateral edema. In patients with obstruction of the superior vena cava, edema is confined to the face, neck, and upper extremities, in which the venous pressure is elevated compared with that in the lower extremities.

APPROACH TO THE PATIENT: Edema
An important first question is whether the edema is localized or generalized. If it is localized, the local phenomena that may be responsible should be considered. If the edema is generalized, one should first determine if there is serious hypoalbuminemia, e.g., serum albumin <25 g/L. If so, the history, physical examination, urinalysis, and other laboratory data will help evaluate the question of cirrhosis, severe malnutrition, or the nephrotic syndrome as the underlying disorder. If hypoalbuminemia is not present, it should be determined if there is evidence of heart failure severe enough to promote generalized edema. Finally, it should be ascertained whether the patient has an adequate urine output or whether there is significant oliguria or anuria. These abnormalities are discussed in Chaps. 61, 334, and 335.

51e Approach to the Patient with a Heart Murmur
Patrick T. O'Gara, Joseph Loscalzo
This is a digital-only chapter. It is available on the DVD that accompanies this book, as well as on AccessMedicine/Harrison's online and in the eBook and "app" editions of HPIM 19e.
The differential diagnosis of a heart murmur begins with a careful assessment of its major attributes and response to bedside maneuvers. The history, clinical context, and associated physical examination findings provide additional clues by which the significance of a heart murmur can be established.
Accurate bedside identification of a heart murmur can inform decisions regarding the indications for noninvasive testing and the need for referral to a cardiovascular specialist. Preliminary discussions can be held with the patient regarding antibiotic or rheumatic fever prophylaxis, the need to restrict various forms of physical activity, and the potential role for family screening.

Approach to the Patient with a Heart Murmur
Patrick T. O'Gara, Joseph Loscalzo

Heart murmurs are caused by audible vibrations that are due to increased turbulence from accelerated blood flow through normal or abnormal orifices, flow through a narrowed or irregular orifice into a dilated vessel or chamber, or backward flow through an incompetent valve, ventricular septal defect, or patent ductus arteriosus. They traditionally are defined in terms of their timing within the cardiac cycle (Fig. 51e-1). Systolic murmurs begin with or after the first heart sound (S1) and terminate at or before the component (A2 or P2) of the second heart sound (S2) that corresponds to their site of origin (left or right, respectively). Diastolic murmurs begin with or after the associated component of S2 and end at or before the subsequent S1.
Continuous murmurs are not confined to either phase of the cardiac cycle but instead begin in early systole and proceed through S2 into all or part of diastole. The accurate timing of heart murmurs is the first step in their identification. The distinction between S1 and S2 and, therefore, systole and diastole is usually a straightforward process but can be difficult in the setting of a tachyarrhythmia, in which case the heart sounds can be distinguished by simultaneous palpation of the carotid upstroke, which should closely follow S1. Duration and Character The duration of a heart murmur depends on the length of time over which a pressure difference exists between two cardiac chambers, the left ventricle and the aorta, the right ventricle and the pulmonary artery, or the great vessels. The magnitude and variability of this pressure difference, coupled with the geometry and compliance of the involved chambers or vessels, dictate the velocity of flow; the degree of turbulence; and the resulting frequency, configuration, and intensity of the murmur. The diastolic murmur of chronic aortic regurgitation (AR) is a blowing, high-frequency event, whereas the murmur of mitral stenosis (MS), indicative of the left atrial–left ventricular diastolic pressure gradient, is a low-frequency event, heard as a rumbling sound with the bell of the stethoscope. The frequency components of a heart murmur may vary at different sites of auscultation. The coarse systolic murmur of aortic stenosis (AS) may sound higher pitched and more acoustically pure at the apex, a phenomenon eponymously referred to as the Gallavardin effect. Some murmurs may have a distinct or unusual quality, such as the “honking” sound appreciated in some patients with mitral regurgitation (MR) due to mitral valve prolapse (MVP). The configuration of a heart murmur may be described as crescendo, decrescendo, crescendo-decrescendo, or plateau. The decrescendo configuration of the murmur of chronic AR (Fig. 
51e-1E) can be understood in terms of the progressive decline in the diastolic pressure gradient between the aorta and the left ventricle. The crescendo-decrescendo configuration of the murmur of AS reflects the changes in the systolic pressure gradient between the left ventricle and the aorta as ejection occurs, whereas the plateau configuration of the murmur of chronic MR (Fig. 51e-1B) is consistent with the large and nearly constant pressure difference between the left ventricle and the left atrium.

FIGURE 51e-1 Diagram depicting principal heart murmurs. A. Presystolic murmur of mitral or tricuspid stenosis. B. Holosystolic (pansystolic) murmur of mitral or tricuspid regurgitation or of ventricular septal defect. C. Aortic ejection murmur beginning with an ejection click and fading before the second heart sound. D. Systolic murmur in pulmonic stenosis spilling through the aortic second sound, pulmonic valve closure being delayed. E. Aortic or pulmonary diastolic murmur. F. Long diastolic murmur of mitral stenosis after the opening snap (OS). G. Short mid-diastolic inflow murmur after a third heart sound. H. Continuous murmur of patent ductus arteriosus. (Adapted from P Wood: Diseases of the Heart and Circulation, London, Eyre & Spottiswood, 1968. Permission granted courtesy of Antony and Julie Wood.)

Intensity The intensity of a heart murmur is graded on a scale of 1–6 (or I–VI). A grade 1 murmur is very soft and is heard only with great effort. A grade 2 murmur is easily heard but not particularly loud. A grade 3 murmur is loud but is not accompanied by a palpable thrill over the site of maximal intensity. A grade 4 murmur is very loud and is accompanied by a thrill. A grade 5 murmur is loud enough to be heard with only the edge of the stethoscope touching the chest, whereas a grade 6 murmur is loud enough to be heard with the stethoscope slightly off the chest.
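The 1–6 grading scale above can be summarized schematically. The sketch below is illustrative only (the names `GRADE_DESCRIPTIONS` and `has_thrill` are ours, not part of the text); per the text, a palpable thrill accompanies murmurs of grade 4 and above:

```python
# Schematic summary of the 1-6 murmur intensity scale described above.
# Illustrative teaching aid only, not a clinical tool.

GRADE_DESCRIPTIONS = {
    1: "very soft; heard only with great effort",
    2: "easily heard but not particularly loud",
    3: "loud, but no palpable thrill over the site of maximal intensity",
    4: "very loud and accompanied by a thrill",
    5: "heard with only the edge of the stethoscope touching the chest",
    6: "heard with the stethoscope slightly off the chest",
}

def has_thrill(grade: int) -> bool:
    """Per the text, a thrill accompanies murmurs of grade 4 or higher."""
    if grade not in GRADE_DESCRIPTIONS:
        raise ValueError("murmur grade must be 1-6")
    return grade >= 4
```

This encodes only the thrill threshold stated in the text; the full clinical interpretation of a murmur depends on its timing, location, and associated findings, as discussed below.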
Murmurs of grade 3 or greater intensity usually signify important structural heart disease and indicate high blood flow velocity at the site of murmur production. Small ventricular septal defects (VSDs), for example, are accompanied by loud, usually grade 4 or greater, systolic murmurs as blood is ejected at high velocity from the left ventricle to the right ventricle. Low-velocity events, such as left-to-right shunting across an atrial septal defect (ASD), are usually silent. The intensity of a heart murmur may be diminished by any process that increases the distance between the intracardiac source and the stethoscope on the chest wall, such as obesity, obstructive lung disease, and a large pericardial effusion. The intensity of a murmur also may be misleadingly soft when cardiac output is reduced significantly or when the pressure gradient between the involved cardiac structures is low.

Location and Radiation Recognition of the location and radiation of the murmur facilitates its accurate identification (Fig. 51e-2). Adventitious sounds, such as a systolic click or diastolic snap, or abnormalities of S1 or S2 may provide additional clues. Careful attention to the characteristics of the murmur and other heart sounds during the respiratory cycle and the performance of simple bedside maneuvers complete the auscultatory examination. These features, along with recommendations for further testing, are discussed below in the context of specific systolic, diastolic, and continuous heart murmurs (Table 51e-1).

FIGURE 51e-2 Maximal intensity and radiation of six isolated systolic murmurs. HOCM, hypertrophic obstructive cardiomyopathy; MR, mitral regurgitation; Pulm, pulmonary stenosis; Aortic, aortic stenosis; VSD, ventricular septal defect. (From JB Barlow: Perspectives on the Mitral Valve. Philadelphia, FA Davis, 1987, p 140.)
SYSTOLIC HEART MURMURS
Early Systolic Murmurs Early systolic murmurs begin with S1 and extend for a variable period, ending well before S2. Their causes are relatively few in number. Acute, severe MR into a normal-sized, relatively noncompliant left atrium results in an early, decrescendo systolic murmur best heard at or just medial to the apical impulse. These characteristics reflect the progressive attenuation of the pressure gradient between the left ventricle and the left atrium during systole owing to the rapid rise in left atrial pressure caused by the sudden volume load into an unprepared, noncompliant chamber and contrast sharply with the auscultatory features of chronic MR. Clinical settings in which acute, severe MR occurs include (1) papillary muscle rupture complicating acute myocardial infarction (MI) (Chap. 295), (2) rupture of chordae tendineae in the setting of myxomatous mitral valve disease (MVP, Chap. 283), (3) infective endocarditis (Chap. 155), and (4) blunt chest wall trauma. Acute, severe MR from papillary muscle rupture usually accompanies an inferior, posterior, or lateral MI and occurs 2–7 days after presentation. It often is signaled by chest pain, hypotension, and pulmonary edema, but a murmur may be absent in up to 50% of cases. The posteromedial papillary muscle is involved 6 to 10 times more frequently than the anterolateral papillary muscle. The murmur is to be distinguished from that associated with post-MI ventricular septal rupture, which is accompanied by a systolic thrill at the left sternal border in nearly all patients and is holosystolic in duration. A new heart murmur after an MI is an indication for transthoracic echocardiography (TTE) (Chap. 270e), which allows bedside delineation of its etiology and pathophysiologic significance.
The distinction between acute MR and ventricular septal rupture also can be achieved with right heart catheterization, sequential determination of oxygen saturations, and analysis of the pressure waveforms (tall v wave in the pulmonary artery wedge pressure in MR). Post-MI mechanical complications of this nature mandate aggressive medical stabilization and prompt referral for surgical repair. Spontaneous chordal rupture can complicate the course of myxomatous mitral valve disease (MVP) and result in new-onset or "acute on chronic" severe MR. MVP may occur as an isolated phenomenon, or the lesion may be part of a more generalized connective tissue disorder as seen, for example, in patients with Marfan syndrome. Acute, severe MR as a consequence of infective endocarditis results from destruction of leaflet tissue, chordal rupture, or both. Blunt chest wall trauma is usually self-evident but may be disarmingly trivial; it can result in papillary muscle contusion and rupture, chordal detachment, or leaflet avulsion. TTE is indicated in all cases of suspected acute, severe MR to define its mechanism and severity, delineate left ventricular size and systolic function, and provide an assessment of suitability for primary valve repair.

TABLE 51e-1
Early systolic
 VSD: muscular; nonrestrictive with pulmonary hypertension
 Tricuspid: TR with normal pulmonary artery pressure
Mid-systolic
 Aortic
  Obstructive: supravalvular (supravalvular aortic stenosis, coarctation of the aorta); valvular (AS and aortic sclerosis); subvalvular (discrete, tunnel, or HOCM)
  Increased flow: hyperkinetic states, AR, complete heart block
  Dilation of ascending aorta: atheroma, aortitis
 Pulmonary
  Increased flow: hyperkinetic states, left-to-right shunt (e.g., ASD)
  Dilation of pulmonary artery
Late systolic
 Mitral: MVP, acute myocardial ischemia
 Tricuspid: TVP
Holosystolic
 Atrioventricular valve regurgitation (MR, TR)
 Left-to-right shunt at ventricular level (VSD)
Early diastolic
 Aortic regurgitation
  Valvular: congenital (bicuspid valve), rheumatic deformity, endocarditis, prolapse, trauma, post-valvulotomy
  Dilation of valve ring: aortic dissection, annuloaortic ectasia, cystic medial degeneration, hypertension, ankylosing spondylitis
  Widening of commissures: syphilis
 Pulmonic regurgitation
  Valvular: post-valvulotomy, endocarditis, rheumatic fever, carcinoid
  Dilation of valve ring: pulmonary hypertension; Marfan syndrome
  Congenital: isolated or associated with tetralogy of Fallot, VSD, pulmonic stenosis
Mid-diastolic
 Mitral: mitral stenosis (rheumatic fever); increased flow across nonstenotic mitral valve (e.g., MR, VSD, PDA, high-output states, and complete heart block)
 Tricuspid: tricuspid stenosis; increased flow across nonstenotic tricuspid valve (e.g., TR, ASD, and anomalous pulmonary venous return)
Continuous
 Coronary AV fistula; mammary souffle of pregnancy; ruptured sinus of Valsalva aneurysm; pulmonary artery branch stenosis; cervical venous hum; small (restrictive) ASD with MS; anomalous left coronary artery; intercostal AV fistula
Abbreviations: AR, aortic regurgitation; AS, aortic stenosis; ASD, atrial septal defect; AV, arteriovenous; HOCM, hypertrophic obstructive cardiomyopathy; MR, mitral regurgitation; MS, mitral stenosis; MVP, mitral valve prolapse; PDA, patent ductus arteriosus; TR, tricuspid regurgitation; TVP, tricuspid valve prolapse; VSD, ventricular septal defect.
Source: E Braunwald, JK Perloff, in D Zipes et al (eds): Braunwald's Heart Disease, 7th ed. Philadelphia, Elsevier, 2005; PJ Norton, RA O'Rourke, in E Braunwald, L Goldman (eds): Primary Cardiology, 2nd ed. Philadelphia, Elsevier, 2003.

A congenital, small muscular VSD (Chap. 282) may be associated with an early systolic murmur. The defect closes progressively during septal contraction, and thus, the murmur is confined to early systole. It is localized to the left sternal border (Fig. 51e-2) and is usually of grade 4 or 5 intensity. Signs of pulmonary hypertension or left ventricular volume overload are absent.
Anatomically large and uncorrected VSDs, which usually involve the membranous portion of the septum, may lead to pulmonary hypertension. The murmur associated with the left-to-right shunt, which earlier may have been holosystolic, becomes limited to the first portion of systole as the elevated pulmonary vascular resistance leads to an abrupt rise in right ventricular pressure and an attenuation of the interventricular pressure gradient during the remainder of the cardiac cycle. In such instances, signs of pulmonary hypertension (right ventricular lift, loud and single or closely split S2) may predominate. The murmur is best heard along the left sternal border but is softer. Suspicion of a VSD is an indication for TTE. Tricuspid regurgitation (TR) with normal pulmonary artery pressures, as may occur with infective endocarditis, may produce an early systolic murmur. The murmur is soft (grade 1 or 2), is best heard at the lower left sternal border, and may increase in intensity with inspiration (Carvallo's sign). Regurgitant "c-v" waves may be visible in the jugular venous pulse. TR in this setting is not associated with signs of right heart failure.

Mid-Systolic Murmurs Mid-systolic murmurs begin at a short interval after S1, end before S2 (Fig. 51e-1C), and are usually crescendo-decrescendo in configuration. Aortic stenosis is the most common cause of a mid-systolic murmur in an adult. The murmur of AS is usually loudest to the right of the sternum in the second intercostal space (aortic area, Fig. 51e-2) and radiates into the carotids. Transmission of the mid-systolic murmur to the apex, where it becomes higher pitched, is common (Gallavardin effect; see above). Differentiation of this apical systolic murmur from MR can be difficult. The murmur of AS will increase in intensity in the beat after a premature beat, whereas the murmur of MR will have constant intensity from beat to beat.
The intensity of the AS murmur also varies directly with the cardiac output. With a normal cardiac output, a systolic thrill and a grade 4 or higher murmur suggest severe AS. The murmur is softer in the setting of heart failure and low cardiac output. Other auscultatory findings of severe AS include a soft or absent A2, paradoxical splitting of S2, an apical S4, and a late-peaking systolic murmur. In children, adolescents, and young adults with congenital valvular AS, an early ejection sound (click) is usually audible, more often along the left sternal border than at the base. Its presence signifies a flexible, noncalcified bicuspid valve (or one of its variants) and localizes the left ventricular outflow obstruction to the valvular (rather than sub- or supravalvular) level. Assessment of the volume and rate of rise of the carotid pulse can provide additional information. A small and delayed upstroke (parvus et tardus) is consistent with severe AS. The carotid pulse examination is less discriminatory, however, in older patients with stiffened arteries. The electrocardiogram (ECG) shows signs of left ventricular hypertrophy (LVH) as the severity of the stenosis increases. TTE is indicated to assess the anatomic features of the aortic valve, the severity of the stenosis, left ventricular size, wall thickness and function, and the size and contour of the aortic root and proximal ascending aorta. The obstructive form of hypertrophic cardiomyopathy (HOCM) is associated with a mid-systolic murmur that is usually loudest along the left sternal border or between the left lower sternal border and the apex (Chap. 287, Fig. 51e-2). The murmur is produced by both dynamic left ventricular outflow tract obstruction and MR, and thus, its configuration is a hybrid between ejection and regurgitant phenomena. The intensity of the murmur may vary from beat to beat and after provocative maneuvers but usually does not exceed grade 3.
The murmur classically will increase in intensity with maneuvers that result in increasing degrees of outflow tract obstruction, such as a reduction in preload or afterload (Valsalva, standing, vasodilators), or with an augmentation of contractility (inotropic stimulation). Maneuvers that increase preload (squatting, passive leg raising, volume administration) or afterload (squatting, vasopressors) or that reduce contractility (β-adrenoreceptor blockers) decrease the intensity of the murmur. In rare patients, there may be reversed splitting of S2. A sustained left ventricular apical impulse and an S4 may be appreciated. In contrast to AS, the carotid upstroke is rapid and of normal volume. Rarely, it is bisferiens or bifid in contour (see Fig. 267-2D) due to mid-systolic closure of the aortic valve. LVH is present on the ECG, and the diagnosis is confirmed by TTE. Although the systolic murmur associated with MVP behaves similarly to that due to HOCM in response to the Valsalva maneuver and to standing/squatting (Fig. 51e-3), these two lesions can be distinguished on the basis of their associated findings, such as the presence of LVH in HOCM or a nonejection click in MVP. The mid-systolic, crescendo-decrescendo murmur of congenital pulmonic stenosis (PS, Chap. 282) is best appreciated in the second and third left intercostal spaces (pulmonic area) (Figs. 51e-2 and 51e-4). The duration of the murmur lengthens and the intensity of P2 diminishes with increasing degrees of valvular stenosis (Fig. 51e-1D). An early ejection sound, the intensity of which decreases with inspiration, is heard in younger patients. A parasternal lift and ECG evidence of right ventricular hypertrophy indicate severe pressure overload. If obtained, the chest x-ray may show poststenotic dilation of the main pulmonary artery. TTE is recommended for complete characterization. Significant left-to-right intracardiac shunting due to an ASD (Chap. 
282) leads to an increase in pulmonary blood flow and a grade 2–3 mid-systolic murmur at the middle to upper left sternal border attributed to increased flow rates across the pulmonic valve, with fixed splitting of S2.

CHAPTER 51e Approach to the Patient with a Heart Murmur
PART 2 Cardinal Manifestations and Presentation of Diseases

Figure 51e-3 A mid-systolic nonejection sound (C) occurs in mitral valve prolapse and is followed by a late systolic murmur that crescendos to the second heart sound (S2). Standing decreases venous return; the heart becomes smaller; C moves closer to the first heart sound (S1); and the mitral regurgitant murmur has an earlier onset. With prompt squatting, venous return and afterload increase; the heart becomes larger; C moves toward S2; and the duration of the murmur shortens. (From JA Shaver, JJ Leonard, DF Leon: Examination of the Heart, Part IV, Auscultation of the Heart. Dallas, American Heart Association, 1990, p 13. Copyright, American Heart Association.)

Figure 51e-4 (Left panel, pulmonic stenosis; right panel, tetralogy of Fallot. P.Ej = pulmonary ejection sound [valvular]; A.Ej = aortic ejection sound [root].) Left. In valvular pulmonic stenosis with intact ventricular septum, right ventricular systolic ejection becomes progressively longer with increasing obstruction to flow. As a result, the murmur becomes longer and louder, enveloping the aortic component of the second heart sound (A2). The pulmonic component (P2) occurs later, and splitting becomes wider but more difficult to hear because A2 is lost in the murmur and P2 becomes progressively fainter and lower pitched. As the pulmonic gradient increases, the isometric contraction phase shortens until the pulmonic valve ejection sound fuses with the first heart sound (S1). In severe pulmonic stenosis with concentric hypertrophy and decreasing right ventricular compliance, a fourth heart sound appears. Right. In tetralogy of Fallot with increasing obstruction at the pulmonic infundibular area, an increasing amount of right ventricular blood is shunted across the silent ventricular septal defect and flow across the obstructed outflow tract decreases. Therefore, with increasing obstruction the murmur becomes shorter, earlier, and fainter. P2 is absent in severe tetralogy of Fallot. A large aortic root receives almost all cardiac output from both ventricular chambers; the aorta dilates and is accompanied by a root ejection sound that does not vary with respiration. (From JA Shaver, JJ Leonard, DF Leon: Examination of the Heart, Part IV, Auscultation of the Heart. Dallas, American Heart Association, 1990, p 45. Copyright, American Heart Association.)

Ostium secundum ASDs are the most common cause of these shunts in adults. Features suggestive of a primum ASD include the coexistence of MR due to a cleft anterior mitral valve leaflet and left axis deviation of the QRS complex on the ECG. With sinus venosus ASDs, the left-to-right shunt is usually not large enough to result in a systolic murmur, although the ECG may show abnormalities of sinus node function. A grade 2 or 3 mid-systolic murmur may also be heard best at the upper left sternal border in patients with idiopathic dilation of the pulmonary artery; a pulmonary ejection sound is also present in these patients. TTE is indicated to evaluate a grade 2 or 3 mid-systolic murmur when there are other signs of cardiac disease. An isolated grade 1 or 2 mid-systolic murmur, heard in the absence of symptoms or signs of heart disease, is most often a benign finding for which no further evaluation, including TTE, is necessary. The most common example of a murmur of this type in an older adult patient is the crescendo-decrescendo murmur of aortic valve sclerosis, heard at the second right interspace (Fig. 51e-2). 
Aortic sclerosis is defined as focal thickening and calcification of the aortic valve to a degree that does not interfere with leaflet opening. The carotid upstrokes are normal, and electrocardiographic LVH is not present. A grade 1 or 2 mid-systolic murmur often can be heard at the left sternal border with pregnancy, hyperthyroidism, or anemia, physiologic states that are associated with accelerated blood flow. Still’s murmur refers to a benign grade 2, vibratory or musical mid-systolic murmur at the mid or lower left sternal border in normal children and adolescents, best heard in the supine position (Fig. 51e-2). Late Systolic Murmurs A late systolic murmur that is best heard at the left ventricular apex is usually due to MVP (Chap. 283). Often, this murmur is introduced by one or more nonejection clicks. The radiation of the murmur can help identify the specific mitral leaflet involved in the process of prolapse or flail. The term flail refers to the movement made by an unsupported portion of the leaflet after loss of its chordal attachment(s). With posterior leaflet prolapse or flail, the resultant jet of MR is directed anteriorly and medially, as a result of which the murmur radiates to the base of the heart and masquerades as AS. Anterior leaflet prolapse or flail results in a posteriorly directed MR jet that radiates to the axilla or left infrascapular region. Leaflet flail is associated with a murmur of grade 3 or 4 intensity that can be heard throughout the precordium in thin-chested patients. The presence of an S3 or a short, rumbling mid-diastolic murmur due to enhanced flow signifies severe MR. Bedside maneuvers that decrease left ventricular preload, such as standing, will cause the click and murmur of MVP to move closer to the first heart sound, as leaflet prolapse occurs earlier in systole. Standing also causes the murmur to become louder and longer. 
With squatting, left ventricular preload and afterload are increased abruptly, leading to an increase in left ventricular volume, and the click and murmur move away from the first heart sound as leaflet prolapse is delayed; the murmur becomes softer and shorter in duration (Fig. 51e–3). As noted above, these responses to standing and squatting are directionally similar to those observed in patients with HOCM. A late, apical systolic murmur indicative of MR may be heard transiently in the setting of acute myocardial ischemia; it is due to apical tethering and malcoaptation of the leaflets in response to structural and functional changes of the ventricle and mitral annulus. The intensity of the murmur varies as a function of left ventricular afterload and will increase in the setting of hypertension. TTE is recommended for assessment of late systolic murmurs. Holosystolic Murmurs (Figs. 51e-1B and 51e-5) Holosystolic murmurs begin with S1 and continue through systole to S2. They are usually indicative of chronic mitral or tricuspid valve regurgitation or a VSD and warrant TTE for further characterization. The holosystolic murmur of chronic MR is best heard at the left ventricular apex and radiates to the axilla (Fig. 51e-2); it is usually high-pitched and plateau in configuration because of the wide difference between left ventricular and left atrial pressure throughout systole. In contrast to acute MR, left atrial compliance is normal or even increased in chronic MR. As a result, there is only a small increase in left atrial pressure for any increase in regurgitant volume. Several conditions are associated with chronic MR and an apical holosystolic murmur, including rheumatic scarring of the leaflets, mitral annular calcification, postinfarction left ventricular remodeling, and severe left ventricular chamber enlargement. 
The circumference of the mitral annulus increases as the left ventricle enlarges and leads to failure of leaflet coaptation with central MR in patients with dilated cardiomyopathy (Chap. 287). The severity of the MR is worsened by any contribution from apical displacement of the papillary muscles and leaflet tethering (remodeling). Because the mitral annulus is contiguous with the left atrial endocardium, gradual enlargement of the left atrium from chronic MR will result in further stretching of the annulus and more MR; thus, “MR begets MR.” Chronic severe MR results in enlargement and leftward displacement of the left ventricular apex beat and, in some patients, a diastolic filling complex, as described previously. The holosystolic murmur of chronic TR is generally softer than that of MR, is loudest at the left lower sternal border, and usually increases in intensity with inspiration (Carvallo’s sign). Associated signs include c-v waves in the jugular venous pulse, an enlarged and pulsatile liver, ascites, and peripheral edema. 
The abnormal jugular venous waveforms are the predominant finding and are seen very often in the absence of an audible murmur despite Doppler echocardiographic verification of TR.

Figure 51e-5 Differential diagnosis of a holosystolic murmur. The flowchart distinguishes tricuspid regurgitation (primary, or secondary to pulmonary hypertension; maximum intensity over the left sternal border, radiation to the epigastrium and right sternal border, prominent c-v wave with sharp y descent in the jugular venous pulse), primary mitral regurgitation (e.g., rheumatic, ruptured chordae; hyperdynamic left ventricular impulse, wide splitting of S2), secondary mitral regurgitation (dilated cardiomyopathy, papillary muscle dysfunction, or late stage of primary mitral regurgitation; sustained left ventricular impulse, single S2 or narrow splitting of S2), and ventricular septal defect (maximum intensity over the lower left third and fourth interspaces, widespread radiation, palpable thrill, decreased intensity with amyl nitrite; often difficult to differentiate from a mitral regurgitant murmur).

Causes of primary TR include myxomatous disease (prolapse), endocarditis, rheumatic disease, radiation, carcinoid, Ebstein's anomaly, and chordal detachment as a complication of right ventricular endomyocardial biopsy. TR is more commonly a passive process that results secondarily from annular enlargement due to right ventricular dilatation in the face of volume or pressure overload. The holosystolic murmur of a VSD is loudest at the mid- to lower left sternal border (Fig. 51e-2) and radiates widely. A thrill is present at the site of maximal intensity in the majority of patients. There is no change in the intensity of the murmur with inspiration. 
The intensity of the murmur varies as a function of the anatomic size of the defect. Small, restrictive VSDs, as exemplified by the maladie de Roger, create a very loud murmur due to the significant and sustained systolic pressure gradient between the left and right ventricles. With large defects, the ventricular pressures tend to equalize, shunt flow is balanced, and a murmur is not appreciated. The distinction between post-MI ventricular septal rupture and MR has been reviewed previously. DIASTOLIC HEART MURMURS Early Diastolic Murmurs (Fig. 51e-1E) Chronic AR results in a high-pitched, blowing, decrescendo, early to mid-diastolic murmur that begins after the aortic component of S2 (A2) and is best heard at the second right interspace (Fig. 51e-6). The murmur may be soft and difficult to hear unless auscultation is performed with the patient leaning forward at end expiration. This maneuver brings the aortic root closer to the anterior chest wall. Radiation of the murmur may provide a clue to the cause of the AR. With primary valve disease, such as that due to congenital bicuspid disease, prolapse, or endocarditis, the diastolic murmur tends to radiate along the left sternal border, where it is often louder than appreciated in the second right interspace. When AR is caused by aortic root disease, the diastolic murmur may radiate along the right sternal border. Diseases of the aortic root cause dilation or distortion of the aortic annulus and failure of leaflet coaptation. Causes include Marfan syndrome with aneurysm formation, annuloaortic ectasia, ankylosing spondylitis, and aortic dissection. Chronic, severe AR also may produce a lower-pitched mid- to late, grade 1 or 2 diastolic murmur at the apex (Austin Flint murmur), which is thought to reflect turbulence at the mitral inflow area from the admixture of regurgitant (aortic) and forward (mitral) blood flow (Fig. 51e-1G). 
This lower-pitched, apical diastolic murmur can be distinguished from that due to MS by the absence of an opening snap and the response of the murmur to a vasodilator challenge. Lowering afterload with an agent such as amyl nitrite will decrease the duration and magnitude of the aortic–left ventricular diastolic pressure gradient, and thus, the Austin Flint murmur of severe AR will become shorter and softer. The intensity of the diastolic murmur of mitral stenosis (Fig. 51e-6) may either remain constant or increase with afterload reduction because of the reflex increase in cardiac output and mitral valve flow. Although AS and AR may coexist, a grade 2 or 3 crescendo-decrescendo mid-systolic murmur frequently is heard at the base of the heart in patients with isolated, severe AR and is due to an increased volume and rate of systolic flow. Accurate bedside identification of coexistent AS can be difficult unless the carotid pulse examination is abnormal or the mid-systolic murmur is of grade 4 or greater intensity. In the absence of heart failure, chronic severe AR is accompanied by several peripheral signs of significant diastolic run-off, including a wide pulse pressure, a "water-hammer" carotid upstroke (Corrigan's pulse), and Quincke's pulsations of the nail beds. The diastolic murmur of acute, severe AR is notably shorter in duration and lower pitched than the murmur of chronic AR. It can be very difficult to appreciate in the presence of a rapid heart rate. These attributes reflect the abrupt rate of rise of diastolic pressure within the unprepared and noncompliant left ventricle and the correspondingly rapid decline in the aortic–left ventricular diastolic pressure gradient. Left ventricular diastolic pressure may increase sufficiently to result in premature closure of the mitral valve and a soft first heart sound. Peripheral signs of significant diastolic run-off are not present. 
Pulmonic regurgitation (PR) results in a decrescendo, early to mid-diastolic murmur (Graham Steell murmur) that begins after the pulmonic component of S2 (P2), is best heard at the second left interspace, and radiates along the left sternal border. The intensity of the murmur may increase with inspiration. PR is most commonly due to dilation of the valve annulus from chronic elevation of the pulmonary artery pressure. Signs of pulmonary hypertension, including a right ventricular lift and a loud, single or narrowly split S2, are present.

Figure 51e-6 Diastolic filling murmur (rumble) in mitral stenosis. In mild mitral stenosis, the diastolic gradient across the valve is limited to the phases of rapid ventricular filling in early diastole and presystole. The rumble may occur during either or both periods. As the stenotic process becomes severe, a large pressure gradient exists across the valve during the entire diastolic filling period, and the rumble persists throughout diastole. As the left atrial pressure becomes greater, the interval between A2 (or P2) and the opening snap (O.S.) shortens. In severe mitral stenosis, secondary pulmonary hypertension develops, resulting in a loud P2, and the splitting interval usually narrows. ECG, electrocardiogram. (From JA Shaver, JJ Leonard, DF Leon: Examination of the Heart, Part IV, Auscultation of the Heart. Dallas, American Heart Association, 1990, p 55. Copyright, American Heart Association.)

These features also help distinguish PR from AR as the cause of a decrescendo diastolic murmur heard along the left sternal border. PR in the absence of pulmonary hypertension can occur with endocarditis or a congenitally deformed valve. It is usually present after repair of tetralogy of Fallot in childhood. 
When pulmonary hypertension is not present, the diastolic murmur is softer and lower pitched than the classic Graham Steell murmur, and the severity of the PR can be difficult to appreciate. TTE is indicated for the further evaluation of a patient with an early to mid-diastolic murmur. Longitudinal assessment of the severity of the valve lesion and of ventricular size and systolic function helps guide a potential decision for surgical management. TTE also can provide anatomic information regarding the root and proximal ascending aorta, although computed tomographic or magnetic resonance angiography may be indicated for more precise characterization (Chap. 270e). Mid-Diastolic Murmurs (Figs. 51e-1G and 51e-1H) Mid-diastolic murmurs result from obstruction and/or augmented flow at the level of the mitral or tricuspid valve. Rheumatic fever is the most common cause of MS (Fig. 51e-6). In younger patients with pliable valves, S1 is loud and the murmur begins after an opening snap, which is a high-pitched sound that occurs shortly after S2. The interval between the pulmonic component of the second heart sound (P2) and the opening snap is inversely related to the magnitude of the left atrial–left ventricular pressure gradient. The murmur of MS is low-pitched and thus is best heard with the bell of the stethoscope. It is loudest at the left ventricular apex and often is appreciated only when the patient is turned in the left lateral decubitus position. It is usually of grade 1 or 2 intensity but may be absent when the cardiac output is severely reduced despite significant obstruction. The intensity of the murmur increases during maneuvers that increase cardiac output and mitral valve flow, such as exercise. The duration of the murmur reflects the length of time over which left atrial pressure exceeds left ventricular diastolic pressure. An increase in the intensity of the murmur just before S1, a phenomenon known as presystolic accentuation (Figs. 
51e-1A and 51e-6), occurs in patients in sinus rhythm and is due to a late increase in transmitral flow with atrial contraction. Presystolic accentuation does not occur in patients with atrial fibrillation. The mid-diastolic murmur associated with tricuspid stenosis is best heard at the lower left sternal border and increases in intensity with inspiration. A prolonged y descent may be visible in the jugular venous waveform. This murmur is very difficult to hear and often is obscured by left-sided acoustical events. There are several other causes of mid-diastolic murmurs. Large left atrial myxomas may prolapse across the mitral valve and cause variable degrees of obstruction to left ventricular inflow (Chap. 289e). The murmur associated with an atrial myxoma may change in duration and intensity with changes in body position. An opening snap is not present, and there is no presystolic accentuation. Augmented mitral diastolic flow can occur with isolated severe MR or with a large left-to-right shunt at the ventricular or great vessel level and produce a soft, rapid filling sound (S3) followed by a short, low-pitched mid-diastolic apical murmur. The Austin Flint murmur of severe, chronic AR has already been described. A short, mid-diastolic murmur is rarely heard during an episode of acute rheumatic fever (Carey-Coombs murmur) and probably is due to flow through an edematous mitral valve. An opening snap is not present in the acute phase, and the murmur dissipates with resolution of the acute attack. Complete heart block with dyssynchronous atrial and ventricular activation may be associated with intermittent mid- to late diastolic murmurs if atrial contraction occurs when the mitral valve is partially closed. Mid-diastolic murmurs indicative of increased tricuspid valve flow can occur with severe, isolated TR and with large ASDs and significant left-to-right shunting. Other signs of an ASD are present (Chap. 
282), including fixed splitting of S2 and a mid-systolic murmur at the mid- to upper left sternal border. TTE is indicated for evaluation of a patient with a mid- to late diastolic murmur. Findings specific to the diseases discussed above will help guide management. Continuous Murmurs (Figs. 51e-1H and 51e-7) Continuous murmurs begin in systole, peak near the second heart sound, and continue into all or part of diastole. Their presence throughout the cardiac cycle implies a pressure gradient between two chambers or vessels during both systole and diastole. The continuous murmur associated with a patent ductus arteriosus is best heard at the upper left sternal border. Large, uncorrected shunts may lead to pulmonary hypertension, attenuation or obliteration of the diastolic component of the murmur, reversal of shunt flow, and differential cyanosis of the lower extremities. A ruptured sinus of Valsalva aneurysm creates a continuous murmur of abrupt onset at the upper right sternal border. Rupture typically occurs into a right heart chamber, and the murmur is indicative of a continuous pressure difference between the aorta and either the right ventricle or the right atrium. A continuous murmur also may be audible along the left sternal border with a coronary arteriovenous fistula and at the site of an arteriovenous fistula used for hemodialysis access. Enhanced flow through enlarged intercostal collateral arteries in patients with aortic coarctation may produce a continuous murmur along the course of one or more ribs. A cervical bruit with both systolic and diastolic components (a to-fro murmur, Fig. 51e-7) usually indicates a high-grade carotid artery stenosis. Not all continuous murmurs are pathologic. 
A continuous venous hum can be heard in healthy children and young adults, especially during pregnancy; it is best appreciated in the right supraclavicular fossa and can be obliterated by pressure over the right internal jugular vein or by having the patient turn his or her head toward the examiner. The continuous mammary souffle of pregnancy is created by enhanced arterial flow through engorged breasts and usually appears during the late third trimester or early puerperium. The murmur is louder in systole. Firm pressure with the diaphragm of the stethoscope can eliminate the diastolic portion of the murmur.

Figure 51e-7 Comparison of the continuous murmur and the to-fro murmur. During abnormal communication between high-pressure and low-pressure systems, a large pressure gradient exists throughout the cardiac cycle, producing a continuous murmur. A classic example is patent ductus arteriosus. At times, this type of murmur can be confused with a to-fro murmur, which is a combination of a systolic ejection murmur and a murmur of semilunar valve incompetence. A classic example of a to-fro murmur is aortic stenosis and regurgitation. A continuous murmur crescendos to near the second heart sound (S2), whereas a to-fro murmur has two components. The mid-systolic ejection component decrescendos and disappears as it approaches S2. (From JA Shaver, JJ Leonard, DF Leon: Examination of the Heart, Part IV, Auscultation of the Heart. Dallas, American Heart Association, 1990, p 55. Copyright, American Heart Association.)

DYNAMIC AUSCULTATION (Table 51e-2; see Table 267-1) Careful attention to the behavior of heart murmurs during simple maneuvers that alter cardiac hemodynamics can provide important clues to their cause and significance. Respiration Auscultation should be performed during quiet respiration or with a modest increase in inspiratory effort, as more forceful movement of the chest tends to obscure the heart sounds. 
Left-sided murmurs may be best heard at end expiration, when lung volumes are minimized and the heart and great vessels are brought closer to the chest wall. This phenomenon is characteristic of the murmur of AR. Murmurs of right-sided origin, such as tricuspid or pulmonic regurgitation, increase in intensity during inspiration. The intensity of left-sided murmurs either remains constant or decreases with inspiration. Bedside assessment also should evaluate the behavior of S2 with respiration and the dynamic relationship between the aortic and pulmonic components (Fig. 51e-8). Reversed splitting can be a feature of severe AS, HOCM, left bundle branch block, right ventricular pacing, or acute myocardial ischemia. Fixed splitting of S2 in the presence of a grade 2 or 3 mid-systolic murmur at the mid- or upper left sternal border indicates an ASD. Physiologic but wide splitting during the respiratory cycle implies either premature aortic valve closure, as can occur with severe MR, or delayed pulmonic valve closure due to PS or right bundle branch block. Alterations of Systemic Vascular Resistance Murmurs can change characteristics after maneuvers that alter systemic vascular resistance and left ventricular afterload. The systolic murmurs of MR and VSD become louder during sustained handgrip, simultaneous inflation of blood pressure cuffs on both upper extremities to pressures 20–40 mmHg above systolic pressure for 20 s, or infusion of a vasopressor agent. The murmurs associated with AS or HOCM will become softer or remain unchanged with these maneuvers. The diastolic murmur of AR becomes louder in response to interventions that raise systemic vascular resistance. Opposite changes in systolic and diastolic murmurs may occur with the use of pharmacologic agents that lower systemic vascular resistance.

Table 51e-2 Dynamic Auscultation: Bedside Maneuvers That Can Be Used to Change the Intensity of Cardiac Murmurs (see text)
1. Respiration
2. Isometric exercise (sustained handgrip)
3. Transient arterial occlusion (bilateral blood pressure cuff inflation)
4. Pharmacologic manipulation of preload and/or afterload
5. Valsalva maneuver
6. Rapid standing/squatting
7. Passive leg raising
8. Post-premature ventricular contraction

Figure 51e-8 Top. Normal physiologic splitting. During expiration, the aortic (A2) and pulmonic (P2) components of the second heart sound are separated by <30 ms and are appreciated as a single sound. During inspiration, the splitting interval widens, and A2 and P2 are clearly separated into two distinct sounds. Bottom. Audible expiratory splitting. Wide physiologic splitting is caused by a delay in P2. Reversed splitting is caused by a delay in A2, resulting in paradoxical movement; i.e., with inspiration P2 moves toward A2, and the splitting interval narrows. Narrow physiologic splitting occurs in pulmonary hypertension, and both A2 and P2 are heard during expiration at a narrow splitting interval because of the increased intensity and high-frequency composition of P2. (From JA Shaver, JJ Leonard, DF Leon: Examination of the Heart, Part IV, Auscultation of the Heart. Dallas, American Heart Association, 1990, p 17. Copyright, American Heart Association.)

Inhaled amyl nitrite is now rarely used for this purpose but can help distinguish the murmur of AS or HOCM from that of either MR or VSD, if necessary. The former two murmurs increase in intensity, whereas the latter two become softer after exposure to amyl nitrite. As noted previously, the Austin Flint murmur of severe AR becomes softer, but the mid-diastolic rumble of MS becomes louder, in response to the abrupt lowering of systemic vascular resistance with amyl nitrite. Changes in Venous Return The Valsalva maneuver results in an increase in intrathoracic pressure, followed by a decrease in venous return, ventricular filling, and cardiac output. The majority of murmurs decrease in intensity during the strain phase of the maneuver. Two notable exceptions are the murmurs associated with MVP and obstructive HOCM, both of which become louder during the Valsalva maneuver. 
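These Valsalva responses amount to a small rule set that can be sketched in code as a study aid; this is a minimal illustration of the behavior described in the text, and the function name, lesion labels, and response strings are assumptions for this example only.

```python
# Sketch of the Valsalva strain-phase responses described in the text:
# reduced venous return softens most murmurs, while MVP and obstructive
# HOCM are the notable exceptions. Labels and strings are illustrative.

VALSALVA_STRAIN_RESPONSE = {
    # Dynamic outflow obstruction worsens as ventricular filling falls.
    "HOCM": "louder",
    # Prolapse begins earlier in systole at smaller ventricular volumes.
    "MVP": "louder and longer",
}

def valsalva_strain_response(lesion: str) -> str:
    """Expected change in murmur intensity during the Valsalva strain phase."""
    # Default: the majority of murmurs (e.g., AS, MR, VSD) decrease in intensity.
    return VALSALVA_STRAIN_RESPONSE.get(lesion, "softer")

print(valsalva_strain_response("HOCM"))  # louder
print(valsalva_strain_response("AS"))    # softer
```

The same lookup pattern could be extended to the standing/squatting responses, which the text notes are directionally parallel for MVP and HOCM.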
The murmur of MVP may also become longer as leaflet prolapse occurs earlier in systole at smaller ventricular volumes. These murmurs behave in a similar and parallel fashion with standing. Both the click and the murmur of MVP move closer in timing to S1 on rapid standing from a squatting position (Fig. 51e-3). The increase in the intensity of the murmur of HOCM is predicated on the augmentation of the dynamic left ventricular outflow tract gradient that occurs with reduced ventricular filling. Squatting results in abrupt increases in both venous return (preload) and left ventricular afterload that increase ventricular volume, changes that predictably cause a decrease in the intensity and duration of the murmurs associated with MVP and HOCM; the click and murmur of MVP move away from S1 with squatting. Passive leg raising can be used to increase venous return in patients who are unable to squat and stand. This maneuver may lead to a decrease in the intensity of the murmur associated with HOCM but has less effect in patients with MVP.

CHAPTER 51e Approach to the Patient with a Heart Murmur

Post-Premature Ventricular Contraction A change in the intensity of a systolic murmur in the first beat after a premature beat, or in the beat after a long cycle length in patients with atrial fibrillation, can help distinguish AS from MR, particularly in an older patient in whom the murmur of AS is well transmitted to the apex. Systolic murmurs due to left ventricular outflow obstruction, including that due to AS, increase in intensity in the beat after a premature beat because of the combined effects of enhanced left ventricular filling and post-extrasystolic potentiation of contractile function. Forward flow accelerates, causing an increase in the gradient and a louder murmur. The intensity of the murmur of MR does not change in the post-premature beat, as there is relatively little further increase in mitral valve flow or change in the left ventricular–left atrial gradient.

Additional clues to the etiology and importance of a heart murmur can be gleaned from the history and other physical examination findings. Symptoms suggestive of cardiovascular, neurologic, or pulmonary disease help focus the differential diagnosis, as do findings relevant to the jugular venous pressure and waveforms, the arterial pulses, other heart sounds, the lungs, the abdomen, the skin, and the extremities. In many instances, laboratory studies, an ECG, and/or a chest x-ray may have been obtained earlier and may contain valuable information. A patient with suspected infective endocarditis, for example, may have a murmur in the setting of fever, chills, anorexia, fatigue, dyspnea, splenomegaly, petechiae, and positive blood cultures. A new systolic murmur in a patient with a marked fall in blood pressure after a recent MI suggests myocardial rupture. By contrast, an isolated grade 1 or 2 mid-systolic murmur at the left sternal border in a healthy, active, and asymptomatic young adult is most likely a benign finding for which no further evaluation is indicated. The context in which the murmur is appreciated often dictates the need for further testing.

Echocardiography with color flow and spectral Doppler (Fig. 51e-9; Chaps. 267 and 270e) is a valuable tool for the assessment of cardiac murmurs. Information regarding valve structure and function, chamber size, wall thickness, ventricular function, estimated pulmonary artery pressures, intracardiac shunt flow, pulmonary and hepatic vein flow, and aortic flow can be ascertained readily. It is important to note that Doppler signals of trace or mild valvular regurgitation of no clinical consequence can be detected with structurally normal tricuspid, pulmonic, and mitral valves. Such signals are not likely to generate enough turbulence to create an audible murmur.

PART 2 Cardinal Manifestations and Presentation of Diseases

Echocardiography is indicated for the evaluation of patients with early, late, or holosystolic murmurs and patients with grade 3 or louder mid-systolic murmurs. Patients with grade 1 or 2 mid-systolic murmurs but other symptoms or signs of cardiovascular disease, including those from ECG or chest x-ray, should also undergo echocardiography. Echocardiography is also indicated for the evaluation of any patient with a diastolic murmur and for patients with continuous murmurs not due to a venous hum or mammary souffle. Echocardiography also should be considered when there is a clinical need to verify normal cardiac structure and function in a patient whose symptoms and signs are probably noncardiac in origin. The performance of serial echocardiography to follow the course of asymptomatic individuals with valvular heart disease is a central feature of their longitudinal assessment and provides valuable information that may have an impact on decisions regarding the timing of surgery. Routine echocardiography is not recommended for asymptomatic patients with a grade 1 or 2 mid-systolic murmur without other signs of heart disease. For this category of patients, referral to a cardiovascular specialist should be considered if there is doubt about the significance of the murmur after the initial examination.
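These indications amount to a compact decision rule. The sketch below is purely illustrative (the function name, argument names, and category strings are invented for this example and are not part of the text), but it follows the selective strategy for ordering transthoracic echocardiography described above:

```python
def tte_indicated(murmur, grade=0, symptoms_or_signs=False):
    """Illustrative (hypothetical) triage of a heart murmur for TTE.

    murmur: one of 'diastolic', 'continuous', 'venous hum', 'mammary souffle',
            'early systolic', 'late systolic', 'holosystolic', 'mid-systolic'
    grade: intensity grade 1-6 (relevant only for mid-systolic murmurs)
    symptoms_or_signs: other symptoms or signs of cardiovascular disease,
                       including an abnormal ECG or chest x-ray
    """
    if murmur in ("venous hum", "mammary souffle"):
        return False  # benign continuous murmurs: no further workup
    if murmur in ("diastolic", "continuous"):
        return True   # any diastolic or other continuous murmur warrants TTE
    if murmur in ("early systolic", "late systolic", "holosystolic"):
        return True   # all non-mid-systolic systolic murmurs warrant TTE
    if murmur == "mid-systolic":
        # grade 3 or louder, or grade 1-2 with other evidence of disease
        return grade >= 3 or symptoms_or_signs
    raise ValueError(f"unrecognized murmur description: {murmur}")
```

For instance, `tte_indicated("mid-systolic", grade=2)` returns `False`, matching the recommendation against routine echocardiography for asymptomatic patients with a soft mid-systolic murmur and no other signs of heart disease.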
The selective use of echocardiography outlined above has not been subjected to rigorous analysis of its cost-effectiveness. For some clinicians, handheld or miniaturized cardiac ultrasound devices have replaced the stethoscope. Although several reports attest to the improved sensitivity of such devices for the detection of valvular heart disease, accuracy is highly operator-dependent, and incremental cost considerations and outcomes have not been addressed adequately. The use of electronic or digital stethoscopes with spectral display capabilities has also been proposed as a method to improve the characterization of heart murmurs and the mentored teaching of cardiac auscultation.

In relatively few patients, clinical assessment and TTE do not adequately characterize the origin and significance of a heart murmur (Chap. 270e, Fig. 51e-9). Transesophageal echocardiography (TEE) can be considered for further evaluation, especially when the TTE windows are limited by body size, chest configuration, or intrathoracic pathology. TEE offers enhanced sensitivity for the detection of a wide range of structural cardiac disorders. Electrocardiographically gated cardiac magnetic resonance (CMR) imaging, although limited in its ability to display valvular morphology, can provide quantitative information regarding valvular function, stenosis severity, regurgitant fraction, regurgitant volume, shunt flow, chamber and great vessel size, ventricular function, and myocardial perfusion. CMR has largely supplanted the need for cardiac catheterization and invasive hemodynamic assessment when there is a discrepancy between the clinical and echocardiographic findings. Invasive coronary angiography is performed routinely in most adult patients before valve surgery, especially when there is a suspicion of coronary artery disease predicated on symptoms, risk factors, and/or age.
The use of computed tomography coronary angiography (CCTA) to exclude coronary artery disease in patients with a low pretest probability of disease before valve surgery is gaining wider acceptance.

The accurate identification of a heart murmur begins with a systematic approach to cardiac auscultation. Characterization of its major attributes, as reviewed above, allows the examiner to construct a preliminary differential diagnosis, which is then refined by integration of information available from the history, associated cardiac findings, and the results of further testing when indicated.

FIGURE 51e-9 Strategy for evaluating heart murmurs. *If an electrocardiogram or chest x-ray has been obtained and is abnormal, echocardiography is indicated. TTE, transthoracic echocardiography; TEE, transesophageal echocardiography; MR, magnetic resonance. (Adapted from RO Bonow et al: J Am Coll Cardiol 32:1486, 1998.)

Chapter 52 Palpitations
Joseph Loscalzo

Palpitations are extremely common among patients who present to their internists and can best be defined as an intermittent “thumping,” “pounding,” or “fluttering” sensation in the chest. This sensation can be either intermittent or sustained and either regular or irregular. Most patients interpret palpitations as an unusual awareness of the heartbeat and become especially concerned when they sense that they have had “skipped” or “missing” heartbeats. Palpitations are often noted when the patient is quietly resting, during which time other stimuli are minimal. Palpitations that are positional generally reflect a structural process within (e.g., atrial myxoma) or adjacent to (e.g., mediastinal mass) the heart. Palpitations are brought about by cardiac (43%), psychiatric (31%), miscellaneous (10%), and unknown (16%) causes, according to one large series.
Among the cardiovascular causes are premature atrial and ventricular contractions, supraventricular and ventricular arrhythmias, mitral valve prolapse (with or without associated arrhythmias), aortic insufficiency, atrial myxoma, and pulmonary embolism. Intermittent palpitations are commonly caused by premature atrial or ventricular contractions: the post-extrasystolic beat is sensed by the patient owing to the increase in ventricular end-diastolic dimension following the pause in the cardiac cycle and the increased strength of contraction (post-extrasystolic potentiation) of that beat. Regular, sustained palpitations can be caused by regular supraventricular and ventricular tachycardias. Irregular, sustained palpitations can be caused by atrial fibrillation. It is important to note that most arrhythmias are not associated with palpitations. In those that are, it is often useful either to ask the patient to “tap out” the rhythm of the palpitations or to take his/her pulse during palpitations. In general, hyperdynamic cardiovascular states caused by catecholaminergic stimulation from exercise, stress, or pheochromocytoma can lead to palpitations. Palpitations are common among athletes, especially older endurance athletes. In addition, the enlarged ventricle of aortic regurgitation and accompanying hyperdynamic precordium frequently lead to the sensation of palpitations. Other factors that enhance the strength of myocardial contraction, including tobacco, caffeine, aminophylline, atropine, thyroxine, cocaine, and amphetamines, can cause palpitations. Psychiatric causes of palpitations include panic attacks or disorders, anxiety states, and somatization, alone or in combination. Patients with psychiatric causes for palpitations more commonly report a longer duration of the sensation (>15 min) and other accompanying symptoms than do patients with other causes. 
Among the miscellaneous causes of palpitations are thyrotoxicosis, drugs (see above) and ethanol, spontaneous skeletal muscle contractions of the chest wall, pheochromocytoma, and systemic mastocytosis. APPROACH TO THE PATIENT: The principal goal in assessing patients with palpitations is to determine whether the symptom is caused by a life-threatening arrhythmia. Patients with preexisting coronary artery disease (CAD) or risk factors for CAD are at greatest risk for ventricular arrhythmias (Chap. 276) as a cause for palpitations. In addition, the association of palpitations with other symptoms suggesting hemodynamic compromise, including syncope or lightheadedness, supports this diagnosis. Palpitations caused by sustained tachyarrhythmias in patients with CAD can be accompanied by angina pectoris or dyspnea, and, in patients with ventricular dysfunction (systolic or diastolic), aortic stenosis, hypertrophic cardiomyopathy, or mitral stenosis (with or without CAD), can be accompanied by dyspnea from increased left atrial and pulmonary venous pressure. Key features of the physical examination that will help confirm or refute the presence of an arrhythmia as a cause for palpitations (as well as its adverse hemodynamic consequences) include measurement of the vital signs, assessment of the jugular venous pressure and pulse, and auscultation of the chest and precordium. A resting electrocardiogram can be used to document the arrhythmia. If exertion is known to induce the arrhythmia and accompanying palpitations, exercise electrocardiography can be used to make the diagnosis. 
If the arrhythmia is sufficiently infrequent, other methods must be used, including continuous electrocardiographic (Holter) monitoring; telephonic monitoring, through which the patient can transmit an electrocardiographic tracing during a sensed episode; loop recordings (external or implantable), which can capture the electrocardiographic event for later review; and mobile cardiac outpatient telemetry. Data suggest that Holter monitoring is of limited clinical utility, while the implantable loop recorder and mobile cardiac outpatient telemetry are safe and possibly more cost-effective in the assessment of patients with (infrequent) recurrent, unexplained palpitations.

Most patients with palpitations do not have serious arrhythmias or underlying structural heart disease. If sufficiently troubling to the patient, occasional benign atrial or ventricular premature contractions can often be managed with beta-blocker therapy. Palpitations incited by alcohol, tobacco, or illicit drugs need to be managed by abstention, while those caused by pharmacologic agents should be addressed by considering alternative therapies when appropriate or possible. Psychiatric causes of palpitations may benefit from cognitive therapy or pharmacotherapy. The physician should note that palpitations are at the very least bothersome and, on occasion, frightening to the patient. Once serious causes for the symptom have been excluded, the patient should be reassured that the palpitations will not adversely affect prognosis.

Chapter 53 Dysphagia
Ikuo Hirano, Peter J. Kahrilas

Dysphagia—difficulty with swallowing—refers to problems with the transit of food or liquid from the mouth to the hypopharynx or through the esophagus. Severe dysphagia can compromise nutrition, cause aspiration, and reduce quality of life. Additional terminology pertaining to swallowing dysfunction is as follows.
Aphagia (inability to swallow) typically denotes complete esophageal obstruction, most commonly encountered in the acute setting of a food bolus or foreign body impaction. Odynophagia refers to painful swallowing, typically resulting from mucosal ulceration within the oropharynx or esophagus. It commonly is accompanied by dysphagia, but the converse is not true. Globus pharyngeus is a foreign body sensation localized in the neck that does not interfere with swallowing and sometimes is relieved by swallowing. Transfer dysphagia frequently results in nasal regurgitation and pulmonary aspiration during swallowing and is characteristic of oropharyngeal dysphagia. Phagophobia (fear of swallowing) and refusal to swallow may be psychogenic or related to anticipatory anxiety about food bolus obstruction, odynophagia, or aspiration. Swallowing begins with a voluntary (oral) phase that includes preparation during which food is masticated and mixed with saliva. This is followed by a transfer phase during which the bolus is pushed into the pharynx by the tongue. Bolus entry into the hypopharynx initiates the pharyngeal swallow response, which is centrally mediated and involves a complex series of actions, the net result of which is to propel food through the pharynx into the esophagus while preventing its entry into the airway. To accomplish this, the larynx is elevated and pulled forward, actions that also facilitate upper esophageal sphincter (UES) opening. Tongue pulsion then propels the bolus through the UES, followed by a peristaltic contraction that clears residue from the pharynx and through the esophagus. The lower esophageal sphincter (LES) relaxes as the food enters the esophagus and remains relaxed until the peristaltic contraction has delivered the bolus into the stomach. 
Peristaltic contractions elicited in response to a swallow are called primary peristalsis and involve sequenced inhibition followed by contraction of the musculature along the entire length of the esophagus. The inhibition that precedes the peristaltic contraction is called deglutitive inhibition. Local distention of the esophagus anywhere along its length, as may occur with gastroesophageal reflux, activates secondary peristalsis that begins at the point of distention and proceeds distally. Tertiary esophageal contractions are nonperistaltic, disordered esophageal contractions that may be observed to occur spontaneously during fluoroscopic observation. The musculature of the oral cavity, pharynx, UES, and cervical esophagus is striated and directly innervated by lower motor neurons carried in cranial nerves (Fig. 53-1). Oral cavity muscles are innervated by the fifth (trigeminal) and seventh (facial) cranial nerves; the tongue, by the twelfth (hypoglossal) cranial nerve. Pharyngeal muscles are innervated by the ninth (glossopharyngeal) and tenth (vagus) cranial nerves. Physiologically, the UES consists of the cricopharyngeus muscle, the adjacent inferior pharyngeal constrictor, and the proximal portion of the cervical esophagus. UES innervation is derived from the vagus nerve, whereas the innervation to the musculature acting on the UES to facilitate its opening during swallowing comes from the fifth, seventh, and twelfth cranial nerves. The UES remains closed at rest owing to both its inherent elastic properties and neurogenically mediated contraction of the cricopharyngeus muscle. UES opening during swallowing involves both cessation of vagal excitation to the cricopharyngeus and simultaneous contraction of the suprahyoid and geniohyoid muscles that pull open the UES in conjunction with the upward and forward displacement of the larynx. The neuromuscular apparatus for peristalsis is distinct in proximal and distal parts of the esophagus.
The cervical esophagus, like the pharyngeal musculature, consists of striated muscle and is directly innervated by lower motor neurons of the vagus nerve. Peristalsis in the proximal esophagus is governed by the sequential activation of the vagal motor neurons in the nucleus ambiguus. In contrast, the distal esophagus and LES are composed of smooth muscle and are controlled by excitatory and inhibitory neurons within the esophageal myenteric plexus. Medullary preganglionic neurons from the dorsal motor nucleus of the vagus trigger peristalsis via these ganglionic neurons during primary peristalsis. Neurotransmitters of the excitatory ganglionic neurons are acetylcholine and substance P; those of the inhibitory neurons are vasoactive intestinal peptide and nitric oxide. Peristalsis results from the patterned activation of inhibitory followed by excitatory ganglionic neurons, with progressive dominance of the inhibitory neurons distally. Similarly, LES relaxation occurs with the onset of deglutitive inhibition and persists until the peristaltic sequence is complete. At rest, the LES is contracted because of excitatory ganglionic stimulation and its intrinsic myogenic tone, a property that distinguishes it from the adjacent esophagus. The function of the LES is supplemented by the surrounding muscle of the right diaphragmatic crus, which acts as an external sphincter during inspiration, cough, or abdominal straining. Dysphagia can be subclassified both by location and by the circumstances in which it occurs. With respect to location, distinct considerations apply to oral, pharyngeal, or esophageal dysphagia. Normal transport of an ingested bolus depends on the consistency and size of the bolus, the caliber of the lumen, the integrity of peristaltic contraction, and deglutitive inhibition of both the UES and the LES. 
Dysphagia caused by an oversized bolus or a narrow lumen is called structural dysphagia, whereas dysphagia due to abnormalities of peristalsis or impaired sphincter relaxation after swallowing is called propulsive or motor dysphagia. More than one mechanism may be operative in a patient with dysphagia. Scleroderma commonly presents with absent peristalsis as well as a weakened LES that predisposes patients to peptic stricture formation. Likewise, radiation therapy for head and neck cancer may compound the functional deficits in the oropharyngeal swallow attributable to the tumor and cause cervical esophageal stenosis.

FIGURE 53-1 Sagittal and diagrammatic views of the musculature involved in enacting oropharyngeal swallowing. Note the dominance of the tongue in the sagittal view and the intimate relationship between the entrance to the larynx (airway) and the esophagus. In the resting configuration illustrated, the esophageal inlet is closed. This is transiently reconfigured such that the esophageal inlet is open and the laryngeal inlet closed during swallowing. (Adapted from PJ Kahrilas, in DW Gelfand and JE Richter [eds]: Dysphagia: Diagnosis and Treatment. New York: Igaku-Shoin Medical Publishers, 1989, pp. 11–28.)

Oral and Pharyngeal (Oropharyngeal) Dysphagia Oral-phase dysphagia is associated with poor bolus formation and control so that food has prolonged retention within the oral cavity and may seep out of the mouth. Drooling and difficulty in initiating swallowing are other characteristic signs. Poor bolus control also may lead to premature spillage of food into the hypopharynx with resultant aspiration into the trachea or regurgitation into the nasal cavity.
Pharyngeal-phase dysphagia is associated with retention of food in the pharynx due to poor tongue or pharyngeal propulsion or obstruction at the UES. Signs and symptoms of concomitant hoarseness or cranial nerve dysfunction may be associated with oropharyngeal dysphagia. Oropharyngeal dysphagia may be due to neurologic, muscular, structural, iatrogenic, infectious, and metabolic causes. Iatrogenic, neurologic, and structural pathologies are most common. Iatrogenic causes include surgery and radiation, often in the setting of head and neck cancer. Neurogenic dysphagia resulting from cerebrovascular accidents, Parkinson’s disease, and amyotrophic lateral sclerosis is a major source of morbidity related to aspiration and malnutrition. Medullary nuclei directly innervate the oropharynx. Lateralization of pharyngeal dysphagia implies either a structural pharyngeal lesion or a neurologic process that selectively targeted the ipsilateral brainstem nuclei or cranial nerve. Advances in functional brain imaging have elucidated an important role of the cerebral cortex in swallow function and dysphagia. Asymmetry in the cortical representation of the pharynx provides an explanation for the dysphagia that occurs as a consequence of unilateral cortical cerebrovascular accidents. Oropharyngeal structural lesions causing dysphagia include Zenker’s diverticulum, cricopharyngeal bar, and neoplasia. Zenker’s diverticulum typically is encountered in elderly patients, with an estimated prevalence between 1:1000 and 1:10,000. In addition to dysphagia, patients may present with regurgitation of particulate food debris, aspiration, and halitosis. The pathogenesis is related to stenosis of the cricopharyngeus that causes diminished opening of the UES and results in increased hypopharyngeal pressure during swallowing with development of a pulsion diverticulum immediately above the cricopharyngeus in a region of potential weakness known as Killian’s dehiscence. 
A cricopharyngeal bar, appearing as a prominent indentation behind the lower third of the cricoid cartilage, is related to Zenker’s diverticulum in that it involves limited distensibility of the cricopharyngeus and can lead to the formation of a Zenker’s diverticulum. However, a cricopharyngeal bar is a common radiographic finding, and most patients with transient cricopharyngeal bars are asymptomatic, making it important to rule out alternative etiologies of dysphagia before treatment. Furthermore, cricopharyngeal bars may be secondary to other neuromuscular disorders. Since the pharyngeal phase of swallowing occurs in less than a second, rapid-sequence fluoroscopy is necessary to evaluate for functional abnormalities. Adequate fluoroscopic examination requires that the patient be conscious and cooperative. The study incorporates recordings of swallow sequences during ingestion of food and liquids of varying consistencies. The pharynx is examined to detect bolus retention, regurgitation into the nose, or aspiration into the trachea. Timing and integrity of pharyngeal contraction and opening of the UES with a swallow are analyzed to assess both aspiration risk and the potential for swallow therapy. Structural abnormalities of the oropharynx, especially those which may require biopsies, also should be assessed by direct laryngoscopic examination.

Esophageal Dysphagia The adult esophagus measures 18–26 cm in length and is anatomically divided into the cervical esophagus, extending from the pharyngoesophageal junction to the suprasternal notch, and the thoracic esophagus, which continues to the diaphragmatic hiatus. When distended, the esophageal lumen has internal dimensions of about 2 cm in the anteroposterior plane and 3 cm in the lateral plane.
Solid food dysphagia becomes common when the lumen is narrowed to <13 mm but also can occur with larger diameters in the setting of poorly masticated food or motor dysfunction. Circumferential lesions are more likely to cause dysphagia than are lesions that involve only a partial circumference of the esophageal wall. The most common structural causes of dysphagia are Schatzki’s rings, eosinophilic esophagitis, and peptic strictures. Dysphagia also occurs in the setting of gastroesophageal reflux disease without a stricture, perhaps on the basis of altered esophageal sensation, distensibility, or motor function. Propulsive disorders leading to esophageal dysphagia result from abnormalities of peristalsis and/or deglutitive inhibition, potentially affecting the cervical or thoracic esophagus. Since striated muscle pathology usually involves both the oropharynx and the cervical esophagus, the clinical manifestations usually are dominated by oropharyngeal dysphagia. Diseases affecting smooth muscle involve both the thoracic esophagus and the LES. A dominant manifestation of this, absent peristalsis, refers to either the complete absence of swallow-induced contraction or the presence of nonperistaltic, disordered contractions. Absent peristalsis and failure of deglutitive LES relaxation are the defining features of achalasia. In diffuse esophageal spasm (DES), LES function is normal, with the disordered motility restricted to the esophageal body. Absent peristalsis combined with severe weakness of the LES is a nonspecific pattern commonly found in patients with scleroderma. APPROACH TO THE PATIENT: Figure 53-2 shows an algorithm for the approach to a patient with dysphagia. The patient history is extremely valuable in making a presumptive diagnosis or at least substantially restricting the differential diagnoses in most patients. 
Key elements of the history are the localization of dysphagia, the circumstances in which dysphagia is experienced, other symptoms associated with dysphagia, and progression. Dysphagia that localizes to the suprasternal notch may indicate either an oropharyngeal or an esophageal etiology as distal dysphagia is referred proximally about 30% of the time. Dysphagia that localizes to the chest is esophageal in origin. Nasal regurgitation and tracheobronchial aspiration manifest by coughing with swallowing are hallmarks of oropharyngeal dysphagia. Severe cough with swallowing may also be a sign of a tracheoesophageal fistula. The presence of hoarseness may be another important diagnostic clue. When hoarseness precedes dysphagia, the primary lesion is usually laryngeal; hoarseness that occurs after the development of dysphagia may result from compromise of the recurrent laryngeal nerve by a malignancy. The type of food causing dysphagia is a crucial detail. Intermittent dysphagia that occurs only with solid food implies structural dysphagia, whereas constant dysphagia with both liquids and solids strongly suggests a motor abnormality. Two caveats to this pattern are that despite having a motor abnormality, patients with scleroderma generally develop mild dysphagia for solids only and, somewhat paradoxically, that patients with oropharyngeal dysphagia often have greater difficulty managing liquids than solids. Dysphagia that is progressive over the course of weeks to months raises concern for neoplasia. Episodic dysphagia to solids that is unchanged over years indicates a benign disease process such as a Schatzki’s ring or eosinophilic esophagitis. Food impaction with a prolonged inability to pass an ingested bolus even with ingestion of liquid is typical of a structural dysphagia. Chest pain frequently accompanies dysphagia whether it is related to motor disorders, structural disorders, or reflux disease. 
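As a rough illustration of how these history features partition esophageal dysphagia, the toy function below encodes the solid-versus-liquid and time-course rules (the function, argument names, and category strings are hypothetical, and the caveats noted above about scleroderma and oropharyngeal dysphagia are deliberately set aside):

```python
def classify_dysphagia(solids=False, liquids=False, progressive=False):
    """Illustrative first-pass reading of an esophageal dysphagia history.

    Hypothetical helper; real evaluation follows the full algorithm of
    Fig. 53-2 and integrates many more findings.
    """
    if solids and liquids:
        # constant dysphagia with both consistencies suggests a motor abnormality
        category = "propulsive (motor)"
    elif solids:
        # dysphagia only with solid food implies a structural lesion
        category = "structural"
    else:
        category = "indeterminate"
    if progressive:
        # progression over weeks to months raises concern for neoplasia
        category += ", concern for neoplasia"
    return category
```

So a history of solid-food-only dysphagia that has progressed over two months would be read as structural with concern for neoplasia, whereas an unchanged, episodic solid-food pattern over years would point instead toward a benign process such as a Schatzki's ring.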
A prolonged history of heartburn preceding the onset of dysphagia is suggestive of peptic stricture and, infrequently, esophageal adenocarcinoma. A history of prolonged nasogastric intubation, esophageal or head and neck surgery, ingestion of caustic agents or pills, previous radiation or chemotherapy, or associated mucocutaneous diseases may help isolate the cause of dysphagia. With accompanying odynophagia, which usually is indicative of ulceration, infectious or pill-induced esophagitis should be suspected. In patients with AIDS or other immunocompromised states, esophagitis due to opportunistic infections such as Candida, herpes simplex virus, or cytomegalovirus and to tumors such as Kaposi’s sarcoma and lymphoma should be considered. A strong history of atopy increases concerns for eosinophilic esophagitis.

FIGURE 53-2 Approach to the patient with dysphagia. Etiologies in bold print are the most common. ENT, ear, nose, and throat; GERD, gastroesophageal reflux disease.

Physical examination is important in the evaluation of oral and pharyngeal dysphagia because dysphagia is usually only one of many manifestations of a more global disease process. Signs of bulbar or pseudobulbar palsy, including dysarthria, dysphonia, ptosis, tongue atrophy, and hyperactive jaw jerk, in addition to evidence of generalized neuromuscular disease, should be elicited. The neck should be examined for thyromegaly. A careful inspection of the mouth and pharynx should disclose lesions that may interfere with passage of food. Missing dentition can interfere with mastication and exacerbate an existing cause of dysphagia. Physical examination is less helpful in the evaluation of esophageal dysphagia as most relevant pathology is restricted to the esophagus. The notable exception is skin disease.
Changes in the skin may suggest a diagnosis of scleroderma or mucocutaneous diseases such as pemphigoid, lichen planus, and epidermolysis bullosa, all of which can involve the esophagus. Although most instances of dysphagia are attributable to benign disease processes, dysphagia is also a cardinal symptom of several malignancies, making it an important symptom to evaluate. Cancer may result in dysphagia due to intraluminal obstruction (esophageal or proximal gastric cancer, metastatic deposits), extrinsic compression (lymphoma, lung cancer), or paraneoplastic syndromes. Even when not attributable to malignancy, dysphagia is usually a manifestation of an identifiable and treatable disease entity, making its evaluation beneficial to the patient and gratifying to the practitioner. The specific diagnostic algorithm to pursue is guided by the details of the history (Fig. 53-2). If oral or pharyngeal dysphagia is suspected, a fluoroscopic swallow study, usually done by a swallow therapist, is the procedure of choice. Otolaryngoscopic and neurologic evaluation also can be important, depending on the circumstances. For suspected esophageal dysphagia, upper endoscopy is the single most useful test. Endoscopy allows better visualization of mucosal lesions than does barium radiography and also allows one to obtain mucosal biopsies. Endoscopic or histologic abnormalities are evident in the leading causes of esophageal dysphagia: Schatzki ring, gastroesophageal reflux disease, and eosinophilic esophagitis. Furthermore, therapeutic intervention with esophageal dilation can be done as part of the procedure if it is deemed necessary. The emergence of eosinophilic esophagitis as a leading cause of dysphagia in both children and adults has led to the recommendation that esophageal mucosal biopsies be obtained routinely in the evaluation of unexplained dysphagia even if endoscopically identified esophageal mucosal lesions are absent.
For cases of suspected esophageal motility disorders, endoscopy is still the appropriate initial evaluation as neoplastic and inflammatory conditions can secondarily produce patterns of either achalasia or esophageal spasm. Esophageal manometry is done if dysphagia is not adequately explained by endoscopy or to confirm the diagnosis of a suspected esophageal motor disorder. Barium radiography can provide useful adjunctive information in cases of subtle or complex esophageal strictures, prior esophageal surgery, esophageal diverticula, or paraesophageal herniation. In specific cases, computed tomography (CT) examination and endoscopic ultrasonography may be useful. Treatment of dysphagia depends on both the locus and the specific etiology. Oropharyngeal dysphagia most commonly results from functional deficits caused by neurologic disorders. In such circumstances, the treatment focuses on utilizing postures or maneuvers devised to reduce pharyngeal residue and enhance airway protection learned under the direction of a trained swallow therapist. Aspiration risk may be reduced by altering the consistency of ingested food and liquid. Dysphagia resulting from a cerebrovascular accident usually, but not always, spontaneously improves within the first few weeks after the event. More severe and persistent cases may require gastrostomy and enteral feeding. Patients with myasthenia gravis (Chap. 461) and polymyositis (Chap. 388) may respond to medical treatment of the primary neuromuscular disease. Surgical intervention with cricopharyngeal myotomy is usually not helpful, with the exception of specific disorders such as the idiopathic cricopharyngeal bar, Zenker’s diverticulum, and oculopharyngeal muscular dystrophy. Chronic neurologic disorders such as Parkinson’s disease and amyotrophic lateral sclerosis may manifest with severe oropharyngeal dysphagia. 
Feeding by a nasogastric tube or an endoscopically placed gastrostomy tube may be considered for nutritional support; however, these maneuvers do not provide protection against aspiration of salivary secretions or refluxed gastric contents. Treatment of esophageal dysphagia is covered in detail in Chap. 347. The majority of causes of esophageal dysphagia are effectively managed by means of esophageal dilatation using bougie or balloon dilators. Cancer and achalasia are often managed surgically, although endoscopic techniques are available for both palliation and primary therapy, respectively. Infectious etiologies respond to antimicrobial medications or treatment of the underlying immunosuppressive state. Finally, eosinophilic esophagitis has emerged as an important cause of dysphagia that is amenable to treatment by elimination of dietary allergens or administration of swallowed, topically acting glucocorticoids.

Chapter 54 Nausea, Vomiting, and Indigestion
William L. Hasler

Nausea is the subjective feeling of a need to vomit. Vomiting (emesis) is the oral expulsion of gastrointestinal contents due to contractions of gut and thoracoabdominal wall musculature. Vomiting is contrasted with regurgitation, the effortless passage of gastric contents into the mouth. Rumination is the repeated regurgitation of food residue, which may be rechewed and reswallowed. In contrast to emesis, these phenomena may exhibit volitional control. Indigestion is a term encompassing a range of complaints including nausea, vomiting, heartburn, regurgitation, and dyspepsia (the presence of symptoms thought to originate in the gastroduodenal region). Some individuals with dyspepsia report predominantly epigastric burning, gnawing, or pain. Others experience postprandial fullness, early satiety (an inability to complete a meal due to premature fullness), bloating, eructation (belching), and anorexia.
Vomiting is coordinated by the brainstem and is effected by responses in the gut, pharynx, and somatic musculature. Mechanisms underlying nausea are poorly understood but likely involve the cerebral cortex, as nausea requires conscious perception. This is supported by functional brain imaging studies showing activation of a range of cerebral cortical regions during nausea. Coordination of Emesis Brainstem nuclei—including the nucleus tractus solitarius; dorsal vagal and phrenic nuclei; medullary nuclei regulating respiration; and nuclei that control pharyngeal, facial, and tongue movements—coordinate initiation of emesis. Neurokinin NK1, serotonin 5-HT3, and vasopressin pathways participate in this coordination. Somatic and visceral muscles respond stereotypically during emesis. Inspiratory thoracic and abdominal wall muscles contract, producing high intrathoracic and intraabdominal pressures that evacuate the stomach. The gastric cardia herniates above the diaphragm, and the larynx moves upward to propel the vomitus. Distally migrating gut contractions are normally regulated by an electrical phenomenon, the slow wave, which cycles at 3 cycles/min in the stomach and 11 cycles/min in the duodenum. During emesis, the slow wave is abolished and is replaced by orally propagating spikes that evoke retrograde contractions that assist in expulsion of gut contents. Activators of Emesis Emetic stimuli act at several sites. Emesis evoked by unpleasant thoughts or smells originates in the brain, whereas cranial nerves mediate vomiting after gag reflex activation. Motion sickness and inner ear disorders act on the labyrinthine system. Gastric irritants and cytotoxic agents like cisplatin stimulate gastroduodenal vagal afferent nerves. Nongastric afferents are activated by intestinal and colonic obstruction and mesenteric ischemia. 
The area postrema, in the medulla, responds to bloodborne stimuli (emetogenic drugs, bacterial toxins, uremia, hypoxia, ketoacidosis) and is termed the chemoreceptor trigger zone. Neurotransmitters mediating vomiting are selective for different sites. Labyrinthine disorders stimulate vestibular muscarinic M1 and histaminergic H1 receptors. Vagal afferent stimuli activate serotonin 5-HT3 receptors. The area postrema is served by nerves acting on 5-HT3, M1, H1, and dopamine D2 subtypes. Cannabinoid CB1 pathways may participate in the cerebral cortex. Optimal pharmacologic therapy of vomiting requires understanding of these pathways. Nausea and vomiting are caused by conditions within and outside the gut as well as by drugs and circulating toxins (Table 54-1). Intraperitoneal Disorders Visceral obstruction and inflammation of hollow and solid viscera may elicit vomiting. Gastric obstruction results from ulcers and malignancy, whereas small-bowel and colon blockage occur because of adhesions, benign or malignant tumors, volvulus, intussusception, or inflammatory diseases like Crohn’s disease. The superior mesenteric artery syndrome, occurring after weight loss or prolonged bed rest, results when the duodenum is compressed by the overlying superior mesenteric artery. Abdominal irradiation impairs intestinal motor function and induces strictures. Biliary colic causes nausea by acting on local afferent nerves. Vomiting with pancreatitis, cholecystitis, and appendicitis is due to visceral irritation and induction of ileus. Enteric infections with viruses like norovirus or rotavirus or bacteria such as Staphylococcus aureus and Bacillus cereus often cause vomiting, especially in children. Opportunistic infections like cytomegalovirus or herpes simplex virus induce emesis in immunocompromised individuals. Gut sensorimotor dysfunction often causes nausea and vomiting. 
Gastroparesis presents with symptoms of gastric retention with evidence of delayed gastric emptying and occurs after vagotomy or with pancreatic carcinoma, mesenteric vascular insufficiency, or organic diseases like diabetes, scleroderma, and amyloidosis. Idiopathic gastroparesis is the most common etiology. It occurs in the absence of systemic illness and may follow a viral illness, suggesting an infectious trigger. Intestinal pseudoobstruction is characterized by disrupted intestinal and colonic motor activity with retention of food residue and secretions; bacterial overgrowth; nutrient malabsorption; and symptoms of nausea, vomiting, bloating, pain, and altered defecation. Intestinal pseudoobstruction may be idiopathic, inherited as a familial visceral myopathy or neuropathy, result from systemic disease, or occur as a paraneoplastic consequence of malignancy (e.g., small-cell lung carcinoma). Patients with gastroesophageal reflux may report nausea and vomiting, as do some with irritable bowel syndrome (IBS) or chronic constipation. Other functional gastroduodenal disorders without organic abnormalities have been characterized in adults. Chronic idiopathic nausea is defined as nausea without vomiting occurring several times a week. Functional vomiting is defined as one or more vomiting episodes weekly in the absence of an eating disorder or psychiatric disease. Cyclic vomiting syndrome presents with periodic discrete episodes of relentless nausea and vomiting in children and adults and shows an association with migraine headaches, suggesting that some cases may be migraine variants. Some adult cases have been described in association with rapid gastric emptying. A related condition, cannabinoid hyperemesis syndrome, presents with cyclical vomiting with intervening well periods in individuals (mostly men) who use large quantities of cannabis over many years and resolves with its discontinuation. 
Pathologic behaviors such as taking prolonged hot baths or showers are associated with the syndrome. Rumination syndrome, characterized by repetitive regurgitation of recently ingested food, is often misdiagnosed as refractory vomiting. Extraperitoneal Disorders Myocardial infarction and congestive heart failure may cause nausea and vomiting. Postoperative emesis occurs after 25% of surgeries, most commonly laparotomy and orthopedic surgery. Increased intracranial pressure from tumors, bleeding, abscess, or blockage of cerebrospinal fluid outflow produces vomiting with or without nausea. Patients with psychiatric illnesses including anorexia nervosa, bulimia nervosa, anxiety, and depression often report significant nausea that may be associated with delayed gastric emptying. Medications and Metabolic Disorders Drugs evoke vomiting by action on the stomach (analgesics, erythromycin) or area postrema (opiates, anti-parkinsonian drugs). Other emetogenic agents include antibiotics, cardiac antiarrhythmics, antihypertensives, oral hypoglycemics, antidepressants (selective serotonin and serotonin norepinephrine reuptake inhibitors), smoking cessation drugs (varenicline, nicotine), and contraceptives. Cancer chemotherapy causes vomiting that is acute (within hours of administration), delayed (after 1 or more days), or anticipatory. Acute emesis from highly emetogenic agents (e.g., cisplatin) is mediated by 5-HT3 pathways, whereas delayed emesis is less dependent on 5-HT3 mechanisms. Anticipatory nausea may respond to anxiolytic therapy rather than antiemetics. Metabolic disorders elicit nausea and vomiting. Pregnancy is the most prevalent endocrinologic cause, and nausea affects 70% of women in the first trimester. Hyperemesis gravidarum is a severe form of nausea of pregnancy that produces significant fluid loss and electrolyte disturbances. Uremia, ketoacidosis, adrenal insufficiency, and parathyroid and thyroid disease are other metabolic etiologies. 
Circulating toxins evoke emesis via effects on the area postrema. Endogenous toxins are generated in fulminant liver failure, whereas exogenous enterotoxins may be produced by enteric bacterial infection. Ethanol intoxication is a common toxic etiology of nausea and vomiting. APPROACH TO THE PATIENT: Nausea and Vomiting The history helps define the etiology of nausea and vomiting. Drugs, toxins, and infections often cause acute symptoms, whereas established illnesses evoke chronic complaints. Gastroparesis and pyloric obstruction elicit vomiting within an hour of eating. Emesis from intestinal blockage occurs later. Vomiting occurring within minutes of meal consumption prompts consideration of rumination syndrome. With severe gastric emptying delays, the vomitus may contain food residue ingested hours or days before. Hematemesis raises suspicion of an ulcer, malignancy, or Mallory-Weiss tear. Feculent emesis is noted with distal intestinal or colonic obstruction. Bilious vomiting excludes gastric obstruction, whereas emesis of undigested food is consistent with a Zenker’s diverticulum or achalasia. Vomiting can relieve abdominal pain from a bowel obstruction, but has no effect in pancreatitis or cholecystitis. Profound weight loss raises concern about malignancy or obstruction. Fevers suggest inflammation. An intracranial source is considered if there are headaches or visual field changes. Vertigo or tinnitus indicates labyrinthine disease. The physical examination complements the history. Orthostatic hypotension and reduced skin turgor indicate intravascular fluid loss. Pulmonary abnormalities raise concern for aspiration of vomitus. Abdominal auscultation may reveal absent bowel sounds with ileus. High-pitched rushes suggest bowel obstruction, whereas a succussion splash upon abrupt lateral movement of the patient is found with gastroparesis or pyloric obstruction. 
Tenderness or involuntary guarding raises suspicion of inflammation, whereas fecal blood suggests mucosal injury from ulcer, ischemia, or tumor. Neurologic disease presents with papilledema, visual field loss, or focal neural abnormalities. Neoplasm is suggested by palpation of masses or adenopathy. For intractable symptoms or an elusive diagnosis, selected screening tests can direct clinical care. Electrolyte replacement is indicated for hypokalemia or metabolic alkalosis. Iron-deficiency anemia mandates a search for mucosal injury. Pancreaticobiliary disease is indicated by abnormal pancreatic or liver biochemistries, whereas endocrinologic, rheumatologic, or paraneoplastic etiologies are suggested by hormone or serologic abnormalities. If bowel obstruction is suspected, supine and upright abdominal radiographs may show intestinal air-fluid levels with reduced colonic air. Ileus is characterized by diffusely dilated air-filled bowel loops. Anatomic studies may be indicated if initial testing is nondiagnostic. Upper endoscopy detects ulcers, malignancy, and retained gastric food residue in gastroparesis. Small-bowel barium radiography or computed tomography (CT) diagnoses partial bowel obstruction. Colonoscopy or contrast enema radiography detects colonic obstruction. Ultrasound or CT defines intraperitoneal inflammation; CT and magnetic resonance imaging (MRI) enterography provide superior definition of inflammation in Crohn’s disease. CT or MRI of the head can delineate intracranial disease. Mesenteric angiography, CT, or MRI is useful for suspected ischemia. Gastrointestinal motility testing may detect an underlying motor disorder when anatomic abnormalities are absent. Gastroparesis commonly is diagnosed by gastric scintigraphy, by which emptying of a radiolabeled meal is measured. 
Isotopic breath tests and wireless motility capsule methods are alternative tests to define gastroparesis in different regions of the world. Intestinal pseudoobstruction often is suggested by abnormal barium transit and luminal dilation on small-bowel contrast radiography. Delayed small-bowel transit also may be detected by wireless capsule techniques. Small-intestinal manometry can confirm the diagnosis and further characterize the motor abnormality as neuropathic or myopathic based on contractile patterns. Such investigation can obviate the need for surgical intestinal biopsy to evaluate for smooth muscle or neuronal degeneration. Combined ambulatory esophageal pH/impedance testing and high-resolution manometry can facilitate diagnosis of rumination syndrome. Therapy of vomiting is tailored to correcting remediable abnormalities if possible. Hospitalization is considered for severe dehydration, especially if oral fluid replenishment cannot be sustained. Once oral intake is tolerated, nutrients are restarted with low-fat liquids, because lipids delay gastric emptying. Foods high in indigestible residue are avoided because these prolong gastric retention. Controlling blood glucose in poorly controlled diabetics can reduce hospitalizations in gastroparesis. The most commonly used antiemetic agents act on central nervous system sites (Table 54-2). Antihistamines like dimenhydrinate and meclizine and anticholinergics like scopolamine act on labyrinthine pathways to treat motion sickness and inner ear disorders. Dopamine D2 antagonists treat emesis evoked by area postrema stimuli and are used for medication, toxic, and metabolic etiologies. Dopamine antagonists cross the blood-brain barrier and cause anxiety, movement disorders, and hyperprolactinemic effects (galactorrhea, sexual dysfunction). Other classes exhibit antiemetic properties. 
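The correspondence between emetic pathways, their receptors, and the antiemetic classes described above can be summarized as a small lookup table. This is an illustrative study aid drawn only from the sites, receptors, and drug examples named in the text; the data structure and function name are hypothetical, and it is not a prescribing reference.

```python
# Illustrative summary (study aid only, not a prescribing reference):
# emetic pathways -> receptors -> representative antiemetic classes,
# using only the examples named in the text.
EMETIC_PATHWAYS = {
    "labyrinthine": {
        "receptors": ["muscarinic M1", "histaminergic H1"],
        "drug_classes": [
            "antihistamines (dimenhydrinate, meclizine)",
            "anticholinergics (scopolamine)",
        ],
    },
    "vagal afferent": {
        "receptors": ["serotonin 5-HT3"],
        "drug_classes": ["5-HT3 antagonists (ondansetron, granisetron)"],
    },
    "area postrema": {
        "receptors": ["5-HT3", "M1", "H1", "dopamine D2"],
        "drug_classes": ["dopamine D2 antagonists"],
    },
}


def candidate_classes(pathway: str) -> list[str]:
    """Look up antiemetic classes matched to a suspected emetic pathway."""
    return EMETIC_PATHWAYS[pathway]["drug_classes"]
```

For example, `candidate_classes("labyrinthine")` returns the antihistamine and anticholinergic entries, mirroring the text's point that labyrinthine (motion-sickness) emesis is treated via M1/H1 pathways.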
5-HT3 antagonists such as ondansetron and granisetron can prevent postoperative vomiting, radiation therapy–induced symptoms, and cancer chemotherapy–induced emesis, but also are used for other causes of emesis with limited evidence for efficacy. Tricyclic antidepressant agents provide symptomatic benefit in patients with chronic idiopathic nausea and functional vomiting as well as in long-standing diabetic patients with nausea and vomiting. Other antidepressants such as mirtazapine and olanzapine also may exhibit antiemetic effects. Drugs that stimulate gastric emptying are used for gastroparesis (Table 54-2). Metoclopramide, a combined 5-HT4 agonist and D2 antagonist, is effective in gastroparesis, but antidopaminergic side effects, such as dystonias and mood and sleep disturbances, limit use in ∼25% of cases. Erythromycin increases gastroduodenal motility by action on receptors for motilin, an endogenous stimulant of fasting motor activity. Intravenous erythromycin is useful for inpatients with refractory gastroparesis, but oral forms have some utility. Domperidone, a D2 antagonist not available in the United States, exhibits prokinetic and antiemetic effects but does not cross into most brain regions; thus, anxiety and dystonic reactions are rare. The main side effects of domperidone relate to induction of hyperprolactinemia via effects on pituitary regions served by a porous blood-brain barrier. Refractory motility disorders pose significant challenges. Intestinal pseudoobstruction may respond to the somatostatin analogue octreotide, which induces propagative small-intestinal motor complexes. Acetylcholinesterase inhibitors such as pyridostigmine are also observed to benefit some patients with small-bowel dysmotility. Pyloric injections of botulinum toxin are reported in uncontrolled studies to reduce gastroparesis symptoms, but small controlled trials observe benefits no greater than sham treatments. Surgical pyloroplasty has improved symptoms in case series. 
Placing a feeding jejunostomy reduces hospitalizations and improves overall health in some patients with drug-refractory gastroparesis. Postvagotomy gastroparesis may improve with near-total gastric resection; similar operations are now being tried for other gastroparesis etiologies. Implanted gastric electrical stimulators may reduce symptoms, enhance nutrition, improve quality of life, and decrease health care expenditures in medication-refractory gastroparesis, but small controlled trials do not report convincing benefits. Safety concerns about selected antiemetics have been emphasized. Centrally acting antidopaminergics, especially metoclopramide, can cause irreversible movement disorders such as tardive dyskinesia, particularly in older patients. This complication should be carefully explained and documented in the medical record. Some agents with antiemetic properties including domperidone, erythromycin, tricyclics, and 5-HT3 antagonists can induce dangerous cardiac rhythm disturbances, especially in those with QTc interval prolongation on electrocardiography (ECG). Surveillance ECG testing has been advocated for some of these agents. Some cancer chemotherapies are intensely emetogenic (Chap. 103e). Combining a 5-HT3 antagonist, an NK1 antagonist, and a glucocorticoid provides significant control of both acute and delayed vomiting after highly emetogenic chemotherapy. Unlike other drugs in the same class, the 5-HT3 antagonist palonosetron exhibits efficacy at preventing delayed chemotherapy-induced vomiting. Benzodiazepines such as lorazepam can reduce anticipatory nausea and vomiting. Miscellaneous therapies with benefit in chemotherapy-induced emesis include cannabinoids, olanzapine, and alternative therapies like ginger. Most antiemetic regimens produce greater reductions in vomiting than in nausea. Clinicians should exercise caution in managing pregnant patients with nausea. 
Studies of the teratogenic effects of antiemetic agents provide conflicting results. Few controlled trials have been performed in nausea of pregnancy. Antihistamines such as meclizine and doxylamine, antidopaminergics such as prochlorperazine, and antiserotonergics such as ondansetron demonstrate limited efficacy. Some obstetricians offer alternative therapies such as pyridoxine, acupressure, or ginger. Managing cyclic vomiting syndrome is a challenge. Prophylaxis with tricyclic antidepressants, cyproheptadine, or β-adrenoceptor antagonists can reduce the severity and frequency of attacks. Intravenous 5-HT3 antagonists combined with the sedating effects of a benzodiazepine like lorazepam are a mainstay of treatment of acute flares. Small studies report benefits with antimigraine agents, including the 5-HT1 agonist sumatriptan, as well as selected anticonvulsants such as zonisamide and levetiracetam. The most common causes of indigestion are gastroesophageal reflux and functional dyspepsia. Other cases are a consequence of organic illness. Gastroesophageal Reflux Gastroesophageal reflux results from many physiologic defects. Reduced lower esophageal sphincter (LES) tone contributes to reflux in scleroderma and pregnancy and may be a factor in some patients without systemic illness. Others exhibit frequent transient LES relaxations (TLESRs) that cause bathing of the esophagus by acid or nonacidic fluid. Overeating and aerophagia override the barrier function of the LES, whereas reductions in esophageal body motility or salivary secretion prolong fluid exposure. Increased intragastric pressure promotes gastroesophageal reflux in obesity. The role of hiatal hernias is controversial—most reflux patients have hiatal hernias, but most with hiatal hernias do not have excess heartburn. Gastric Motor Dysfunction Disturbed gastric motility may contribute to gastroesophageal reflux in up to one-third of cases. 
Delayed gastric emptying is also found in ∼30% of functional dyspeptics. Conversely, some dyspeptics exhibit rapid gastric emptying. The relation of these defects to symptom induction is uncertain; studies show poor correlation between symptom severity and degrees of motor dysfunction. Impaired gastric fundus relaxation after eating (i.e., accommodation) may underlie selected dyspeptic symptoms like bloating, nausea, and early satiety in ∼40% of patients. Visceral Afferent Hypersensitivity Disturbed gastric sensation is another pathogenic factor in functional dyspepsia. Visceral hypersensitivity was first reported in IBS with demonstration of heightened perception of rectal balloon inflation without changes in compliance. Similarly, ∼35% of dyspeptic patients note discomfort with fundic distention to lower pressures than healthy controls. Others with dyspepsia exhibit hypersensitivity to chemical stimulation with capsaicin or with acid or lipid exposure in the duodenum. Some individuals with functional heartburn without increased acid or nonacid reflux are believed to have heightened perception of normal esophageal pH and volume. Other Factors Helicobacter pylori has a clear etiologic role in peptic ulcer disease, but ulcers cause a minority of dyspepsia cases. H. pylori is a minor factor in the genesis of functional dyspepsia. Functional dyspepsia is associated with chronic fatigue, produces reduced physical and mental well-being, and is exacerbated by stress. Anxiety, depression, and somatization may have contributing roles in some cases. Functional MRI studies show increased activation of several brain regions, emphasizing contributions from central nervous system factors. Analgesics cause dyspepsia, whereas nitrates, calcium channel blockers, theophylline, and progesterone promote gastroesophageal reflux. Other stimuli that induce reflux include ethanol, tobacco, and caffeine via LES relaxation. Genetic factors may promote development of reflux and dyspepsia. 
DIFFERENTIAL DIAGNOSIS Gastroesophageal Reflux Disease Gastroesophageal reflux disease (GERD) is prevalent. Heartburn is reported once monthly by 40% of Americans and daily by 7–10%. Most cases of heartburn occur because of excess acid reflux, but reflux of nonacidic fluid produces similar symptoms. Alkaline reflux esophagitis produces GERD-like symptoms most often in patients who have had surgery for peptic ulcer disease. Ten percent of patients with heartburn exhibit normal esophageal acid exposure and no increase in nonacidic reflux (functional heartburn). Functional Dyspepsia Nearly 25% of the populace has dyspepsia at least six times yearly, but only 10–20% present to clinicians. Functional dyspepsia, the cause of symptoms in >60% of dyspeptic patients, is defined as ≥3 months of bothersome postprandial fullness, early satiety, or epigastric pain or burning with symptom onset at least 6 months before diagnosis in the absence of organic cause. Functional dyspepsia is subdivided into postprandial distress syndrome, characterized by meal-induced fullness, early satiety, and discomfort, and epigastric pain syndrome, which presents with epigastric burning pain unrelated to meals. Most cases follow a benign course, but some with H. pylori infection or on nonsteroidal anti-inflammatory drugs (NSAIDs) develop ulcers. As with idiopathic gastroparesis, some cases of functional dyspepsia result from prior infection. Ulcer Disease In most GERD patients, there is no destruction of the esophagus. However, 5% develop esophageal ulcers, and some form strictures. Symptoms cannot distinguish nonerosive from erosive or ulcerative esophagitis. A minority of cases of dyspepsia stem from gastric or duodenal ulcers. The most common causes of ulcer disease are H. pylori infection and use of NSAIDs. Other rare causes of gastroduodenal ulcers include Crohn’s disease (Chap. 351) and Zollinger-Ellison syndrome (Chap. 
348), resulting from gastrin overproduction by an endocrine tumor. Malignancy Dyspeptic patients often seek care because of fear of cancer, but few cases result from malignancy. Esophageal squamous cell carcinoma occurs most often with long-standing tobacco or ethanol intake. Other risks include prior caustic ingestion, achalasia, and the hereditary disorder tylosis. Esophageal adenocarcinoma usually complicates prolonged acid reflux. Eight to 20% of GERD patients exhibit esophageal intestinal metaplasia, termed Barrett’s metaplasia, a condition that predisposes to esophageal adenocarcinoma (Chap. 109). Gastric malignancies include adenocarcinoma, which is prevalent in certain Asian societies, and lymphoma. Other Causes Opportunistic fungal or viral esophageal infections may produce heartburn but more often cause odynophagia. Other causes of esophageal inflammation include eosinophilic esophagitis and pill esophagitis. Biliary colic is in the differential diagnosis of unexplained upper abdominal pain, but most patients with biliary colic report discrete acute episodes of right upper quadrant or epigastric pain rather than the chronic burning, discomfort, and fullness of dyspepsia. Twenty percent of patients with gastroparesis report a predominance of pain or discomfort rather than nausea and vomiting. Intestinal lactase deficiency as a cause of gas, bloating, and discomfort occurs in 15–25% of whites of northern European descent but is more common in blacks and Asians. Intolerance of other carbohydrates (e.g., fructose, sorbitol) produces similar symptoms. Small-intestinal bacterial overgrowth may cause dyspepsia, often associated with bowel dysfunction, distention, and malabsorption. Eosinophilic infiltration of the duodenal mucosa is described in some dyspeptics, particularly with postprandial distress syndrome. 
Celiac disease, pancreatic disease (chronic pancreatitis, malignancy), hepatocellular carcinoma, Ménétrier’s disease, infiltrative diseases (sarcoidosis, eosinophilic gastroenteritis), mesenteric ischemia, thyroid and parathyroid disease, and abdominal wall strain cause dyspepsia. Gluten sensitivity in the absence of celiac disease is reported to evoke unexplained upper abdominal symptoms. Extraperitoneal etiologies of indigestion include congestive heart failure and tuberculosis. APPROACH TO THE PATIENT: Indigestion Care of the indigestion patient requires a thorough interview. GERD classically produces heartburn, a substernal warmth that moves toward the neck. Heartburn often is exacerbated by meals and may awaken the patient. Associated symptoms include regurgitation of acid or nonacidic fluid and water brash, the reflex release of salty salivary secretions into the mouth. Atypical symptoms include pharyngitis, asthma, cough, bronchitis, hoarseness, and chest pain that mimics angina. Some patients with acid reflux on esophageal pH testing do not report heartburn, but note abdominal pain or other symptoms. Dyspeptic patients typically report symptoms referable to the upper abdomen that may be meal-related, as with postprandial distress syndrome, or independent of food ingestion, as in epigastric pain syndrome. Functional dyspepsia overlaps with other disorders including GERD, IBS, and idiopathic gastroparesis. The physical exam with GERD and functional dyspepsia usually is normal. In atypical GERD, pharyngeal erythema and wheezing may be noted. Recurrent acid regurgitation may cause poor dentition. Dyspeptics may exhibit epigastric tenderness or distention. Discriminating functional and organic causes of indigestion mandates excluding certain historic and exam features. Odynophagia suggests esophageal infection. 
Dysphagia is concerning for a benign or malignant esophageal blockage. Other alarm features include unexplained weight loss, recurrent vomiting, occult or gross bleeding, jaundice, palpable mass or adenopathy, and a family history of gastrointestinal neoplasm. Because indigestion is prevalent and most cases result from GERD or functional dyspepsia, a general principle is to perform only limited and directed diagnostic testing of selected individuals. Once alarm factors are excluded (Table 54-3), patients with typical GERD do not need further evaluation and are treated empirically. Upper endoscopy is indicated to exclude mucosal injury in cases with atypical symptoms, symptoms unresponsive to acid suppression, or alarm factors. For heartburn >5 years in duration, especially in patients >50 years old, endoscopy is advocated to screen for Barrett’s metaplasia. The benefits and cost-effectiveness of this approach have not been validated in controlled studies. Ambulatory esophageal pH testing, using a catheter or a wireless capsule endoscopically attached to the esophageal wall, is considered for unexplained chest pain. High-resolution esophageal manometry is ordered when surgical treatment of GERD is considered. A low LES pressure predicts failure of drug therapy and provides a rationale to proceed to surgery. Poor esophageal body peristalsis raises concern about postoperative dysphagia and directs the choice of surgical technique. Nonacidic reflux may be detected by combined esophageal impedance-pH testing in medication-unresponsive patients. Upper endoscopy is recommended as the initial test in patients with unexplained dyspepsia who are >55 years old or who have alarm factors because of the purported elevated risks of malignancy and ulcer in these groups. However, endoscopic findings in unexplained dyspepsia include erosive esophagitis in 13%, peptic ulcer in 8%, and gastric or esophageal malignancy in only 0.3%. Management of patients <55 years old without alarm factors depends on the local prevalence of H. pylori infection. 
In regions with low H. pylori prevalence (<10%), a 4-week trial of an acid-suppressing medication such as a proton pump inhibitor (PPI) is recommended. If this fails, a “test and treat” approach is most commonly applied. H. pylori status is determined with urea breath testing, stool antigen measurement, or blood serology testing. Those who are H. pylori positive are given therapy to eradicate the infection. If symptoms resolve on either regimen, no further intervention is required. For patients in areas with high H. pylori prevalence (>10%), an initial test and treat approach is advocated, with a subsequent trial of an acid-suppressing regimen offered for those in whom H. pylori treatment fails or for those who are negative for the infection. In each of these patient subsets, upper endoscopy is reserved for those whose symptoms fail to respond to therapy. Further testing is indicated in some settings. If bleeding is noted, a blood count can exclude anemia. Thyroid chemistries or calcium levels screen for metabolic disease, whereas specific serologies may suggest celiac disease. Pancreatic and liver chemistries are obtained for possible pancreaticobiliary causes. Ultrasound, CT, or MRI is performed if abnormalities are found. Gastric emptying testing is considered to exclude gastroparesis for dyspeptic symptoms that resemble postprandial distress when drug therapy fails and in some GERD patients, especially if surgical intervention is an option. Breath testing after carbohydrate ingestion detects lactase deficiency, intolerance to other carbohydrates, or small-intestinal bacterial overgrowth. For mild indigestion, reassurance that a careful evaluation revealed no serious organic disease may be the only intervention needed. Drugs that cause gastroesophageal reflux or dyspepsia should be stopped, if possible. Patients with GERD should limit ethanol, caffeine, chocolate, and tobacco use due to their effects on the LES. 
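The triage of unexplained dyspepsia described above (endoscopy for older patients or those with alarm factors; otherwise, a strategy chosen by local H. pylori prevalence) can be sketched as a short decision function. The function name and return strings here are hypothetical illustrations of the text's logic, not a validated clinical algorithm or a substitute for clinical judgment.

```python
# Illustrative sketch of the dyspepsia triage logic described in the text.
# Names and return strings are hypothetical; not a clinical decision tool.

def dyspepsia_triage(age: int, alarm_factors: bool,
                     h_pylori_prevalence: float) -> str:
    """Suggest a first step for unexplained dyspepsia per the text's scheme."""
    if age > 55 or alarm_factors:
        # Purported elevated risk of malignancy and ulcer in these groups
        return "upper endoscopy"
    if h_pylori_prevalence < 0.10:
        # Low-prevalence region: empirical acid suppression first
        return "4-week PPI trial; if it fails, H. pylori test and treat"
    # High-prevalence region: test and treat first
    return "H. pylori test and treat; if it fails, trial of acid suppression"
```

In every branch short of endoscopy, the text reserves upper endoscopy for patients whose symptoms fail to respond to the chosen therapy, which the sketch leaves implicit.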
Other measures in GERD include ingesting a low-fat diet, avoiding snacks before bedtime, and elevating the head of the bed. Patients with functional dyspepsia also may be advised to reduce intake of fat, spicy foods, caffeine, and alcohol. Specific therapies for organic disease should be offered when possible. Surgery is appropriate for biliary colic, whereas diet changes are indicated for lactase deficiency or celiac disease. Peptic ulcers may be cured by specific medical regimens. However, because most indigestion is caused by GERD or functional dyspepsia, medications that reduce gastric acid, modulate motility, or blunt gastric sensitivity are used. Drugs that reduce or neutralize gastric acid are often prescribed for GERD. Histamine H2 antagonists like cimetidine, ranitidine, famotidine, and nizatidine are useful in mild to moderate GERD. For severe symptoms or for many cases of erosive or ulcerative esophagitis, PPIs such as omeprazole, lansoprazole, rabeprazole, pantoprazole, esomeprazole, or dexlansoprazole are needed. These drugs inhibit gastric H+, K+-ATPase and are more potent than H2 antagonists. Up to one-third of GERD patients do not respond to standard PPI doses; one-third of these patients have nonacidic reflux, whereas 10% have persistent acid-related disease. Furthermore, heartburn typically responds better to PPI therapy than regurgitation or atypical GERD symptoms. Some individuals respond to doubling of the PPI dose or adding an H2 antagonist at bedtime. Infrequent complications of long-term PPI therapy include infection, diarrhea (from Clostridium difficile infection or microscopic colitis), small-intestinal bacterial overgrowth, nutrient deficiency (vitamin B12, iron, calcium), hypomagnesemia, bone demineralization, interstitial nephritis, and impaired medication absorption (e.g., clopidogrel). Many patients started on a PPI can be stepped down to an H2 antagonist or be switched to an on-demand schedule. 
Acid-suppressing drugs are also effective in selected patients with functional dyspepsia. A meta-analysis of eight controlled trials calculated a risk ratio of 0.86, with a 95% confidence interval of 0.78–0.95, favoring PPI therapy over placebo. H2 antagonists also reportedly improve symptoms in functional dyspepsia; however, findings of trials of this drug class likely are influenced by inclusion of large numbers of GERD patients. Antacids are useful for short-term control of mild GERD but have less benefit in severe cases unless given at high doses that cause side effects (diarrhea and constipation with magnesium- and aluminum-containing agents, respectively). Alginic acid combined with antacids forms a floating barrier to reflux in patients with upright symptoms. Sucralfate, a salt of aluminum hydroxide and sucrose octasulfate that buffers acid and binds pepsin and bile salts, shows efficacy in GERD similar to H2 antagonists. H. pylori eradication is definitively indicated only for peptic ulcer and mucosa-associated lymphoid tissue gastric lymphoma. The utility of eradication therapy in functional dyspepsia is limited, although some cases (particularly with the epigastric pain syndrome subtype) relate to this infection. A meta-analysis of 18 controlled trials calculated a relative risk reduction of 10%, with a 95% confidence interval of 6–14%, favoring H. pylori eradication over placebo. Most drug combinations (Chaps. 188 and 348) include 10–14 days of a PPI or bismuth subsalicylate in concert with two antibiotics. H. pylori infection is associated with reduced prevalence of GERD, especially in the elderly. However, eradication of the infection does not worsen GERD symptoms. No consensus recommendations regarding H. pylori eradication in GERD patients have been offered. Prokinetics like metoclopramide, erythromycin, and domperidone have limited utility in GERD. 
The γ-aminobutyric acid B (GABA-B) agonist baclofen reduces esophageal exposure to acid and nonacidic fluids by reducing TLESRs by 40%; this drug is proposed for refractory acid and nonacid reflux. Several studies have promoted the efficacy of motor-stimulating drugs in functional dyspepsia, but publication bias and small sample sizes raise questions about reported benefits of these agents. Some clinicians suggest that patients with the postprandial distress subtype may respond preferentially to prokinetic drugs. The 5-HT1A agonist buspirone may improve some functional dyspepsia symptoms by enhancing meal-induced gastric accommodation. Acotiamide promotes gastric emptying and augments accommodation by enhancing gastric acetylcholine release via muscarinic receptor antagonism and acetylcholinesterase inhibition. This agent is approved for functional dyspepsia in Japan and is in testing elsewhere. Antireflux surgery (fundoplication) to increase LES pressure may be offered to GERD patients who are young and require lifelong therapy, have typical heartburn and regurgitation, are responsive to PPIs, and show evidence of acid reflux on pH monitoring. Surgery also is effective for some cases of nonacidic reflux. Individuals who respond less well to fundoplication include those with atypical symptoms or who have esophageal body motor disturbances. Dysphagia, gas-bloat syndrome, and gastroparesis are long-term complications of these procedures; ∼60% develop recurrent GERD symptoms over time. Endoscopic therapies to enhance gastroesophageal barrier function (radiofrequency ablation, transoral incisionless fundoplication) have not shown proven durable benefit in refractory GERD. Some patients with functional heartburn and functional dyspepsia refractory to standard therapies may respond to antidepressants of the tricyclic and selective serotonin reuptake inhibitor classes, although studies are limited. 
Their mechanism of action may involve blunting of visceral pain processing in the brain. Gas and bloating are among the most troubling symptoms in some patients with indigestion and can be difficult to treat. Dietary exclusion of gas-producing foods such as legumes and use of simethicone or activated charcoal provide benefits in some cases. Low-FODMAP (fermentable oligosaccharide, disaccharide, monosaccharide, and polyol) diets and therapies to modify gut flora (nonabsorbable antibiotics, probiotics) reduce gaseous symptoms in some IBS patients. The utility of low-FODMAP diets, antibiotics, and probiotics in functional dyspepsia is unproven. Herbal remedies such as STW 5 (Iberogast, a mixture of nine herbal agents) are useful in some dyspeptic patients. Psychological treatments (e.g., behavioral therapy, psychotherapy, hypnotherapy) may be offered for refractory functional dyspepsia, but no convincing data confirm their efficacy. 

CHAPTER 54 Nausea, Vomiting, and Indigestion 

Chapter 55 Diarrhea and Constipation 
Michael Camilleri, Joseph A. Murray 

Diarrhea and constipation are exceedingly common and, together, exact an enormous toll in terms of mortality, morbidity, social inconvenience, loss of work productivity, and consumption of medical resources. Worldwide, >1 billion individuals suffer one or more episodes of acute diarrhea each year. Among the 100 million persons affected annually by acute diarrhea in the United States, nearly half must restrict activities, 10% consult physicians, ∼250,000 require hospitalization, and ∼5000 die (primarily the elderly). The annual economic burden to society may exceed $20 billion. Acute infectious diarrhea remains one of the most common causes of mortality in developing countries, particularly among impoverished infants, accounting for 1.8 million deaths per year. Recurrent, acute diarrhea in children in tropical countries results in environmental enteropathy with long-term impacts on physical and intellectual development. 
Constipation, by contrast, is rarely associated with mortality and is exceedingly common in developed countries, leading to frequent self-medication and, in a third of those affected, to medical consultation. Population statistics on chronic diarrhea and constipation are more uncertain, perhaps due to variable definitions and reporting, but the frequency of these conditions is also high. United States population surveys put prevalence rates for chronic diarrhea at 2–7% and for chronic constipation at 12–19%, with women being affected twice as often as men. Diarrhea and constipation are among the most common patient complaints presenting to internists and primary care physicians, and they account for nearly 50% of referrals to gastroenterologists. Although diarrhea and constipation may present as mere nuisance symptoms at one extreme, they can be severe or life-threatening at the other. Even mild symptoms may signal a serious underlying gastrointestinal lesion, such as colorectal cancer, or systemic disorder, such as thyroid disease. Given the heterogeneous causes and potential severity of these common complaints, it is imperative for clinicians to appreciate the pathophysiology, etiologic classification, diagnostic strategies, and principles of management of diarrhea and constipation, so that rational and cost-effective care can be delivered. While the primary function of the small intestine is the digestion and assimilation of nutrients from food, the small intestine and colon together perform important functions that regulate the secretion and absorption of water and electrolytes, the storage and subsequent aboral transport of intraluminal contents, and the salvage of some nutrients that are not absorbed in the small intestine, where bacterial metabolism of carbohydrate yields salvageable short-chain fatty acids. The main motor functions are summarized in Table 55-1. Alterations in fluid and electrolyte handling contribute significantly to diarrhea. 
Alterations in motor and sensory functions of the colon result in highly prevalent syndromes such as irritable bowel syndrome (IBS), chronic diarrhea, and chronic constipation. 

TABLE 55-1 Normal Gastrointestinal Motor Functions 
Stomach and small bowel: accommodation, trituration, mixing, transit (stomach ∼3 h; small bowel ∼3 h) 
Colon: irregular mixing, fermentation, absorption, transit; ascending and transverse colon: reservoirs; descending colon: conduit; sigmoid/rectum: volitional reservoir 
Abbreviation: MMC, migrating motor complex. 

PART 2 Cardinal Manifestations and Presentation of Diseases 

The small intestine and colon have intrinsic and extrinsic innervation. The intrinsic innervation, also called the enteric nervous system, comprises myenteric, submucosal, and mucosal neuronal layers. The function of these layers is modulated by interneurons through the actions of neurotransmitter amines or peptides, including acetylcholine, vasoactive intestinal peptide (VIP), opioids, norepinephrine, serotonin, adenosine triphosphate (ATP), and nitric oxide (NO). The myenteric plexus regulates smooth-muscle function through intermediary pacemaker-like cells called the interstitial cells of Cajal, and the submucosal plexus affects secretion, absorption, and mucosal blood flow. The enteric nervous system receives input from the extrinsic nerves, but it is capable of independent control of these functions. The extrinsic innervations of the small intestine and colon are part of the autonomic nervous system and also modulate motor and secretory functions. The parasympathetic nerves convey visceral sensory pathways from and excitatory pathways to the small intestine and colon. Parasympathetic fibers via the vagus nerve reach the small intestine and proximal colon along the branches of the superior mesenteric artery. The distal colon is supplied by sacral parasympathetic nerves (S2–4) via the pelvic plexus; these fibers course through the wall of the colon as ascending intracolonic fibers as far as, and in some instances including, the proximal colon. 
The chief excitatory neurotransmitters controlling motor function are acetylcholine and the tachykinins, such as substance P. The sympathetic nerve supply modulates motor functions and reaches the small intestine and colon alongside their arterial vessels. Sympathetic input to the gut is generally excitatory to sphincters and inhibitory to non-sphincteric muscle. Visceral afferents convey sensation from the gut to the central nervous system (CNS); initially, they course along sympathetic fibers, but as they approach the spinal cord they separate, have cell bodies in the dorsal root ganglion, and enter the dorsal horn of the spinal cord. Afferent signals are conveyed to the brain along the lateral spinothalamic tract and the nociceptive dorsal column pathway and are then projected beyond the thalamus and brainstem to the insula and cerebral cortex to be perceived. Other afferent fibers synapse in the prevertebral ganglia and reflexly modulate intestinal motility, blood flow, and secretion. On an average day, 9 L of fluid enter the gastrointestinal (GI) tract, ∼1 L of residual fluid reaches the colon, and the stool excretion of fluid constitutes about 0.2 L/d. The colon has a large capacitance and functional reserve and may recover up to four times its usual volume of 0.8 L/d, provided the rate of flow permits reabsorption to occur. Thus, the colon can partially compensate for excess fluid delivery to the colon that may result from intestinal absorptive or secretory disorders. In the small intestine and colon, sodium absorption is predominantly electrogenic (i.e., it can be measured as an ionic current across the membrane because there is not an equivalent loss of a cation from the cell), and uptake takes place at the apical membrane; it is compensated for by the export functions of the basolateral sodium pump. 
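The daily fluid volumes quoted above (∼9 L entering the GI tract, ∼1 L reaching the colon, ∼0.2 L excreted in stool, and a colonic reserve of up to four times the usual 0.8 L/d absorbed) are simple arithmetic. A minimal sketch, using only the figures stated in the text and a hypothetical helper name, illustrates how colonic reserve capacity determines whether excess delivery to the colon produces diarrheal stool volumes (real absorption also depends on flow rate and mucosal integrity, which this deliberately ignores):

```python
# Daily GI fluid balance, volumes in liters per day (figures from the text).
GI_INFLOW = 9.0            # fluid entering the GI tract each day
ILEOCECAL_DELIVERY = 1.0   # residual fluid normally reaching the colon
STOOL_OUTPUT = 0.2         # normal fecal water excretion

NORMAL_COLONIC_ABSORPTION = ILEOCECAL_DELIVERY - STOOL_OUTPUT  # 0.8 L/d
MAX_COLONIC_ABSORPTION = 4 * NORMAL_COLONIC_ABSORPTION         # reserve, ~3.2 L/d

def predicted_stool_water(colonic_inflow_l_per_day: float) -> float:
    """Stool water output for a given daily delivery to the colon,
    assuming the colon absorbs up to its reserve capacity (a gross
    simplification for illustration only)."""
    absorbed = min(colonic_inflow_l_per_day, MAX_COLONIC_ABSORPTION)
    return max(colonic_inflow_l_per_day - absorbed, STOOL_OUTPUT)

# Normal delivery (1 L/d) is fully handled; a secretory load of 5 L/d
# exceeds the ~3.2 L/d reserve and the excess appears in stool.
print(round(predicted_stool_water(1.0), 2))
print(round(predicted_stool_water(5.0), 2))
```

This mirrors the text's point that the colon can only partially compensate for intestinal absorptive or secretory disorders: once delivery exceeds the reserve, every additional liter appears as diarrheal output.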
There are several active transport proteins at the apical membrane, especially in the small intestine, whereby sodium ion entry is coupled to monosaccharides (e.g., glucose through the transporter SGLT1, or fructose through GLUT-5). Glucose then exits the basal membrane through a specific transport protein, GLUT-2, creating a glucose concentration gradient between the lumen and the intercellular space, drawing water and electrolytes passively from the lumen. A variety of neural and nonneural mediators regulate colonic fluid and electrolyte balance, including cholinergic, adrenergic, and serotonergic mediators. Angiotensin and aldosterone also influence colonic absorption, reflecting the common embryologic development of the distal colonic epithelium and the renal tubules. During the fasting period, the motility of the small intestine is characterized by a cyclical event called the migrating motor complex (MMC), which serves to clear nondigestible residue from the small intestine (the intestinal “housekeeper”). This organized, propagated series of contractions lasts, on average, 4 min, occurs every 60–90 min, and usually involves the entire small intestine. After food ingestion, the small intestine produces irregular, mixing contractions of relatively low amplitude, except in the distal ileum where more powerful contractions occur intermittently and empty the ileum by bolus transfers. The distal ileum acts as a reservoir, emptying intermittently by bolus movements. This action allows time for salvage of fluids, electrolytes, and nutrients. Segmentation by haustra compartmentalizes the colon and facilitates mixing, retention of residue, and formation of solid stools. There is increased appreciation of the intimate interaction between the colonic function and the luminal ecology. 
The resident microorganisms, predominantly anaerobic bacteria, in the colon are necessary for the digestion of unabsorbed carbohydrates that reach the colon even in health, thereby providing a vital source of nutrients to the mucosa. Normal colonic flora also keeps pathogens at bay by a variety of mechanisms. In health, the ascending and transverse regions of colon function as reservoirs (average transit time, 15 h), and the descending colon acts as a conduit (average transit time, 3 h). The colon is efficient at conserving sodium and water, a function that is particularly important in sodium-depleted patients in whom the small intestine alone is unable to maintain sodium balance. Diarrhea or constipation may result from alteration in the reservoir function of the proximal colon or the propulsive function of the left colon. Constipation may also result from disturbances of the rectal or sigmoid reservoir, typically as a result of dysfunction of the pelvic floor, the anal sphincters, the coordination of defecation, or dehydration. The small intestinal MMC only rarely continues into the colon. However, short duration or phasic contractions mix colonic contents, and high-amplitude (>75 mmHg) propagated contractions (HAPCs) are sometimes associated with mass movements through the colon and normally occur approximately five times per day, usually on awakening in the morning and postprandially. Increased frequency of HAPCs may result in diarrhea or urgency. The predominant phasic contractions in the colon are irregular and non-propagated and serve a “mixing” function. Colonic tone refers to the background contractility upon which phasic contractile activity (typically contractions lasting <15 s) is superimposed. It is an important cofactor in the colon’s capacitance (volume accommodation) and sensation. After meal ingestion, colonic phasic and tonic contractility increase for a period of ∼2 h. 
The initial phase (∼10 min) is mediated by the vagus nerve in response to mechanical distention of the stomach. The subsequent response of the colon requires caloric stimulation (e.g., intake of at least 500 kcal) and is mediated, at least in part, by hormones (e.g., gastrin and serotonin). Tonic contraction of the puborectalis muscle, which forms a sling around the rectoanal junction, is important to maintain continence; during defecation, sacral parasympathetic nerves relax this muscle, facilitating the straightening of the rectoanal angle (Fig. 55-1). Distention of the rectum results in transient relaxation of the internal anal sphincter via intrinsic and reflex sympathetic innervation. As sigmoid and rectal contractions, as well as straining (Valsalva maneuver), which increases intraabdominal pressure, increase the pressure within the rectum, the rectosigmoid angle opens by >15°. Voluntary relaxation of the external anal sphincter (striated muscle innervated by the pudendal nerve) in response to the sensation produced by distention permits the evacuation of feces. Defecation can also be delayed voluntarily by contraction of the external anal sphincter. 

FIGURE 55-1 Sagittal view of the anorectum (A) at rest and (B) during straining to defecate. Continence is maintained by normal rectal sensation and tonic contraction of the internal anal sphincter and the puborectalis muscle, which wraps around the anorectum, maintaining an anorectal angle between 80° and 110°. During defecation, the pelvic floor muscles (including the puborectalis) relax, allowing the anorectal angle to straighten by at least 15°, and the perineum descends by 1–3.5 cm. The external anal sphincter also relaxes and reduces pressure on the anal canal. (Reproduced with permission from A Lembo, M Camilleri: N Engl J Med 349:1360, 2003.) 

Diarrhea is loosely defined as passage of abnormally liquid or unformed stools at an increased frequency. For adults on a typical Western diet, stool weight >200 g/d can generally be considered diarrheal. Diarrhea may be further defined as acute if <2 weeks, persistent if 2–4 weeks, and chronic if >4 weeks in duration. Two common conditions, usually associated with the passage of stool totaling <200 g/d, must be distinguished from diarrhea, because diagnostic and therapeutic algorithms differ. Pseudodiarrhea, or the frequent passage of small volumes of stool, is often associated with rectal urgency, tenesmus, or a feeling of incomplete evacuation, and accompanies IBS or proctitis. Fecal incontinence is the involuntary discharge of rectal contents and is most often caused by neuromuscular disorders or structural anorectal problems. Diarrhea and urgency, especially if severe, may aggravate or cause incontinence. Pseudodiarrhea and fecal incontinence occur at prevalence rates comparable to or higher than that of chronic diarrhea and should always be considered in patients complaining of “diarrhea.” Overflow diarrhea may occur in nursing home patients due to fecal impaction that is readily detectable by rectal examination. A careful history and physical examination generally allow these conditions to be discriminated from true diarrhea. 

More than 90% of cases of acute diarrhea are caused by infectious agents; these cases are often accompanied by vomiting, fever, and abdominal pain. The remaining 10% or so are caused by medications, toxic ingestions, ischemia, food indiscretions, and other conditions. 

Infectious Agents Most infectious diarrheas are acquired by fecal-oral transmission or, more commonly, via ingestion of food or water contaminated with pathogens from human or animal feces. In the immunocompetent person, the resident fecal microflora, containing >500 taxonomically distinct species, are rarely the source of diarrhea and may actually play a role in suppressing the growth of ingested pathogens. Disturbances of flora by antibiotics can lead to diarrhea by reducing the digestive function or by allowing the overgrowth of pathogens, such as Clostridium difficile (Chap. 161). Acute infection or injury occurs when the ingested agent overwhelms or bypasses the host’s mucosal immune and nonimmune (gastric acid, digestive enzymes, mucus secretion, peristalsis, and suppressive resident flora) defenses. Established clinical associations with specific enteropathogens may offer diagnostic clues. In the United States, five high-risk groups are recognized: 

1. Travelers. Nearly 40% of tourists to endemic regions of Latin America, Africa, and Asia develop so-called traveler’s diarrhea, most commonly due to enterotoxigenic or enteroaggregative Escherichia coli as well as to Campylobacter, Shigella, Aeromonas, norovirus, coronavirus, and Salmonella. Visitors to Russia (especially St. Petersburg) may have increased risk of Giardia-associated diarrhea; visitors to Nepal may acquire Cyclospora. Campers, backpackers, and swimmers in wilderness areas may become infected with Giardia. Cruise ships may be affected by outbreaks of gastroenteritis caused by agents such as norovirus. 

2. Consumers of certain foods. Diarrhea closely following food consumption at a picnic, banquet, or restaurant may suggest infection with Salmonella, Campylobacter, or Shigella from chicken; enterohemorrhagic E. coli (O157:H7) from undercooked hamburger; Bacillus cereus from fried rice or other reheated food; Staphylococcus aureus or Salmonella from mayonnaise or creams; Salmonella from eggs; Listeria from uncooked foods or soft cheeses; and Vibrio species, Salmonella, or acute hepatitis A from seafood, especially if raw. 
State departments of public health issue communications regarding food-related illnesses, which may have originated domestically or been imported, but ultimately cause epidemics in the United States (e.g., the Cyclospora epidemic of 2013 in midwestern states that resulted from bagged salads). 3. Immunodeficient persons. Individuals at risk for diarrhea include those with either primary immunodeficiency (e.g., IgA deficiency, common variable hypogammaglobulinemia, chronic granulomatous disease) or the much more common secondary immunodeficiency states (e.g., AIDS, senescence, pharmacologic suppression). Common enteric pathogens often cause a more severe and protracted diarrheal illness, and, particularly in persons with AIDS, opportunistic infections, such as by Mycobacterium species, certain viruses (cytomegalovirus, adenovirus, and herpes simplex), and protozoa (Cryptosporidium, Isospora belli, Microsporida, and Blastocystis hominis) may also play a role (Chap. 226). In patients with AIDS, agents transmitted venereally per rectum (e.g., Neisseria gonorrhoeae, Treponema pallidum, Chlamydia) may contribute to proctocolitis. Persons with hemochromatosis are especially prone to invasive, even fatal, enteric infections with Vibrio and Yersinia species and should avoid raw fish. 4. Daycare attendees and their family members. Infections with Shigella, Giardia, Cryptosporidium, rotavirus, and other agents are very common and should be considered. 5. Institutionalized persons. Infectious diarrhea is one of the most frequent categories of nosocomial infections in many hospitals and long-term care facilities; the causes are a variety of microorganisms but most commonly C. difficile. C. difficile can affect those with no history of antibiotic use and may be acquired in the community. 
The pathophysiology underlying acute diarrhea by infectious agents produces specific clinical features that may also be helpful in diagnosis (Table 55-2). Profuse, watery diarrhea secondary to small-bowel hypersecretion occurs with ingestion of preformed bacterial toxins, enterotoxin-producing bacteria, and enteroadherent pathogens. Diarrhea associated with marked vomiting and minimal or no fever may occur abruptly within a few hours after ingestion of the former two types; vomiting is usually less, abdominal cramping or bloating is greater, and fever is higher with the latter. Cytotoxin-producing and invasive microorganisms all cause high fever and abdominal pain. Invasive bacteria and Entamoeba histolytica often cause bloody diarrhea (referred to as dysentery). Yersinia invades the terminal ileal and proximal colon mucosa and may cause especially severe abdominal pain with tenderness mimicking acute appendicitis. 

TABLE 55-2 Source: Adapted from DW Powell, in T Yamada (ed): Textbook of Gastroenterology and Hepatology, 4th ed. Philadelphia, Lippincott Williams & Wilkins, 2003. 

Finally, infectious diarrhea may be associated with systemic manifestations. Reactive arthritis (formerly known as Reiter’s syndrome), comprising arthritis, urethritis, and conjunctivitis, may accompany or follow infections by Salmonella, Campylobacter, Shigella, and Yersinia. Yersiniosis may also lead to an autoimmune-type thyroiditis, pericarditis, and glomerulonephritis. Both enterohemorrhagic E. coli (O157:H7) and Shigella can lead to the hemolytic-uremic syndrome with an attendant high mortality rate. The syndrome of postinfectious IBS has now been recognized as a complication of infectious diarrhea. Similarly, acute gastroenteritis may precede the diagnosis of celiac disease or Crohn’s disease. 
Acute diarrhea can also be a major symptom of several systemic infections including viral hepatitis, listeriosis, legionellosis, and toxic shock syndrome. Other Causes Side effects from medications are probably the most common noninfectious causes of acute diarrhea, and etiology may be suggested by a temporal association between use and symptom onset. Although innumerable medications may produce diarrhea, some of the more frequently incriminated include antibiotics, cardiac antidysrhythmics, antihypertensives, nonsteroidal anti-inflammatory drugs (NSAIDs), certain antidepressants, chemotherapeutic agents, bronchodilators, antacids, and laxatives. Occlusive or nonocclusive ischemic colitis typically occurs in persons >50 years; often presents as acute lower abdominal pain preceding watery, then bloody diarrhea; and generally results in acute inflammatory changes in the sigmoid or left colon while sparing the rectum. Acute diarrhea may accompany colonic diverticulitis and graft-versus-host disease. Acute diarrhea, often associated with systemic compromise, can follow ingestion of toxins including organophosphate insecticides; amanita and other mushrooms; arsenic; and preformed environmental toxins in seafood, such as ciguatera and scombroid. Acute anaphylaxis to food ingestion can have a similar presentation. Conditions causing chronic diarrhea can also be confused with acute diarrhea early in their course. This confusion may occur with inflammatory bowel disease (IBD) and some of the other inflammatory chronic diarrheas that may have an abrupt rather than insidious onset and exhibit features that mimic infection. APPROACH TO THE PATIENT: Acute Diarrhea The decision to evaluate acute diarrhea depends on its severity and duration and on various host factors (Fig. 55-2). Most episodes of acute diarrhea are mild and self-limited and do not justify the cost and potential morbidity of diagnostic or pharmacologic interventions. 
Indications for evaluation include profuse diarrhea with dehydration, grossly bloody stools, fever ≥38.5°C (≥101°F), duration >48 h without improvement, recent antibiotic use, new community outbreaks, associated severe abdominal pain in patients >50 years, and elderly (≥70 years) or immunocompromised patients. In some cases of moderately severe febrile diarrhea associated with fecal leukocytes (or increased fecal levels of leukocyte proteins, such as calprotectin) or with gross blood, a diagnostic evaluation might be avoided in favor of an empirical antibiotic trial (see below). The cornerstone of diagnosis in those suspected of severe acute infectious diarrhea is microbiologic analysis of the stool. Workup includes cultures for bacterial and viral pathogens, direct inspection for ova and parasites, and immunoassays for certain bacterial toxins (C. difficile), viral antigens (rotavirus), and protozoal antigens (Giardia, E. histolytica). The aforementioned clinical and epidemiologic associations may assist in focusing the evaluation. If a particular pathogen or set of possible pathogens is implicated, then either the whole panel of routine studies may not be necessary or, in some instances, special cultures may be appropriate, as for enterohemorrhagic and other types of E. coli, Vibrio species, and Yersinia. Molecular diagnosis of pathogens in stool can be made by identification of unique DNA sequences, and evolving microarray technologies have led to more rapid, sensitive, specific, and cost-effective diagnosis. 
FIGURE 55-2 Algorithm for the management of acute diarrhea. In brief: mild illness (unrestricted activities) is observed; moderate illness (activities altered) warrants fluid and electrolyte replacement and antidiarrheal agents; illness that persists, is severe (incapacitating), or is accompanied by fever ≥38.5°C, bloody stools, fecal WBCs, or an immunocompromised or elderly host prompts stool microbiology studies, with specific treatment if a pathogen is found and empirical treatment plus further evaluation otherwise. Consider empirical treatment before evaluation with (*) metronidazole and with (†) quinolone. WBCs, white blood cells. 

Persistent diarrhea is commonly due to Giardia (Chap. 247), but additional causative organisms that should be considered include C. difficile (especially if antibiotics had been administered), E. histolytica, Cryptosporidium, Campylobacter, and others. If stool studies are unrevealing, flexible sigmoidoscopy with biopsies and upper endoscopy with duodenal aspirates and biopsies may be indicated. Brainerd diarrhea is an increasingly recognized entity characterized by an abrupt-onset diarrhea that persists for at least 4 weeks, but may last 1–3 years, and is thought to be of infectious origin. It may be associated with subtle inflammation of the distal small intestine or proximal colon. Structural examination by sigmoidoscopy, colonoscopy, or abdominal computed tomography (CT) scanning (or other imaging approaches) may be appropriate in patients with uncharacterized persistent diarrhea to exclude IBD or as an initial approach in patients with suspected noninfectious acute diarrhea such as might be caused by ischemic colitis, diverticulitis, or partial bowel obstruction. Fluid and electrolyte replacement are of central importance to all forms of acute diarrhea. Fluid replacement alone may suffice for mild cases. 
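The severity-based branching of Fig. 55-2 can be sketched as a simple triage function. This is an illustrative rendering only; the severity labels follow the figure, but the function name, parameters, and returned strings are hypothetical, not part of any clinical standard:

```python
def triage_acute_diarrhea(severity: str,
                          fever_38_5: bool = False,
                          bloody_stools: bool = False,
                          fecal_wbcs: bool = False,
                          high_risk_host: bool = False) -> str:
    """Sketch of the Fig. 55-2 pathway: mild illness is observed; moderate
    illness receives fluids and antidiarrheal agents; severe illness or any
    red flag proceeds to stool microbiology (and possible empirical therapy)."""
    red_flags = fever_38_5 or bloody_stools or fecal_wbcs or high_risk_host
    if severity == "mild" and not red_flags:
        return "observe; evaluate only if symptoms persist"
    if severity == "moderate" and not red_flags:
        return "fluid/electrolyte replacement + antidiarrheal agents"
    # Severe or red-flag illness: investigate, and consider empirical treatment.
    return "fluid/electrolyte replacement + stool microbiology studies"

print(triage_acute_diarrhea("mild"))
print(triage_acute_diarrhea("severe", bloody_stools=True))
```

Note that a red flag overrides the severity label, mirroring the figure's point that fever, blood, fecal WBCs, or a vulnerable host justifies investigation even when the illness otherwise seems mild.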
Oral sugar-electrolyte solutions (iso-osmolar sport drinks or designed formulations) should be instituted promptly with severe diarrhea to limit dehydration, which is the major cause of death. Profoundly dehydrated patients, especially infants and the elderly, require IV rehydration. In moderately severe nonfebrile and nonbloody diarrhea, anti-motility and antisecretory agents such as loperamide can be useful adjuncts to control symptoms. Such agents should be avoided with febrile dysentery, which may be exacerbated or prolonged by them. Bismuth subsalicylate may reduce symptoms of vomiting and diarrhea but should not be used to treat immunocompromised patients or those with renal impairment because of the risk of bismuth encephalopathy. Judicious use of antibiotics is appropriate in selected instances of acute diarrhea and may reduce its severity and duration (Fig. 55-2). Many physicians treat moderately to severely ill patients with febrile dysentery empirically without diagnostic evaluation using a quinolone, such as ciprofloxacin (500 mg bid for 3–5 d). Empirical treatment can also be considered for suspected giardiasis with metronidazole (250 mg qid for 7 d). Selection of antibiotics and dosage regimens are otherwise dictated by specific pathogens, geographic patterns of resistance, and conditions found (Chaps. 160, 186, and 190–196). Antibiotic coverage is indicated, whether or not a causative organism is discovered, in patients who are immunocompromised, have mechanical heart valves or recent vascular grafts, or are elderly. Bismuth subsalicylate may reduce the frequency of traveler’s diarrhea. Antibiotic prophylaxis is only indicated for certain patients traveling to high-risk countries in whom the likelihood or seriousness of acquired diarrhea would be especially high, including those with immunocompromise, IBD, hemochromatosis, or gastric achlorhydria. 
Prophylactic use of ciprofloxacin, azithromycin, or rifaximin may reduce bacterial diarrhea in such travelers by 90%, although rifaximin is suitable only as treatment for uncomplicated traveler's diarrhea, not for invasive disease. Finally, physicians should be vigilant in identifying whether an outbreak of diarrheal illness is occurring and should alert the public health authorities promptly. This may reduce the ultimate size of the affected population.

PART 2 Cardinal Manifestations and Presentation of Diseases

Diarrhea lasting >4 weeks warrants evaluation to exclude serious underlying pathology. In contrast to acute diarrhea, most of the causes of chronic diarrhea are noninfectious. The classification of chronic diarrhea by pathophysiologic mechanism facilitates a rational approach to management, although many diseases cause diarrhea by more than one mechanism (Table 55-3).

Secretory Causes Secretory diarrheas are due to derangements in fluid and electrolyte transport across the enterocolonic mucosa. They are characterized clinically by watery, large-volume fecal outputs that are typically painless and persist with fasting. Because there is no malabsorbed solute, stool osmolality is accounted for by normal endogenous electrolytes, with no fecal osmotic gap.

MEDICATIONS Side effects from regular ingestion of drugs and toxins are the most common secretory causes of chronic diarrhea. Hundreds of prescription and over-the-counter medications (see earlier section, "Acute Diarrhea, Other Causes") may produce diarrhea. Surreptitious or habitual use of stimulant laxatives (e.g., senna, cascara, bisacodyl, ricinoleic acid [castor oil]) must also be considered. Chronic ethanol consumption may cause a secretory-type diarrhea due to enterocyte injury with impaired sodium and water absorption as well as rapid transit and other alterations. Inadvertent ingestion of certain environmental toxins (e.g., arsenic) may lead to chronic rather than acute forms of diarrhea.
Certain bacterial infections may occasionally persist and be associated with a secretory-type diarrhea.

BOWEL RESECTION, MUCOSAL DISEASE, OR ENTEROCOLIC FISTULA These conditions may result in a secretory-type diarrhea because of inadequate surface for reabsorption of secreted fluids and electrolytes. Unlike other secretory diarrheas, this subset of conditions tends to worsen with eating. With disease (e.g., Crohn's ileitis) or resection of <100 cm of terminal ileum, dihydroxy bile acids may escape absorption and stimulate colonic secretion (cholerheic diarrhea). This mechanism may contribute to so-called idiopathic secretory diarrhea or bile acid diarrhea (BAD), in which bile acids are functionally malabsorbed from a normal-appearing terminal ileum. This idiopathic bile acid malabsorption (BAM) may account for an average of 40% of unexplained chronic diarrhea. Reduced negative feedback regulation of bile acid synthesis in hepatocytes by fibroblast growth factor 19 (FGF-19), produced by ileal enterocytes, results in a degree of bile acid synthesis that exceeds the normal capacity for ileal reabsorption, producing BAD. An alternative cause of BAD is a genetic variation in the receptor proteins (β-klotho and fibroblast growth factor receptor 4) on the hepatocyte that normally mediate the effect of FGF-19; dysfunction of these proteins prevents FGF-19 inhibition of hepatocyte bile acid synthesis. Partial bowel obstruction, ostomy stricture, or fecal impaction may paradoxically lead to increased fecal output due to fluid hypersecretion.

TABLE 55-3 Causes of Chronic Diarrhea by Pathophysiologic Mechanism
Secretory causes: exogenous stimulant laxatives; chronic ethanol ingestion; other drugs and toxins; endogenous laxatives (dihydroxy bile acids); idiopathic secretory diarrhea or bile acid diarrhea; certain bacterial infections; bowel resection, disease, or fistula (↓ absorption); partial bowel obstruction or fecal impaction; hormone-producing tumors (carcinoid, VIPoma, medullary cancer of the thyroid, mastocytosis, gastrinoma, colorectal villous adenoma); Addison's disease; congenital electrolyte absorption defects
Osmotic causes: osmotic laxatives (Mg2+, PO4−3, SO4−2); lactase and other disaccharidase deficiencies; nonabsorbable carbohydrates (sorbitol, lactulose, polyethylene glycol); gluten and FODMAP intolerance
Steatorrheal causes: intraluminal maldigestion (pancreatic exocrine insufficiency, bacterial overgrowth, bariatric surgery, liver disease); mucosal malabsorption (celiac sprue, Whipple's disease, infections, abetalipoproteinemia, ischemia, drug-induced enteropathy)
Inflammatory causes: idiopathic inflammatory bowel disease (Crohn's disease, chronic ulcerative colitis); lymphocytic and collagenous colitis; immune-related mucosal disease (1° or 2° immunodeficiencies, food allergy, eosinophilic gastroenteritis, graft-versus-host disease); infections (invasive bacteria, viruses, and parasites; Brainerd diarrhea); radiation injury; gastrointestinal malignancies
Dysmotility causes: vagotomy, fundoplication
Abbreviation: FODMAP, fermentable oligosaccharides, disaccharides, monosaccharides, and polyols.

HORMONES Although uncommon, the classic examples of secretory diarrhea are those mediated by hormones. Metastatic gastrointestinal carcinoid tumors or, rarely, primary bronchial carcinoids may produce watery diarrhea alone or as part of the carcinoid syndrome that comprises episodic flushing, wheezing, dyspnea, and right-sided valvular heart disease. Diarrhea is due to the release into the circulation of potent intestinal secretagogues, including serotonin, histamine, prostaglandins, and various kinins. Pellagra-like skin lesions may rarely occur as the result of serotonin overproduction with niacin depletion. Gastrinoma, one of the most common neuroendocrine tumors, most typically presents with refractory peptic ulcers, but diarrhea occurs in up to one-third of cases and may be the only clinical manifestation in 10%. While other secretagogues released with gastrin may play a role, the diarrhea most often results from fat maldigestion owing to pancreatic enzyme inactivation by low intraduodenal pH.
The watery diarrhea-hypokalemia-achlorhydria syndrome, also called pancreatic cholera, is due to a non-β cell pancreatic adenoma, referred to as a VIPoma, that secretes VIP and a host of other peptide hormones, including pancreatic polypeptide, secretin, gastrin, gastric inhibitory polypeptide (also called glucose-dependent insulinotropic peptide), neurotensin, calcitonin, and prostaglandins. The secretory diarrhea is often massive, with stool volumes >3 L/d; daily volumes as high as 20 L have been reported. Life-threatening dehydration; neuromuscular dysfunction from associated hypokalemia, hypomagnesemia, or hypercalcemia; flushing; and hyperglycemia may accompany a VIPoma. Medullary carcinoma of the thyroid may present with watery diarrhea caused by calcitonin, other secretory peptides, or prostaglandins. Prominent diarrhea is often associated with metastatic disease and poor prognosis. Systemic mastocytosis, which may be associated with the skin lesion urticaria pigmentosa, may cause diarrhea that is either secretory and mediated by histamine or inflammatory due to intestinal infiltration by mast cells. Large colorectal villous adenomas may rarely be associated with a secretory diarrhea that may cause hypokalemia, can be inhibited by NSAIDs, and is apparently mediated by prostaglandins.

CONGENITAL DEFECTS IN ION ABSORPTION Rarely, defects in specific carriers associated with ion absorption cause watery diarrhea from birth. These disorders include defective Cl−/HCO3− exchange (congenital chloridorrhea) with alkalosis, which results from a mutated DRA (down-regulated in adenoma) gene, and defective Na+/H+ exchange (congenital sodium diarrhea), which results from a mutation in the NHE3 (sodium-hydrogen exchanger) gene and results in acidosis. Some hormone deficiencies may be associated with watery diarrhea, such as occurs with adrenocortical insufficiency (Addison's disease), which may be accompanied by skin hyperpigmentation.
Osmotic Causes Osmotic diarrhea occurs when ingested, poorly absorbable, osmotically active solutes draw enough fluid into the lumen to exceed the reabsorptive capacity of the colon. Fecal water output increases in proportion to such a solute load. Osmotic diarrhea characteristically ceases with fasting or with discontinuation of the causative agent.

OSMOTIC LAXATIVES Ingestion of magnesium-containing antacids, health supplements, or laxatives may induce osmotic diarrhea typified by a stool osmotic gap (>50 mosmol/L), calculated as serum osmolarity (typically 290 mosmol/kg) − 2 × (fecal sodium + potassium concentrations). Measurement of fecal osmolarity is no longer recommended because, even when measured immediately after evacuation, it may be erroneous: carbohydrates are metabolized by colonic bacteria, causing an increase in osmolarity.

CARBOHYDRATE MALABSORPTION Carbohydrate malabsorption due to acquired or congenital defects in brush-border disaccharidases and other enzymes leads to osmotic diarrhea with a low pH. One of the most common causes of chronic diarrhea in adults is lactase deficiency, which affects three-fourths of nonwhites worldwide and 5–30% of persons in the United States; the total lactose load at any one time influences the symptoms experienced. Most patients learn to avoid milk products without requiring treatment with enzyme supplements. Some sugars, such as sorbitol, lactulose, or fructose, are frequently malabsorbed, and diarrhea ensues with ingestion of medications, gum, or candies sweetened with these poorly or incompletely absorbed sugars.

WHEAT AND FODMAP INTOLERANCE Chronic diarrhea, bloating, and abdominal pain are recognized as symptoms of nonceliac gluten intolerance (which is associated with impaired intestinal or colonic barrier function) and of intolerance of fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs). The effects of FODMAPs reflect the interaction between these nutrients and the GI microbiome.
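The gap arithmetic above can be made concrete with a short sketch; the function name and the example electrolyte values are hypothetical illustrations, not reference values.

```python
def fecal_osmotic_gap(stool_na: float, stool_k: float,
                      assumed_osmolality: float = 290.0) -> float:
    """Stool osmotic gap (mosmol/L).

    290 mosmol/kg approximates serum (and hence luminal) osmolality;
    doubling the measured cations accounts for their accompanying anions.
    """
    return assumed_osmolality - 2.0 * (stool_na + stool_k)

# Electrolyte-rich stool: electrolytes account for the osmolality (small gap).
print(fecal_osmotic_gap(stool_na=90, stool_k=40))   # 290 - 2*130 = 30
# An unmeasured solute (e.g., Mg2+) in the stool leaves a large gap (>50).
print(fecal_osmotic_gap(stool_na=40, stool_k=30))   # 290 - 2*70 = 150
```

A gap >50 mosmol/L points toward an osmotic mechanism, whereas a small gap fits the secretory pattern described earlier.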
Steatorrheal Causes Fat malabsorption may lead to greasy, foul-smelling, difficult-to-flush diarrhea, often associated with weight loss and nutritional deficiencies due to concomitant malabsorption of amino acids and vitamins. Increased fecal output is caused by the osmotic effects of fatty acids, especially after bacterial hydroxylation, and, to a lesser extent, by the neutral fat. Quantitatively, steatorrhea is defined as stool fat exceeding the normal 7 g/d; rapid-transit diarrhea may result in fecal fat of up to 14 g/d; daily fecal fat averages 15–25 g with small-intestinal diseases and is often >32 g with pancreatic exocrine insufficiency. Intraluminal maldigestion, mucosal malabsorption, or lymphatic obstruction may produce steatorrhea.

INTRALUMINAL MALDIGESTION This condition most commonly results from pancreatic exocrine insufficiency, which occurs when >90% of pancreatic secretory function is lost. Chronic pancreatitis, usually a sequel of ethanol abuse, most frequently causes pancreatic insufficiency. Other causes include cystic fibrosis; pancreatic duct obstruction; and, rarely, somatostatinoma. Bacterial overgrowth in the small intestine may deconjugate bile acids and alter micelle formation, impairing fat digestion; it occurs with stasis from a blind loop, a small-bowel diverticulum, or dysmotility and is especially likely in the elderly. Finally, cirrhosis or biliary obstruction may lead to mild steatorrhea due to deficient intraluminal bile acid concentration.

MUCOSAL MALABSORPTION Mucosal malabsorption occurs with a variety of enteropathies but most commonly results from celiac disease. This gluten-sensitive enteropathy affects all ages, is characterized by villous atrophy and crypt hyperplasia in the proximal small bowel, and can present with fatty diarrhea associated with multiple nutritional deficiencies of varying severity.
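The quantitative fecal-fat figures quoted above (normal ≤7 g/d; up to 14 g/d from rapid transit alone; 15–25 g/d on average with small-intestinal disease; often >32 g/d with pancreatic exocrine insufficiency) can be summarized in a sketch. The function and its category strings are illustrative, not a validated diagnostic rule, and the bin boundaries between the quoted ranges are assumptions.

```python
def interpret_fecal_fat(grams_per_day: float) -> str:
    """Map a 24-h fecal fat measurement onto the ranges quoted in the text."""
    if grams_per_day <= 7:
        return "normal (steatorrhea defined as >7 g/d)"
    if grams_per_day <= 14:
        return "mild; may be produced by rapid transit alone"
    if grams_per_day <= 32:
        return "in the range averaged by small-intestinal diseases (15-25 g/d)"
    return "often seen with pancreatic exocrine insufficiency (>32 g/d)"
```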
Celiac disease is much more frequent than previously thought; it affects ∼1% of the population, frequently presents without steatorrhea, can mimic IBS, and has many other GI and extraintestinal manifestations. Tropical sprue may produce a similar histologic and clinical syndrome but occurs in residents of or travelers to tropical climates; its abrupt onset and response to antibiotics suggest an infectious etiology. Whipple's disease, caused by the bacillus Tropheryma whipplei and characterized by histiocytic infiltration of the small-bowel mucosa, is a less common cause of steatorrhea that most typically occurs in young or middle-aged men; it is frequently associated with arthralgias, fever, lymphadenopathy, and extreme fatigue, and it may affect the CNS and endocardium. A similar clinical and histologic picture results from Mycobacterium avium-intracellulare infection in patients with AIDS. Abetalipoproteinemia is a rare defect of chylomicron formation and fat malabsorption in children, associated with acanthocytic erythrocytes, ataxia, and retinitis pigmentosa. Several other conditions may cause mucosal malabsorption, including infections, especially with protozoa such as Giardia; numerous medications (e.g., olmesartan, mycophenolate mofetil, colchicine, cholestyramine, neomycin); amyloidosis; and chronic ischemia.

POSTMUCOSAL LYMPHATIC OBSTRUCTION This condition, due to the rare congenital intestinal lymphangiectasia or to acquired lymphatic obstruction secondary to trauma, tumor, cardiac disease, or infection, leads to the unique constellation of fat malabsorption with enteric losses of protein (often causing edema) and lymphocytopenia. Carbohydrate and amino acid absorption are preserved.

Inflammatory Causes Inflammatory diarrheas are generally accompanied by pain, fever, bleeding, or other manifestations of inflammation.
The mechanism of diarrhea may not only be exudation but, depending on lesion site, may include fat malabsorption, disrupted fluid/electrolyte absorption, and hypersecretion or hypermotility from release of cytokines and other inflammatory mediators. The unifying feature on stool analysis is the presence of leukocytes or leukocyte-derived proteins such as calprotectin. With severe inflammation, exudative protein loss can lead to anasarca (generalized edema). Any middle-aged or older person with chronic inflammatory-type diarrhea, especially with blood, should be carefully evaluated to exclude a colorectal tumor.

IDIOPATHIC INFLAMMATORY BOWEL DISEASE The illnesses in this category, which include Crohn's disease and chronic ulcerative colitis, are among the most common organic causes of chronic diarrhea in adults and range in severity from mild to fulminant and life-threatening. They may be associated with uveitis, polyarthralgias, cholestatic liver disease (primary sclerosing cholangitis), and skin lesions (erythema nodosum, pyoderma gangrenosum). Microscopic colitis, including both lymphocytic and collagenous colitis, is an increasingly recognized cause of chronic watery diarrhea, especially in middle-aged women and in those taking NSAIDs, statins, proton pump inhibitors (PPIs), or selective serotonin reuptake inhibitors (SSRIs); biopsy of a normal-appearing colon is required for histologic diagnosis. It may coexist with symptoms suggesting IBS, or with celiac sprue or drug-induced enteropathy. It typically responds well to anti-inflammatory drugs (e.g., bismuth), to the opioid agonist loperamide, or to budesonide.

PRIMARY OR SECONDARY FORMS OF IMMUNODEFICIENCY Immunodeficiency may lead to prolonged infectious diarrhea. With selective IgA deficiency or common variable hypogammaglobulinemia, diarrhea is particularly prevalent and is often the result of giardiasis, bacterial overgrowth, or sprue.
EOSINOPHILIC GASTROENTERITIS Eosinophil infiltration of the mucosa, muscularis, or serosa at any level of the GI tract may cause diarrhea, pain, vomiting, or ascites. Affected patients often have an atopic history; Charcot-Leyden crystals from extruded eosinophil contents may be seen on microscopic inspection of stool; and peripheral eosinophilia is present in 50–75% of patients. While hypersensitivity to certain foods occurs in adults, true food allergy causing chronic diarrhea is rare.

OTHER CAUSES Chronic inflammatory diarrhea may be caused by radiation enterocolitis, chronic graft-versus-host disease, Behçet's syndrome, and Cronkhite-Canada syndrome, among others.

Dysmotility Causes Rapid transit may accompany many diarrheas as a secondary or contributing phenomenon, but primary dysmotility is an unusual etiology of true diarrhea. Stool features often suggest a secretory diarrhea, but mild steatorrhea of up to 14 g of fat per day can be produced by maldigestion from rapid transit alone. Hyperthyroidism, carcinoid syndrome, and certain drugs (e.g., prostaglandins, prokinetic agents) may produce hypermotility with resultant diarrhea. Primary visceral neuromyopathies or idiopathic acquired intestinal pseudoobstruction may lead to stasis with secondary bacterial overgrowth causing diarrhea. Diabetic diarrhea, often accompanied by peripheral and generalized autonomic neuropathies, may occur in part because of intestinal dysmotility. The exceedingly common IBS (10% point prevalence, 1–2% per year incidence) is characterized by disturbed intestinal and colonic motor and sensory responses to various stimuli. The increased stool frequency typically ceases at night, alternates with periods of constipation, is accompanied by abdominal pain relieved with defecation, and rarely results in weight loss.

Factitial Causes Factitial diarrhea accounts for up to 15% of unexplained diarrheas referred to tertiary care centers.
Whether as a form of Munchausen syndrome (deception or self-injury for secondary gain) or of an eating disorder, some patients covertly self-administer laxatives alone or in combination with other medications (e.g., diuretics) or surreptitiously add water or urine to stool sent for analysis. Such patients are typically women, often with histories of psychiatric illness, and disproportionately from careers in health care. Hypotension and hypokalemia are common co-presenting features. The evaluation of such patients may be difficult: contamination of the stool with water or urine is suggested by very low or very high stool osmolarity, respectively. Such patients often deny this possibility when confronted, but they do benefit from psychiatric counseling when they acknowledge their behavior.

APPROACH TO THE PATIENT: Chronic Diarrhea

The laboratory tools available to evaluate the very common problem of chronic diarrhea are extensive, and many are costly and invasive. As such, the diagnostic evaluation must be rationally directed by a careful history, including medications, and physical examination (Fig. 55-3A). When this strategy is unrevealing, simple triage tests are often warranted to direct the choice of more complex investigations (Fig. 55-3B). The history, physical examination (Table 55-4), and routine blood studies should attempt to characterize the mechanism of diarrhea, identify diagnostically helpful associations, and assess the patient's fluid/electrolyte and nutritional status. Patients should be questioned about the onset, duration, pattern, aggravating (especially dietary) and relieving factors, and stool characteristics of their diarrhea. The presence or absence of fecal incontinence, fever, weight loss, pain, certain exposures (travel, medications, contacts with diarrhea), and common extraintestinal manifestations (skin changes, arthralgias, oral aphthous ulcers) should be noted.
A family history of IBD or sprue may indicate those possibilities. Physical findings may offer clues such as a thyroid mass, wheezing, heart murmurs, edema, hepatomegaly, abdominal masses, lymphadenopathy, mucocutaneous abnormalities, perianal fistulas, or anal sphincter laxity. Peripheral blood leukocytosis or an elevated sedimentation rate or C-reactive protein level suggests inflammation; anemia reflects blood loss or nutritional deficiencies; and eosinophilia may occur with parasitoses, neoplasia, collagen-vascular disease, allergy, or eosinophilic gastroenteritis. Blood chemistries may demonstrate electrolyte, hepatic, or other metabolic disturbances. Measuring IgA tissue transglutaminase antibodies may help detect celiac disease. Bile acid diarrhea is confirmed by a scintigraphic radiolabeled bile acid retention test; however, this test is not available in many countries. Alternative approaches are a screening blood test (serum C4 or FGF-19), measurement of fecal bile acids, or a therapeutic trial with a bile acid sequestrant (e.g., cholestyramine or colesevelam). A therapeutic trial is often appropriate, definitive, and highly cost-effective when a specific diagnosis is suggested at the initial physician encounter. For example, chronic watery diarrhea that ceases with fasting in an otherwise healthy young adult may justify a trial of a lactose-restricted diet; bloating and diarrhea persisting since a mountain backpacking trip may warrant a trial of metronidazole for likely giardiasis; and postprandial diarrhea persisting after resection of the terminal ileum might be due to bile acid malabsorption and be treated with cholestyramine or colesevelam before further evaluation. Persistent symptoms require additional investigation.
Certain diagnoses may be suggested at the initial encounter (e.g., idiopathic IBD); however, additional focused evaluations may be necessary to confirm the diagnosis and characterize the severity or extent of disease so that treatment can be best guided. Patients suspected of having IBS should be evaluated initially with flexible sigmoidoscopy with colorectal biopsies to exclude IBD and, in particular, microscopic colitis, which is clinically indistinguishable from IBS with diarrhea; those with normal findings might be reassured and, as indicated, treated empirically with antispasmodics, antidiarrheals, or antidepressants (e.g., tricyclic agents). Any patient who presents with chronic diarrhea and hematochezia should be evaluated with stool microbiologic studies and colonoscopy. In an estimated two-thirds of cases, the cause of chronic diarrhea remains unclear after the initial encounter, and further testing is required. Quantitative stool collection and analyses can yield important objective data that may establish a diagnosis or characterize the type of diarrhea as a triage for focused additional studies (Fig. 55-3B).

[Figure 55-3 flowcharts: (A) exclude an iatrogenic problem (medication, surgery); blood per rectum → colonoscopy + biopsy; features suggesting malabsorption → small-bowel imaging, biopsy, aspirate; pain aggravated before a bowel movement, relieved with a bowel movement, and a sense of incomplete evacuation → suspect IBS; no blood and no features of malabsorption → consider functional diarrhea and dietary exclusion (e.g., lactose, sorbitol) after a limited screen for organic disease. (B) Persistent chronic diarrhea is triaged by stool volume, osmolality, and pH; laxative and hormonal screens; 48-h stool fat; colonoscopy + biopsy; small-bowel x-ray, biopsy, and aspirate; pancreatic function testing when stool fat is >20 g/d; full gut transit measurement; and, when screening tests are all normal, opioid treatment titrated to the speed of transit with follow-up.]

FIGURE 55-3 Chronic diarrhea. A. Initial management based on accompanying symptoms or features. B. Evaluation based on findings from a limited age-appropriate screen for organic disease. Alb, albumin; bm, bowel movement; Hb, hemoglobin; IBS, irritable bowel syndrome; MCH, mean corpuscular hemoglobin; MCV, mean corpuscular volume; OSM, osmolality; pr, per rectum. (Reprinted from M Camilleri: Clin Gastroenterol Hepatol 2:198, 2004.)

If stool weight is >200 g/d, additional stool analyses should be performed that might include electrolyte concentration, pH, occult blood testing, leukocyte inspection (or leukocyte protein assay), fat quantitation, and laxative screens.

TABLE 55-4 Physical Examination in Patients with Chronic Diarrhea
1. Are there general features to suggest malabsorption or inflammatory bowel disease (IBD) such as anemia, dermatitis herpetiformis, edema, or clubbing?
2. Are there features to suggest underlying autonomic neuropathy or collagen-vascular disease in the pupils, orthostasis, skin, hands, or joints?
3. Is there an abdominal mass or tenderness?
4. Are there any abnormalities of rectal mucosa, rectal defects, or altered anal sphincter functions?
5. Are there any mucocutaneous manifestations of systemic disease such as dermatitis herpetiformis (celiac disease), erythema nodosum (ulcerative colitis), flushing (carcinoid), or oral ulcers (IBD or celiac disease)?

For secretory diarrheas (watery, normal osmotic gap), possible medication-related side effects or surreptitious laxative use should be reconsidered. Microbiologic studies should be done, including fecal bacterial cultures (including media for Aeromonas and Plesiomonas), inspection for ova and parasites, and Giardia antigen assay (the most sensitive test for giardiasis). Small-bowel bacterial overgrowth can be excluded by intestinal aspirates with quantitative cultures or by glucose or lactulose breath tests involving measurement of breath hydrogen, methane, or other metabolites.
However, interpretation of these breath tests may be confounded by disturbances of intestinal transit. Upper endoscopy and colonoscopy with biopsies and small-bowel x-rays (formerly barium studies, but increasingly CT enterography or magnetic resonance enteroclysis) are helpful to rule out structural or occult inflammatory disease. When suggested by the history or other findings, screens for peptide hormones should be pursued (e.g., serum gastrin, VIP, calcitonin, and thyroid hormone/thyroid-stimulating hormone; urinary 5-hydroxyindoleacetic acid; histamine). Further evaluation of osmotic diarrhea should include tests for lactose intolerance and magnesium ingestion, the two most common causes. Low fecal pH suggests carbohydrate malabsorption; lactose malabsorption can be confirmed by lactose breath testing or by a therapeutic trial with lactose exclusion and observation of the effect of a lactose challenge (e.g., a liter of milk). Lactase determination on small-bowel biopsy is not generally available. If fecal magnesium or laxative levels are elevated, inadvertent or surreptitious ingestion should be considered, and psychiatric help should be sought. For those with proven fatty diarrhea, endoscopy with small-bowel biopsy (including aspiration for Giardia and quantitative cultures) should be performed; if this procedure is unrevealing, a small-bowel radiograph is often an appropriate next step. If small-bowel studies are negative or if pancreatic disease is suspected, pancreatic exocrine insufficiency should be excluded with direct tests, such as the secretin-cholecystokinin stimulation test or a variation that can be performed endoscopically. In general, indirect tests such as assays of fecal elastase or chymotrypsin activity or the bentiromide test have fallen out of favor because of low sensitivity and specificity. Chronic inflammatory-type diarrheas should be suspected from the presence of blood or leukocytes in the stool.
Such findings warrant stool cultures; inspection for ova and parasites; C. difficile toxin assay; colonoscopy with biopsies; and, if indicated, small-bowel contrast studies. Treatment of chronic diarrhea depends on the specific etiology and may be curative, suppressive, or empirical. If the cause can be eradicated, treatment is curative, as with resection of a colorectal cancer, antibiotic administration for Whipple's disease or tropical sprue, or discontinuation of a drug. For many chronic conditions, diarrhea can be controlled by suppression of the underlying mechanism. Examples include elimination of dietary lactose for lactase deficiency or of gluten for celiac sprue; use of glucocorticoids or other anti-inflammatory agents for idiopathic IBDs; bile acid sequestrants for bile acid malabsorption; PPIs for the gastric hypersecretion of gastrinomas; somatostatin analogues such as octreotide for malignant carcinoid syndrome; prostaglandin inhibitors such as indomethacin for medullary carcinoma of the thyroid; and pancreatic enzyme replacement for pancreatic insufficiency. When the specific cause or mechanism of chronic diarrhea evades diagnosis, empirical therapy may be beneficial. Mild opiates, such as diphenoxylate or loperamide, are often helpful in mild or moderate watery diarrhea. For those with more severe diarrhea, codeine or tincture of opium may be beneficial. Such antimotility agents should be avoided with severe IBD because toxic megacolon may be precipitated. Clonidine, an α2-adrenergic agonist, may allow control of diabetic diarrhea, although the medication may be poorly tolerated because it causes postural hypotension. The 5-HT3 receptor antagonists (e.g., alosetron) may relieve diarrhea and urgency in patients with diarrhea-predominant IBS. For all patients with chronic diarrhea, fluid and electrolyte repletion is an important component of management (see "Acute Diarrhea," earlier).
Replacement of fat-soluble vitamins may also be necessary in patients with chronic steatorrhea.

Constipation is a common complaint in clinical practice and usually refers to persistent, difficult, infrequent, or seemingly incomplete defecation. Because of the wide range of normal bowel habits, constipation is difficult to define precisely. Most persons have at least three bowel movements per week; however, low stool frequency alone is not the sole criterion for the diagnosis of constipation. Many constipated patients have a normal frequency of defecation but complain of excessive straining, hard stools, lower abdominal fullness, or a sense of incomplete evacuation. The individual patient's symptoms must be analyzed in detail to ascertain what is meant by "constipation" or "difficulty" with defecation. Stool form and consistency are well correlated with the time elapsed since the preceding defecation. Hard, pellety stools occur with slow transit, whereas loose, watery stools are associated with rapid transit. Both small, pellety stools and very large stools are more difficult to expel than normal stools. The perception of hard stools or excessive straining is more difficult to assess objectively, and the need for enemas or digital disimpaction is a clinically useful way to corroborate the patient's perception of difficult defecation. Psychosocial or cultural factors may also be important. A person whose parents attached great importance to daily defecation may become greatly concerned upon missing a daily bowel movement; some children withhold stool to gain attention or because of fear of pain from anal irritation; and some adults habitually ignore or delay the call to have a bowel movement. Pathophysiologically, chronic constipation generally results from inadequate fiber or fluid intake or from disordered colonic transit or anorectal function.
Such disordered transit or anorectal function may result from a neurogastroenterologic disturbance, certain drugs, or advancing age, or may occur in association with a large number of systemic diseases that affect the GI tract (Table 55-5). Constipation of recent onset may be a symptom of significant organic disease such as tumor or stricture. In idiopathic constipation, a subset of patients exhibit delayed emptying of the ascending and transverse colon with prolongation of transit (often in the proximal colon) and a reduced frequency of propulsive high-amplitude propagated contractions (HAPCs). Outlet obstruction to defecation (also called an evacuation disorder) accounts for about a quarter of cases presenting with constipation in tertiary care and may cause delayed colonic transit, which is usually corrected by biofeedback retraining of the disordered defecation. Constipation of any cause may be exacerbated by hospitalization or by chronic illnesses that lead to physical or mental impairment and result in inactivity or physical immobility.

TABLE 55-5 Types of Constipation and Causes, with Examples
- Colonic obstruction: neoplasm; stricture (ischemic, diverticular, inflammatory)
- Anal sphincter spasm: anal fissure, painful hemorrhoids
- Medications: Ca2+ blockers, antidepressants
- Constipation-predominant or alternating bowel pattern
- Slow-transit constipation; megacolon (rarely Hirschsprung's or Chagas' disease)
- Disorders of rectal evacuation: pelvic floor dysfunction; anismus; descending perineum syndrome; rectal mucosal prolapse; rectocele
- Hypothyroidism, hypercalcemia, pregnancy
- Depression, eating disorders, drugs
- Parkinsonism, multiple sclerosis, spinal cord disease

APPROACH TO THE PATIENT: Constipation

A careful history should explore the patient's symptoms and confirm whether he or she is indeed constipated based on frequency (e.g., fewer than three bowel movements per week), consistency (lumpy/hard stools), excessive straining, prolonged defecation time, or a need to support the perineum or digitate the anorectum to facilitate stool evacuation.
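The symptom-based definition in the preceding paragraph can be expressed as a simple checklist. The `BowelHistory` fields and the any()-based rule are a hypothetical sketch, not a formal diagnostic criterion.

```python
from dataclasses import dataclass

@dataclass
class BowelHistory:
    stools_per_week: float
    lumpy_or_hard_stools: bool = False
    excessive_straining: bool = False
    prolonged_defecation_time: bool = False
    manual_maneuvers: bool = False  # perineal support or anorectal digitation

def suggests_constipation(h: BowelHistory) -> bool:
    """True if any feature from the history above is present."""
    return any([
        h.stools_per_week < 3,
        h.lumpy_or_hard_stools,
        h.excessive_straining,
        h.prolonged_defecation_time,
        h.manual_maneuvers,
    ])
```

Note that, as the text emphasizes, a normal stool frequency does not exclude constipation when the other features are present.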
In the vast majority of cases (probably >90%), there is no underlying cause (e.g., cancer, depression, or hypothyroidism), and constipation responds to ample hydration, exercise, and supplementation of dietary fiber (15–25 g/d). A good diet and medication history and attention to psychosocial issues are key. Physical examination and, particularly, a rectal examination should exclude fecal impaction and most of the important diseases that present with constipation and identify features suggesting an evacuation disorder (e.g., high anal sphincter tone, failure of perineal descent, or paradoxical puborectalis contraction during straining to simulate stool evacuation). The presence of weight loss, rectal bleeding, or anemia with constipation mandates either flexible sigmoidoscopy plus barium enema or colonoscopy alone, particularly in patients >40 years, to exclude structural diseases such as cancer or strictures. Colonoscopy alone is most cost-effective in this setting because it provides an opportunity to biopsy mucosal lesions, perform polypectomy, or dilate strictures. Barium enema has advantages over colonoscopy in the patient with isolated constipation because it is less costly and identifies colonic dilation and all significant mucosal lesions or strictures that are likely to present with constipation. Melanosis coli, or pigmentation of the colon mucosa, indicates the use of anthraquinone laxatives such as cascara or senna; however, this is usually apparent from a careful history. An unexpected disorder such as megacolon or cathartic colon may also be detected by colonic radiographs. Measurement of serum calcium, potassium, and thyroid-stimulating hormone levels will identify rare patients with metabolic disorders.
Patients with more troublesome constipation may not respond to fiber alone and may be helped by a bowel-training regimen, which involves taking an osmotic laxative (e.g., magnesium salts, lactulose, sorbitol, polyethylene glycol) and evacuating with enema or suppository (e.g., glycerine or bisacodyl) as needed. After breakfast, a distraction-free 15–20 min on the toilet without straining is encouraged. Excessive straining may lead to development of hemorrhoids and, if there is weakness of the pelvic floor or injury to the pudendal nerve, may result in obstructed defecation from descending perineum syndrome several years later. The few patients who do not benefit from these simple measures, who require long-term treatment, or who fail to respond to potent laxatives should undergo further investigation (Fig. 55-4). Novel agents that induce secretion (e.g., lubiprostone, a chloride channel activator, or linaclotide, a guanylate cyclase C agonist that activates chloride secretion) are also available. A small minority (probably <5%) of patients have severe or "intractable" constipation; about 25% have evacuation disorders. These are the patients most likely to require evaluation by gastroenterologists or in referral centers. Further observation of the patient may occasionally reveal a previously unrecognized cause, such as an evacuation disorder, laxative abuse, malingering, or psychological disorder. In these patients, evaluations of the physiologic function of the colon and pelvic floor and of psychological status aid in the rational choice of treatment. Even among these highly selected patients with severe constipation, a cause can be identified in only about one-third of tertiary referral patients, with the others being diagnosed with normal transit constipation.
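The staged escalation described above (fiber and simple measures first, then a bowel-training regimen, then physiologic testing for nonresponders) can be summarized as a triage sketch. The function name and boolean flags are hypothetical simplifications of the clinical judgments involved:

```python
def next_step_for_constipation(responds_to_fiber: bool,
                               responds_to_laxatives: bool,
                               needs_long_term_treatment: bool) -> str:
    """Rough triage sketch of the escalation pathway described in the text.
    Hypothetical helper; the flags stand in for clinical assessments."""
    if responds_to_fiber:
        return "continue fiber supplementation and simple measures"
    if responds_to_laxatives and not needs_long_term_treatment:
        return "bowel-training regimen with osmotic laxative as needed"
    # Failure of potent laxatives, or the need for long-term therapy,
    # warrants physiologic investigation (colonic transit, anorectal tests).
    return "refer for physiologic investigation"
```

This mirrors the text's point that only the small minority failing simple measures should reach formal colonic and pelvic floor testing.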
FIGURE 55-4 Algorithm for the management of constipation. In chronic constipation, the workup proceeds from clinical and basic laboratory tests (bloods, chest and abd x-ray) and exclusion of mechanical obstruction (e.g., colonoscopy) to measurement of colonic transit. A known underlying disorder is treated; with slow colonic transit and no known underlying disorder, anorectal manometry and balloon expulsion follow, with rectoanal angle measurement or defecation proctography if abnormal, leading to appropriate treatment (rehabilitation program, surgery, or other). Normal transit suggests functional bowel disease. abd, abdominal.

Measurement of Colonic Transit
Radiopaque marker transit tests are easy, repeatable, generally safe, inexpensive, reliable, and highly applicable in evaluating constipated patients in clinical practice. Several validated methods are very simple. For example, radiopaque markers are ingested; an abdominal flat film taken 5 days later should indicate passage of 80% of the markers out of the colon without the use of laxatives or enemas. This test does not provide useful information about the transit profile of the stomach and small bowel. Radioscintigraphy with a delayed-release capsule containing radiolabeled particles has been used to noninvasively characterize normal, accelerated, or delayed colonic function over 24–48 h with low radiation exposure. This approach simultaneously assesses gastric, small-bowel, and colonic transit (small-bowel transit may be important in ∼20% of patients with delayed colonic transit because the delay reflects a more generalized GI motility disorder). The disadvantages are the greater cost and the need for specific materials prepared in a nuclear medicine laboratory.

Anorectal and Pelvic Floor Tests
Pelvic floor dysfunction is suggested by the inability to evacuate the rectum, a feeling of persistent rectal fullness, rectal pain, the need to extract stool from the rectum digitally, application of pressure on the posterior wall of the vagina, support of the perineum during straining, and excessive straining.
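The pass/fail criterion for the radiopaque-marker transit study described above (at least 80% of ingested markers expelled from the colon by the day-5 film) reduces to a one-line check; the function and parameter names here are illustrative:

```python
def marker_transit_normal(markers_ingested: int, markers_on_day5_film: int) -> bool:
    """Radiopaque marker test sketch: transit is considered normal when at
    least 80% of the ingested markers have left the colon by the day-5
    abdominal flat film (no laxatives or enemas in the interim).
    Illustrative only."""
    markers_passed = markers_ingested - markers_on_day5_film
    return markers_passed / markers_ingested >= 0.80
```

For example, with 24 markers ingested and 3 still visible on day 5, 21/24 (about 88%) have passed, which meets the threshold.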
These significant symptoms should be contrasted with the simple sense of incomplete rectal evacuation, which is common in IBS. Formal psychological evaluation may identify eating disorders, "control issues," depression, or post-traumatic stress disorders that may respond to cognitive or other intervention and may be important in restoring quality of life to patients who present with chronic constipation. A simple clinical test in the office to document a nonrelaxing puborectalis muscle is to have the patient strain to expel the index finger during a digital rectal examination. Motion of the puborectalis posteriorly during straining indicates proper coordination of the pelvic floor muscles, whereas motion anteriorly with paradoxical contraction during simulated evacuation indicates pelvic floor dysfunction. Perineal descent is relatively easy to gauge clinically by placing the patient in the left decubitus position and watching the perineum relative to bony landmarks during straining to detect either inadequate descent (<1.5 cm, a sign of pelvic floor dysfunction) or perineal ballooning (>4 cm, suggesting excessive perineal descent). A useful overall test of evacuation is the balloon expulsion test. A balloon-tipped urinary catheter is placed and inflated with 50 mL of water. Normally, a patient can expel it while seated on a toilet or in the left lateral decubitus position. In the lateral position, the weight needed to facilitate expulsion of the balloon is determined; normally, expulsion occurs with <200 g added or unaided within 2 min. Anorectal manometry, when used in the evaluation of patients with severe constipation, may find an excessively high resting (>80 mmHg) or squeeze anal sphincter tone, suggesting anismus (anal sphincter spasm). This test also identifies rare syndromes, such as adult Hirschsprung's disease, by the absence of the rectoanal inhibitory reflex.
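Several of the numeric cutoffs just quoted (perineal descent <1.5 cm or >4 cm, balloon expulsion requiring ≥200 g of added weight, resting anal pressure >80 mmHg) can be gathered into a single hedged sketch. The function and its output strings are illustrative only and are no substitute for interpretation in the full clinical context:

```python
def interpret_pelvic_floor_tests(perineal_descent_cm: float,
                                 balloon_weight_g: float,
                                 resting_anal_pressure_mmHg: float) -> list:
    """Sketch applying the numeric thresholds quoted in the text; the
    function, parameters, and finding labels are illustrative only."""
    findings = []
    if perineal_descent_cm < 1.5:
        findings.append("inadequate perineal descent (pelvic floor dysfunction)")
    elif perineal_descent_cm > 4.0:
        findings.append("excessive perineal descent (descending perineum)")
    if balloon_weight_g >= 200:          # normal expulsion needs <200 g added
        findings.append("abnormal balloon expulsion")
    if resting_anal_pressure_mmHg > 80:  # high resting tone suggests anismus
        findings.append("high resting anal sphincter tone (possible anismus)")
    return findings
```

An empty list corresponds to all three measurements falling in the ranges the text describes as normal.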
Defecography (a dynamic barium enema including lateral views obtained during barium expulsion or a magnetic resonance defecogram) reveals "soft abnormalities" in many patients; the most relevant findings are the measured changes in rectoanal angle, anatomic defects of the rectum such as internal mucosal prolapse, and enteroceles or rectoceles. Surgically remediable conditions are identified in only a few patients. These include severe, whole-thickness intussusception with complete outlet obstruction due to funnel-shaped plugging at the anal canal or an extremely large rectocele that fills preferentially during attempts at defecation instead of expulsion of the barium through the anus. In summary, defecography requires an interested and experienced radiologist, and abnormalities are not pathognomonic for pelvic floor dysfunction. The most common cause of outlet obstruction is failure of the puborectalis muscle to relax; this is not identified by barium defecography but can be demonstrated by magnetic resonance defecography, which provides more information about the structure and function of the pelvic floor, distal colorectum, and anal sphincters. Neurologic testing (electromyography) is more helpful in the evaluation of patients with incontinence than of those with symptoms suggesting obstructed defecation. The absence of neurologic signs in the lower extremities suggests that any documented denervation of the puborectalis results from pelvic (e.g., obstetric) injury or from stretching of the pudendal nerve by chronic, long-standing straining. Constipation is common among patients with spinal cord injuries and neurologic diseases such as Parkinson's disease, multiple sclerosis, and diabetic neuropathy.
Spinal-evoked responses during electrical rectal stimulation or stimulation of external anal sphincter contraction by applying magnetic stimulation over the lumbosacral cord identify patients with limited sacral neuropathies with sufficient residual nerve conduction to attempt biofeedback training. In summary, a balloon expulsion test is an important screening test for anorectal dysfunction. Rarely, an anatomic evaluation of the rectum or anal sphincters and an assessment of pelvic floor relaxation are needed to evaluate patients in whom obstructed defecation is suspected and is associated with symptoms of rectal mucosal prolapse, pressure of the posterior wall of the vagina to facilitate defecation (suggestive of anterior rectocele), or prior pelvic surgery that may be complicated by enterocele. After the cause of constipation is characterized, a treatment decision can be made. Slow-transit constipation requires aggressive medical or surgical treatment; anismus or pelvic floor dysfunction usually responds to biofeedback management (Fig. 55-4). The remaining ∼60% of patients with constipation have normal colonic transit and can be treated symptomatically. Patients with spinal cord injuries or other neurologic disorders require a dedicated bowel regimen that often includes rectal stimulation, enema therapy, and carefully timed laxative therapy. Patients with constipation are treated with bulk, osmotic, prokinetic, secretory, and stimulant laxatives including fiber, psyllium, milk of magnesia, lactulose, polyethylene glycol (colonic lavage solution), lubiprostone, linaclotide, and bisacodyl, or, in some countries, prucalopride, a 5-HT4 agonist.
If a 3- to 6-month trial of medical therapy fails and there is no associated obstructed defecation, the patient should be considered for laparoscopic colectomy with ileorectostomy; however, this surgery should not be undertaken if there is continued evidence of an evacuation disorder or a generalized GI dysmotility. Referral to a specialized center for further tests of colonic motor function is warranted. The decision to resort to surgery is facilitated in the presence of megacolon and megarectum. The complications after surgery include small-bowel obstruction (11%) and fecal soiling, particularly at night during the first postoperative year. Frequency of defecation is 3–8 per day during the first year, dropping to 1–3 per day from the second year after surgery. Patients who have a combined (evacuation and transit/motility) disorder should pursue pelvic floor retraining (biofeedback and muscle relaxation), psychological counseling, and dietetic advice first. If symptoms remain intractable despite biofeedback and optimized medical therapy, colectomy and ileorectostomy could be considered, provided the evacuation disorder has resolved. In patients with pelvic floor dysfunction alone, biofeedback training has a 70–80% success rate, measured by the acquisition of comfortable stool habits. Attempts to manage pelvic floor dysfunction with operations (internal anal sphincter or puborectalis muscle division) or injections with botulinum toxin have achieved only mediocre success and have been largely abandoned.

Involuntary Weight Loss
Russell G. Robertson, J. Larry Jameson

Involuntary weight loss (IWL) is frequently insidious and can have important implications, often serving as a harbinger of serious underlying disease. Clinically important weight loss is defined as the loss of 10 pounds (4.5 kg) or >5% of one's body weight over a period of 6–12 months. IWL is encountered in up to 8% of all adult outpatients and 27% of frail persons age 65 years and older.
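The definition of clinically important weight loss quoted above (loss of 4.5 kg, or >5% of body weight, over 6–12 months) is easy to express as a sketch; the function and parameter names are illustrative only:

```python
def clinically_important_weight_loss(baseline_kg: float,
                                     current_kg: float,
                                     months: float) -> bool:
    """Sketch of the definition quoted in the text: loss of 10 lb (4.5 kg)
    or >5% of body weight over a period of 6-12 months. Illustrative only."""
    if not (6 <= months <= 12):
        return False  # the quoted definition applies to a 6-12 month window
    loss_kg = baseline_kg - current_kg
    return loss_kg >= 4.5 or loss_kg / baseline_kg > 0.05
```

For example, a fall from 80 kg to 75 kg over 8 months is 5 kg (6.25% of baseline), meeting both criteria.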
There is no identifiable cause in up to one-quarter of patients despite extensive investigation. Conversely, up to half of people who claim to have lost weight have no documented evidence of weight loss. People with no known cause of weight loss generally have a better prognosis than do those with known causes, particularly when the source is neoplastic. Weight loss in older persons is associated with a variety of deleterious effects, including hip fracture, pressure ulcers, impaired immune function, and decreased functional status. Not surprisingly, significant weight loss is associated with increased mortality, which can range from 9% to as high as 38% within 1 to 2.5 years in the absence of clinical awareness and attention.

PHYSIOLOGY OF WEIGHT REGULATION WITH AGING
(See also Chaps. 94e and 415e) Among healthy aging people, total body weight peaks in the sixth decade of life and generally remains stable until the ninth decade, after which it gradually falls. In contrast, lean body mass (fat-free mass) begins to decline at a rate of 0.3 kg per year in the third decade, and the rate of decline increases further beginning at age 60 in men and age 65 in women. These changes in lean body mass largely reflect the age-dependent decline in growth hormone secretion and, consequently, circulating levels of insulin-like growth factor type I (IGF-I) that occur with normal aging. Loss of sex steroids, at menopause in women and more gradually with aging in men, also contributes to these changes in body composition. In the healthy elderly, an increase in fat tissue balances the loss in lean body mass until very old age, when loss of both fat and skeletal muscle occurs. Age-dependent changes also occur at the cellular level. Telomeres shorten, and body cell mass—the fat-free portion of cells—declines steadily with aging. Between ages 20 and 80, mean energy intake is reduced by up to 1200 kcal/d in men and 800 kcal/d in women.
Decreased hunger is a reflection of reduced physical activity and loss of lean body mass, producing lower demand for calories and food intake. Several important age-associated physiologic changes also predispose elderly persons to weight loss, such as declining chemosensory function (smell and taste), reduced efficiency of chewing, slowed gastric emptying, and alterations in the neuroendocrine axis, including changes in levels of leptin, cholecystokinin, neuropeptide Y, and other hormones and peptides. These changes are associated with early satiety and a decline in both appetite and the hedonistic appreciation of food. Collectively, they contribute to the "anorexia of aging." Most causes of IWL belong to one of four categories: (1) malignant neoplasms, (2) chronic inflammatory or infectious diseases, (3) metabolic disorders (e.g., hyperthyroidism and diabetes), or (4) psychiatric disorders (Table 56-1).

TABLE 56-1 Causes of Involuntary Weight Loss
Disorders of the mouth and teeth

Not infrequently, more than one of these causes can be responsible for IWL. In most series, IWL is caused by malignant disease in a quarter of patients and by organic disease in one-third, with the remainder due to psychiatric disease, medications, or uncertain causes. The most common malignant causes of IWL are gastrointestinal, hepatobiliary, hematologic, lung, breast, genitourinary, ovarian, and prostate. Half of all patients with cancer lose some body weight; one-third lose more than 5% of their original body weight, and up to 20% of all cancer deaths are caused directly by cachexia (through immobility and/or cardiac/respiratory failure). The greatest incidence of weight loss is seen among patients with solid tumors. Malignancy that reveals itself through significant weight loss usually has a very poor prognosis. In addition to malignancies, gastrointestinal causes are among the most prominent causes of IWL.
Peptic ulcer disease, inflammatory bowel disease, dysmotility syndromes, chronic pancreatitis, celiac disease, constipation, and atrophic gastritis are some of the more common entities. Oral and dental problems are easily overlooked and may manifest with halitosis, poor oral hygiene, xerostomia, inability to chew, reduced masticatory force, nonocclusion, temporomandibular joint syndrome, edentulousness, and pain due to caries or abscesses. Tuberculosis, fungal diseases, parasites, subacute bacterial endocarditis, and HIV are well-documented causes of IWL. Cardiovascular and pulmonary diseases cause unintentional weight loss through increased metabolic demand and decreased appetite and caloric intake. Uremia produces nausea, anorexia, and vomiting. Connective tissue diseases may increase metabolic demand and disrupt nutritional balance. As the incidence of diabetes mellitus increases with aging, the associated glucosuria can contribute to weight loss. Hyperthyroidism in the elderly may have less prominent sympathomimetic features and may present as "apathetic hyperthyroidism" or T3 toxicosis (Chap. 405). Neurologic injuries such as stroke, quadriplegia, and multiple sclerosis may lead to visceral and autonomic dysfunction that can impair caloric intake. Dysphagia from these neurologic insults is a common mechanism. Functional disability that compromises activities of daily living (ADLs) is a common cause of undernutrition in the elderly. Visual impairment from ophthalmic disorders and central nervous system disorders such as tremor can limit people's ability to prepare and eat meals. IWL may be one of the earliest manifestations of Alzheimer's dementia. Isolation and depression are significant causes of IWL that may manifest as an inability to care for oneself, including nutritional needs. A cytokine-mediated inflammatory metabolic cascade can be both a cause of and a manifestation of depression.
Bereavement can be a cause of IWL and, when present, is more pronounced in men. More intense forms of mental illness such as paranoid disorders may lead to delusions about food and cause weight loss. Alcoholism can be a significant source of weight loss and malnutrition. Elderly persons living in poverty may have to choose whether to purchase food or use the money for other expenses, including medications. Institutionalization is an independent risk factor, as up to 30–50% of nursing home patients have inadequate food intake. Medications can cause anorexia, nausea, vomiting, gastrointestinal distress, diarrhea, dry mouth, and changes in taste. This is particularly an issue in the elderly, many of whom take five or more medications. The four major manifestations of IWL are (1) anorexia (loss of appetite), (2) sarcopenia (loss of muscle mass), (3) cachexia (a syndrome that combines weight loss, loss of muscle and adipose tissue, anorexia, and weakness), and (4) dehydration. The current obesity epidemic adds complexity, as excess adipose tissue can mask the development of sarcopenia and delay awareness of the development of cachexia. If it is not possible to measure weight directly, a change in clothing size, corroboration of weight loss by a relative or friend, and a numeric estimate of weight loss provided by the patient are suggestive of true weight loss. Initial assessment includes a comprehensive history and physical, a complete blood count, tests of liver enzyme levels, C-reactive protein, erythrocyte sedimentation rate, renal function studies, thyroid function tests, chest radiography, and an abdominal ultrasound (Table 56-2). Age, sex, and risk factor–specific cancer screening tests, such as mammography and colonoscopy, should be performed (Chap. 100). Patients at risk should have HIV testing. 
All elderly patients with weight loss should undergo screening for dementia and depression by using instruments such as the Mini-Mental State Examination and the Geriatric Depression Scale, respectively (Chap. 11).

TABLE 56-2
Clinical Assessmenta | Testing
10% weight loss in 180 d | Comprehensive electrolyte and metabolic panel, including liver and renal function tests
Body mass index <21 | Thyroid function tests
25% of food left uneaten after 7 d | Erythrocyte sedimentation rate
Change in fit of clothing | C-reactive protein
Change in appetite, smell, or taste | Ferritin
Abdominal pain, nausea, vomiting, diarrhea, constipation, dysphagia | HIV testing, if indicated
aMay be more specific to assess weight loss in the elderly.

The Mini Nutritional Assessment (www.mna-elderly.com) and the Nutrition Screening Initiative (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1694757/) are also available for the nutritional assessment of elderly patients. Almost all patients with a malignancy and >90% of those with other organic diseases have at least one laboratory abnormality. In patients presenting with substantial IWL, major organic and malignant diseases are unlikely when a baseline evaluation is completely normal. Careful follow-up rather than undirected testing is advised since the prognosis of weight loss of undetermined cause is generally favorable. The first priority in managing weight loss is to identify and treat the underlying causes systematically. Treatment of underlying metabolic, psychiatric, infectious, or other systemic disorders may be sufficient to restore weight and functional status gradually. Medications that cause nausea or anorexia should be withdrawn or changed, if possible. For those with unexplained IWL, oral nutritional supplements such as high-energy drinks sometimes reverse weight loss.
Advising patients to consume supplements between meals rather than with a meal may help minimize appetite suppression and facilitate increased overall intake. Orexigenic, anabolic, and anticytokine agents are under investigation. In selected patients, the antidepressant mirtazapine results in a significant increase in body weight, body fat mass, and leptin concentration. Patients with wasting conditions who can comply with an appropriate exercise program gain muscle protein mass, strength, and endurance and may be more capable of performing ADLs.

Gastrointestinal Bleeding

Gastrointestinal bleeding (GIB) accounts for ~150 hospitalizations per 100,000 population annually in the United States, with upper GIB (UGIB) ~1.5–2 times more common than lower GIB (LGIB). The incidence of GIB has decreased in recent decades, primarily due to a reduction in UGIB, and the mortality has also decreased to <5%. Patients today rarely die from exsanguination, but rather die due to decompensation of other underlying illnesses. GIB presents as either overt or occult bleeding. Overt GIB is manifested by hematemesis, vomitus of red blood or "coffee-grounds" material; melena, black, tarry, foul-smelling stool; and/or hematochezia, passage of bright red or maroon blood from the rectum. Occult GIB may be identified in the absence of overt bleeding when patients present with symptoms of blood loss or anemia such as lightheadedness, syncope, angina, or dyspnea; or when routine diagnostic evaluation reveals iron deficiency anemia or a positive fecal occult blood test. GIB is also categorized by the site of bleeding as UGIB, LGIB, or obscure GIB if the source is unclear.

SOURCES OF GASTROINTESTINAL BLEEDING
Upper Gastrointestinal Sources of Bleeding (Table 57-1)
Peptic ulcers are the most common cause of UGIB, accounting for ∼50% of cases. Mallory-Weiss tears account for ~5–10% of cases. The proportion of patients bleeding from varices varies widely from ~5–40%, depending on the population.
Hemorrhagic or erosive gastropathy (e.g., due to nonsteroidal anti-inflammatory drugs [NSAIDs] or alcohol) and erosive esophagitis often cause mild UGIB, but major bleeding is rare.

PEPTIC ULCERS
Characteristics of an ulcer at endoscopy provide important prognostic information. One-third of patients with active bleeding or a nonbleeding visible vessel have further bleeding that requires urgent surgery if they are treated conservatively. These patients benefit from endoscopic therapy with bipolar electrocoagulation, heater probe, injection therapy (e.g., absolute alcohol, 1:10,000 epinephrine), and/or clips with reductions in bleeding, hospital stay, mortality, and costs. In contrast, patients with clean-based ulcers have rates of recurrent bleeding approaching zero. If stable with no other reason for hospitalization, such patients may be discharged home after endoscopy. Patients without clean-based ulcers usually remain in the hospital for 3 days because most episodes of recurrent bleeding occur within 3 days. Randomized controlled trials document that high-dose, constant-infusion IV proton pump inhibitor (PPI) (80-mg bolus and 8-mg/h infusion), designed to sustain intragastric pH >6 and enhance clot stability, decreases further bleeding and mortality in patients with high-risk ulcers (active bleeding, nonbleeding visible vessel, adherent clot) when given after endoscopic therapy. Patients with lower-risk findings (flat pigmented spot or clean base) do not require endoscopic therapy and receive standard doses of oral PPI.

TABLE 57-1 Sources of Bleeding; Proportion of Patients, %
Source: Data on hospitalizations from year 2000 onward from Am J Gastroenterol 98:1494, 2003; Gastrointest Endosc 57:AB147, 2003; 60:875, 2004; Eur J Gastroenterol Hepatol 16:177, 2004; 17:641, 2005; J Clin Gastroenterol 42:128, 2008; World J Gastroenterol 14:5046, 2008; Dig Dis Sci 54:333, 2009; Gut 60:1327, 2011; Endoscopy 44:998, 2012; J Clin Gastroenterol 48:113, 2014.
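The cumulative drug exposure implied by the high-dose IV PPI regimen described for high-risk ulcers (80-mg bolus, then 8 mg/h) is simple arithmetic. The 72-h default below is an assumption keyed to the 3-day window in which most rebleeding occurs, not a duration stated in the text:

```python
def total_iv_ppi_dose_mg(bolus_mg: float = 80.0,
                         infusion_mg_per_h: float = 8.0,
                         duration_h: float = 72.0) -> float:
    """Cumulative dose for the high-dose IV PPI regimen quoted in the text
    (80-mg bolus, then 8 mg/h continuous infusion). The 72-h duration is an
    illustrative assumption, not a stated protocol."""
    return bolus_mg + infusion_mg_per_h * duration_h

# 80 mg + 8 mg/h x 72 h = 656 mg over the assumed 72-h course
```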
Approximately one-third of patients with bleeding ulcers will rebleed within the next 1–2 years if no preventive strategies are employed. Prevention of recurrent bleeding focuses on the three main factors in ulcer pathogenesis: Helicobacter pylori, NSAIDs, and acid. Eradication of H. pylori in patients with bleeding ulcers decreases rates of rebleeding to <5%. If a bleeding ulcer develops in a patient taking NSAIDs, the NSAIDs should be discontinued. If NSAIDs must be given, a cyclooxygenase 2 (COX-2) selective inhibitor (coxib) plus a PPI should be used. PPI co-therapy alone or a coxib alone is associated with an annual rebleeding rate of ~10% in patients with a recent bleeding ulcer, whereas the combination of a coxib and PPI provides a further significant decrease in recurrent ulcer bleeding. Patients with established cardiovascular disease who develop bleeding ulcers while taking low-dose aspirin should restart aspirin as soon as possible after their bleeding episode (1–7 days). A randomized trial showed that failure to restart aspirin was associated with no significant difference in rebleeding (5% vs. 10% at 30 days) but a significant increase in mortality at 30 days (9% vs. 1%) and 8 weeks (13% vs. 1%) compared with immediate reinstitution of aspirin. Patients with bleeding ulcers unrelated to H. pylori or NSAIDs should remain on PPI therapy indefinitely. Peptic ulcers are discussed in Chap. 348.

MALLORY-WEISS TEARS
The classic history is vomiting, retching, or coughing preceding hematemesis, especially in an alcoholic patient. Bleeding from these tears, which are usually on the gastric side of the gastroesophageal junction, stops spontaneously in 80–90% of patients and recurs in only 0–10%. Endoscopic therapy is indicated for actively bleeding Mallory-Weiss tears. Angiographic therapy with embolization and operative therapy with oversewing of the tear are rarely required. Mallory-Weiss tears are discussed in Chap. 347.
ESOPHAGEAL VARICES
Patients with variceal hemorrhage have poorer outcomes than patients with other sources of UGIB. Urgent endoscopy within 12 h is recommended in cirrhotics with UGIB, and if esophageal varices are present, endoscopic ligation is performed and an IV vasoactive medication (e.g., octreotide 50 μg bolus and 50 μg/h infusion) is given for 2–5 days. Combination endoscopic and medical therapy appears to be superior to either therapy alone in decreasing rebleeding. In patients with advanced liver disease (e.g., Child-Pugh class C with score 10–13), a transjugular intrahepatic portosystemic shunt (TIPS) should be strongly considered within the first 1–2 days of hospitalization because randomized trials show significant decreases in rebleeding and mortality compared with standard endoscopic and medical therapy. Over the long term, treatment with nonselective beta blockers plus endoscopic ligation is recommended because the combination of endoscopic and medical therapy is more effective than either alone in reduction of recurrent esophageal variceal bleeding. In patients who have persistent or recurrent bleeding despite endoscopic and medical therapy, TIPS is recommended. Decompressive surgery (e.g., distal splenorenal shunt) may be considered instead of TIPS in patients with well-compensated cirrhosis. Portal hypertension is also responsible for bleeding from gastric varices, varices in the small and large intestine, and portal hypertensive gastropathy and enterocolopathy. Bleeding gastric varices due to cirrhosis are treated with endoscopic injection of tissue adhesive (e.g., n-butyl cyanoacrylate), if available; if not, TIPS is performed.

HEMORRHAGIC AND EROSIVE GASTROPATHY ("GASTRITIS")
Hemorrhagic and erosive gastropathy, often labeled gastritis, refers to endoscopically visualized subepithelial hemorrhages and erosions. These are mucosal lesions and do not cause major bleeding due to the absence of arteries and veins in the mucosa.
Erosions develop in various clinical settings, the most important of which are NSAID use, alcohol intake, and stress. Half of patients who chronically ingest NSAIDs have erosions, whereas up to 20% of actively drinking alcoholic patients with symptoms of UGIB have evidence of subepithelial hemorrhages or erosions. Stress-related gastric mucosal injury occurs only in extremely sick patients, such as those who have experienced serious trauma, major surgery, burns covering more than one-third of the body surface area, major intracranial disease, or severe medical illness (i.e., ventilator dependence, coagulopathy). Severe bleeding should not develop unless ulceration occurs. The mortality rate in these patients is quite high because of their serious underlying illnesses. The incidence of bleeding from stress-related gastric mucosal injury has decreased dramatically in recent years, most likely due to better care of critically ill patients. Pharmacologic prophylaxis for bleeding may be considered in the high-risk patients mentioned above. Meta-analyses of randomized trials indicate that PPIs are more effective than H2 receptor antagonists in reduction of overt and clinically important UGIB without differences in mortality or nosocomial pneumonia. OTHER CAUSES Other less frequent causes of UGIB include erosive duodenitis, neoplasms, aortoenteric fistulas, vascular lesions (including hereditary hemorrhagic telangiectasias [Osler-Weber-Rendu] and gastric antral vascular ectasia ["watermelon stomach"]), Dieulafoy's lesion (in which an aberrant vessel in the mucosa bleeds from a pinpoint mucosal defect), prolapse gastropathy (prolapse of proximal stomach into esophagus with retching, especially in alcoholics), and hemobilia or hemosuccus pancreaticus (bleeding from the bile duct or pancreatic duct).
Small-Intestinal Sources of Bleeding
Small-intestinal sources of bleeding (bleeding from sites beyond the reach of the standard upper endoscope) are often difficult to diagnose and are responsible for the majority of cases of obscure GIB. Fortunately, small-intestinal bleeding is uncommon. The most common causes in adults are vascular ectasias, tumors (e.g., GI stromal tumor, carcinoid, adenocarcinoma, lymphoma, metastases), and NSAID-induced erosions and ulcers. Other less common causes in adults include Crohn's disease, infection, ischemia, vasculitis, small-bowel varices, diverticula, Meckel's diverticulum, duplication cysts, and intussusception. Meckel's diverticulum is the most common cause of significant LGIB in children, decreasing in frequency as a cause of bleeding with age. In adults <40–50 years, small-bowel tumors often account for obscure GIB; in patients >50–60 years, vascular ectasias and NSAID-induced lesions are more commonly responsible. Vascular ectasias should be treated with endoscopic therapy if possible. Although estrogen/progesterone compounds have been used for vascular ectasias, a large double-blind trial found no benefit in prevention of recurrent bleeding. Octreotide is also used, based on case series but no randomized trials. A randomized trial reported a significant benefit of thalidomide; this finding awaits further confirmation. Other isolated lesions, such as tumors, are generally treated with surgical resection.

Colonic Sources of Bleeding
Hemorrhoids are probably the most common cause of LGIB; anal fissures also cause minor bleeding and pain. If these local anal processes, which rarely require hospitalization, are excluded, the most common causes of LGIB in adults are diverticula, vascular ectasias (especially in the proximal colon of patients >70 years), neoplasms (primarily adenocarcinoma), colitis (ischemic, infectious, idiopathic inflammatory bowel disease), and postpolypectomy bleeding.
Less common causes include NSAID-induced ulcers or colitis, radiation proctopathy, solitary rectal ulcer syndrome, trauma, varices (most commonly rectal), lymphoid nodular hyperplasia, vasculitis, and aortocolic fistulas. In children and adolescents, the most common colonic causes of significant GIB are inflammatory bowel disease and juvenile polyps. Diverticular bleeding is abrupt in onset, usually painless, sometimes massive, and often from the right colon; chronic or occult bleeding is not characteristic. Clinical reports suggest that bleeding colonic diverticula stop bleeding spontaneously in ~80% of patients and, on long-term follow-up, rebleed in ~15–25% of patients. Case series suggest endoscopic therapy may decrease recurrent bleeding in the uncommon case when colonoscopy identifies the specific bleeding diverticulum. When diverticular bleeding is found at angiography, transcatheter arterial embolization by superselective technique stops bleeding in a majority of patients. If bleeding persists or recurs, segmental surgical resection is indicated. Bleeding from right colonic vascular ectasias in the elderly may be overt or occult; it tends to be chronic and only occasionally is hemodynamically significant. Endoscopic hemostatic therapy may be useful in the treatment of vascular ectasias, as well as discrete bleeding ulcers and postpolypectomy bleeding. Surgical therapy is generally required for major, persistent, or recurrent bleeding from the wide variety of colonic sources of GIB that cannot be treated medically, angiographically, or endoscopically.

APPROACH TO THE PATIENT:
Measurement of the heart rate and blood pressure is the best way to initially assess a patient with GIB. Clinically significant bleeding leads to postural changes in heart rate or blood pressure, tachycardia, and, finally, recumbent hypotension.
In contrast, the hemoglobin does not fall immediately with acute GIB, due to proportionate reductions in plasma and red cell volumes (i.e., "people bleed whole blood"). Thus, hemoglobin may be normal or only minimally decreased at the initial presentation of a severe bleeding episode. As extravascular fluid enters the vascular space to restore volume, the hemoglobin falls, but this process may take up to 72 h. Transfusion is recommended when the hemoglobin drops below 7 g/dL, based on a large randomized trial showing this restrictive transfusion strategy decreases rebleeding and death in acute UGIB compared with a transfusion threshold of 9 g/dL. Patients with slow, chronic GIB may have very low hemoglobin values despite normal blood pressure and heart rate. With the development of iron-deficiency anemia, the mean corpuscular volume will be low and red blood cell distribution width will increase. Hematemesis indicates an upper GI source of bleeding (above the ligament of Treitz). Melena indicates blood has been present in the GI tract for at least 14 h, and as long as 3–5 days. The more proximal the bleeding site, the more likely melena will occur. Hematochezia usually represents a lower GI source of bleeding, although an upper GI lesion may bleed so briskly that blood transits the bowel before melena develops. When hematochezia is the presenting symptom of UGIB, it is associated with hemodynamic instability and dropping hemoglobin. Bleeding lesions of the small bowel may present as melena or hematochezia. Other clues to UGIB include hyperactive bowel sounds and an elevated blood urea nitrogen (due to volume depletion and blood proteins absorbed in the small intestine).

PART 2 Cardinal Manifestations and Presentation of Diseases

A nonbloody nasogastric aspirate may be seen in up to ~18% of patients with UGIB, usually from a duodenal source.
Even a bile-stained appearance does not exclude a bleeding postpyloric lesion because reports of bile in the aspirate are incorrect in ~50% of cases. Testing of aspirates that are not grossly bloody for occult blood is not useful.

EVALUATION AND MANAGEMENT OF UGIB (FIG. 57-1)
At presentation, patients are generally stratified as higher or lower risk for further bleeding and death. Baseline characteristics predictive of rebleeding and death include hemodynamic compromise (tachycardia or hypotension), increasing age, and comorbidities. PPI infusion may be considered at presentation: it decreases high-risk ulcer stigmata (e.g., active bleeding) and need for endoscopic therapy but does not improve clinical outcomes such as further bleeding, surgery, or death. Treatment to improve endoscopic visualization with the promotility agent erythromycin, 250 mg intravenously ~30 min before endoscopy, also may be considered: it provides a small but significant increase in diagnostic yield and decrease in second endoscopies but is not documented to decrease further bleeding or death. Cirrhotic patients presenting with UGIB should be placed on antibiotics (e.g., quinolone, ceftriaxone) and started on a vasoactive medication (octreotide, terlipressin, somatostatin, vapreotide) upon presentation, even before endoscopy. Antibiotics decrease bacterial infections, rebleeding, and mortality in this population, and vasoactive medications appear to improve control of bleeding in the first 12 h after presentation. Upper endoscopy should be performed within 24 h in most patients with UGIB. Patients at higher risk (e.g., hemodynamic instability, cirrhosis) may benefit from more urgent endoscopy within 12 h. Early endoscopy is also beneficial in low-risk patients for management decisions.
Patients with major bleeding and high-risk endoscopic findings (e.g., varices, ulcers with active bleeding or a visible vessel) benefit from endoscopic hemostatic therapy, whereas patients with low-risk lesions (e.g., clean-based ulcers, nonbleeding Mallory-Weiss tears, erosive or hemorrhagic gastropathy) who have stable vital signs and hemoglobin and no other medical problems can be discharged home.

EVALUATION AND MANAGEMENT OF LGIB (FIG. 57-2)
Patients with hematochezia and hemodynamic instability should have upper endoscopy to rule out an upper GI source before evaluation of the lower GI tract. Colonoscopy after an oral lavage solution is the procedure of choice in most patients admitted with LGIB unless bleeding is too massive, in which case angiography is recommended. Sigmoidoscopy is used primarily in patients <40 years old with minor bleeding. In patients with no source identified on colonoscopy, imaging studies may be employed. 99mTc-labeled red cell scan allows repeated imaging for up to 24 h and may identify the general location of bleeding. However, radionuclide scans should be interpreted with caution because results, especially from later images, are highly variable. Multidetector computed tomography (CT) "angiography" is an increasingly used technique that is likely superior to nuclear scintigraphy. In active LGIB, angiography can detect the site of bleeding (extravasation of contrast into the gut) and permits treatment with embolization. Even after bleeding has stopped, angiography may identify lesions with abnormal vasculature, such as vascular ectasias or tumors.
FIGURE 57-1 Suggested algorithm for patients with acute upper gastrointestinal (GI) bleeding. Recommendations on level of care and time of discharge assume patient is stabilized without further bleeding or other concomitant medical problems. ICU, intensive care unit; PPI, proton pump inhibitor. In brief: esophageal varices: ligation plus an IV vasoactive drug (e.g., octreotide), ICU for 1–2 days, ward for 2–3 days. Ulcer with active bleeding or a visible vessel: IV PPI therapy plus endoscopic therapy, ICU for 1 day, ward for 2 days. Ulcer with an adherent clot: IV PPI therapy with or without endoscopic therapy, ward for 2–3 days. Ulcer with a flat, pigmented spot: no IV PPI or endoscopic therapy, ward for 3 days. Clean-based ulcer: no IV PPI or endoscopic therapy, discharge. Mallory-Weiss tear with active bleeding: endoscopic therapy, ward for 1–2 days. Mallory-Weiss tear without active bleeding: no endoscopic therapy, discharge.

FIGURE 57-2 Suggested algorithm for patients with acute lower gastrointestinal (GI) bleeding. In brief: patients with hemodynamic instability undergo upper endoscopy^ followed by colonoscopy†; patients without hemodynamic instability undergo colonoscopy if age ≥40 years or flexible sigmoidoscopy if age <40 years (colonoscopy if iron-deficiency anemia, familial colon cancer, or copious bleeding)*. Angiography is performed for persistent bleeding, with surgery if bleeding still persists, and an obscure bleeding workup follows if no site is identified. *Some suggest colonoscopy for any degree of rectal bleeding in patients <40 years as well. ^If upper GI endoscopy reveals definite source, no further evaluation is needed. †If massive bleeding does not allow time for colonic lavage, proceed to angiography.

Obscure GIB is defined as persistent or recurrent bleeding for which no source has been identified by routine endoscopic and contrast x-ray studies; it may be overt (melena, hematochezia) or occult (iron-deficiency anemia).
Current guidelines suggest angiography as the initial test for massive obscure bleeding, and video capsule endoscopy, which allows examination of the entire small intestine, for all others. Push enteroscopy, usually performed with a pediatric colonoscope to inspect the entire duodenum and proximal jejunum, also may be considered as an initial evaluation. A systematic review of 14 trials comparing push enteroscopy to capsule endoscopy revealed "clinically significant findings" in 26% and 56% of patients, respectively. However, in contrast to enteroscopy, lack of control of the capsule prevents its manipulation and full visualization of the intestine; in addition, tissue cannot be sampled and therapy cannot be applied. If capsule endoscopy is positive, management is dictated by the finding. If capsule endoscopy is negative, current recommendations suggest patients may either be observed or, if their clinical course mandates (e.g., recurrent bleeding, need for transfusions or hospitalization), undergo further testing. "Deep" enteroscopy (e.g., double-balloon, single-balloon, and spiral enteroscopy) is commonly the next test undertaken in patients with clinically important obscure GIB because it allows the endoscopist to examine, obtain specimens from, and provide therapy to much or all of the small intestine. CT and magnetic resonance enterography also are used to examine the small intestine. Other imaging techniques sometimes used in evaluation of obscure GIB include 99mTc-labeled red blood cell scintigraphy, multidetector CT "angiography," angiography, and 99mTc-pertechnetate scintigraphy for Meckel's diverticulum (especially in young patients). If all tests are unrevealing, intraoperative endoscopy is indicated in patients with severe recurrent or persistent bleeding requiring repeated transfusions.
Fecal occult blood testing is recommended only for colorectal cancer screening and may be used beginning at age 50 in average-risk adults and beginning at age 40 in adults with a first-degree relative with colorectal neoplasm at ≥60 years or two second-degree relatives with colorectal cancer. A positive test necessitates colonoscopy. If evaluation of the colon is negative, further workup is not recommended unless iron-deficiency anemia or GI symptoms are present.

Savio John, Daniel S. Pratt

Jaundice, or icterus, is a yellowish discoloration of tissue resulting from the deposition of bilirubin. Tissue deposition of bilirubin occurs only in the presence of serum hyperbilirubinemia and is a sign of either liver disease or, less often, a hemolytic disorder. The degree of serum bilirubin elevation can be estimated by physical examination. Slight increases in serum bilirubin level are best detected by examining the sclerae, which have a particular affinity for bilirubin due to their high elastin content. The presence of scleral icterus indicates a serum bilirubin level of at least 51 μmol/L (3 mg/dL). The ability to detect scleral icterus is made more difficult if the examining room has fluorescent lighting. If the examiner suspects scleral icterus, a second site to examine is underneath the tongue. As serum bilirubin levels rise, the skin will eventually become yellow in light-skinned patients and even green if the process is long-standing; the green color is produced by oxidation of bilirubin to biliverdin. The differential diagnosis for yellowing of the skin is limited. In addition to jaundice, it includes carotenoderma, the use of the drug quinacrine, and excessive exposure to phenols. Carotenoderma is the yellow color imparted to the skin of healthy individuals who ingest excessive amounts of vegetables and fruits that contain carotene, such as carrots, leafy vegetables, squash, peaches, and oranges.
In jaundice the yellow coloration of the skin is uniformly distributed over the body, whereas in carotenoderma the pigment is concentrated on the palms, soles, forehead, and nasolabial folds. Carotenoderma can be distinguished from jaundice by the sparing of the sclerae. Quinacrine causes a yellow discoloration of the skin in 4–37% of patients treated with it. Another sensitive indicator of increased serum bilirubin is darkening of the urine, which is due to the renal excretion of conjugated bilirubin. Patients often describe their urine as tea- or cola-colored. Bilirubinuria indicates an elevation of the direct serum bilirubin fraction and, therefore, the presence of liver disease. Serum bilirubin levels increase when an imbalance exists between bilirubin production and clearance. A logical evaluation of the patient who is jaundiced requires an understanding of bilirubin production and metabolism. (See also Chap. 359) Bilirubin, a tetrapyrrole pigment, is a breakdown product of heme (ferroprotoporphyrin IX). About 70–80% of the 250–300 mg of bilirubin produced each day is derived from the breakdown of hemoglobin in senescent red blood cells. The remainder comes from prematurely destroyed erythroid cells in bone marrow and from the turnover of hemoproteins such as myoglobin and cytochromes found in tissues throughout the body. The formation of bilirubin occurs in reticuloendothelial cells, primarily in the spleen and liver. The first reaction, catalyzed by the microsomal enzyme heme oxygenase, oxidatively cleaves the α bridge of the porphyrin group and opens the heme ring. The end products of this reaction are biliverdin, carbon monoxide, and iron. The second reaction, catalyzed by the cytosolic enzyme biliverdin reductase, reduces the central methylene bridge of biliverdin and converts it to bilirubin.
Bilirubin formed in the reticuloendothelial cells is virtually insoluble in water due to tight internal hydrogen bonding between the water-soluble moieties of bilirubin—i.e., the bonding of the propionic acid carboxyl groups of one dipyrrolic half of the molecule with the imino and lactam groups of the opposite half. This configuration blocks solvent access to the polar residues of bilirubin and places the hydrophobic residues on the outside. To be transported in blood, bilirubin must be solubilized. Solubilization is accomplished by the reversible, noncovalent binding of bilirubin to albumin. Unconjugated bilirubin bound to albumin is transported to the liver. There, the bilirubin—but not the albumin—is taken up by hepatocytes via a process that at least partly involves carrier-mediated membrane transport. No specific bilirubin transporter has yet been identified (Chap. 359, Fig. 359-1). After entering the hepatocyte, unconjugated bilirubin is bound in the cytosol to a number of proteins including proteins in the glutathione-S-transferase superfamily. These proteins serve both to reduce efflux of bilirubin back into the serum and to present the bilirubin for conjugation. In the endoplasmic reticulum, bilirubin is solubilized by conjugation to glucuronic acid, a process that disrupts the internal hydrogen bonds and yields bilirubin monoglucuronide and diglucuronide. The conjugation of glucuronic acid to bilirubin is catalyzed by bilirubin uridine diphosphate-glucuronosyl transferase (UDPGT). The now-hydrophilic bilirubin conjugates diffuse from the endoplasmic reticulum to the canalicular membrane, where bilirubin monoglucuronide and diglucuronide are actively transported into canalicular bile by an energy-dependent mechanism involving the multidrug resistance–associated protein 2 (MRP2). The conjugated bilirubin excreted into bile drains into the duodenum and passes unchanged through the proximal small bowel.
Conjugated bilirubin is not taken up by the intestinal mucosa. When the conjugated bilirubin reaches the distal ileum and colon, it is hydrolyzed to unconjugated bilirubin by bacterial β-glucuronidases. The unconjugated bilirubin is reduced by normal gut bacteria to form a group of colorless tetrapyrroles called urobilinogens. About 80–90% of these products are excreted in feces, either unchanged or oxidized to orange derivatives called urobilins. The remaining 10–20% of the urobilinogens are passively absorbed, enter the portal venous blood, and are re-excreted by the liver. A small fraction (usually <3 mg/dL) escapes hepatic uptake, filters across the renal glomerulus, and is excreted in urine. The terms direct and indirect bilirubin—i.e., conjugated and unconjugated bilirubin, respectively—are based on the original van den Bergh reaction. This assay, or a variation of it, is still used in most clinical chemistry laboratories to determine the serum bilirubin level. In this assay, bilirubin is exposed to diazotized sulfanilic acid and splits into two relatively stable dipyrrylmethene azopigments that absorb maximally at 540 nm, allowing photometric analysis. The direct fraction is that which reacts with diazotized sulfanilic acid in the absence of an accelerator substance such as alcohol. The direct fraction provides an approximation of the conjugated bilirubin level in serum. The total serum bilirubin is the amount that reacts after the addition of alcohol. The indirect fraction is the difference between the total and the direct bilirubin levels and provides an estimate of the unconjugated bilirubin in serum. With the van den Bergh method, the normal serum bilirubin concentration usually is <17 μmol/L (<1 mg/dL). Up to 30%, or 5.1 μmol/L (0.3 mg/dL), of the total may be direct-reacting (conjugated) bilirubin.
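The fraction arithmetic and unit conversions used throughout this discussion can be sketched in a few lines. This is an illustrative helper only, not part of any clinical software; the conversion factor of ~17.1 μmol/L per mg/dL follows from bilirubin's molar mass of ~585 g/mol, and the function names are hypothetical.

```python
# Sketch of the van den Bergh fraction arithmetic described above.
# Conversion: 1 mg/dL of bilirubin is approximately 17.1 umol/L
# (molar mass of bilirubin ~585 g/mol), an assumed standard factor.
MGDL_TO_UMOLL = 17.1

def mgdl_to_umoll(mg_dl: float) -> float:
    """Convert a serum bilirubin value from mg/dL to umol/L."""
    return mg_dl * MGDL_TO_UMOLL

def indirect_fraction(total_mg_dl: float, direct_mg_dl: float) -> float:
    """The indirect (unconjugated) fraction is the difference between
    the total and the direct-reacting bilirubin levels."""
    return total_mg_dl - direct_mg_dl

# Example: total bilirubin 0.9 mg/dL with 0.3 mg/dL direct-reacting
total, direct = 0.9, 0.3
print(round(mgdl_to_umoll(total), 1))            # 15.4 (umol/L)
print(round(indirect_fraction(total, direct), 2))  # 0.6 (mg/dL, unconjugated estimate)
```

The first value matches the upper end of the normal range quoted below (15.4 μmol/L, i.e., 0.9 mg/dL); the second is the estimated unconjugated bilirubin obtained by subtraction, exactly as the van den Bergh method defines the indirect fraction.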
Total serum bilirubin concentrations are between 3.4 and 15.4 μmol/L (0.2 and 0.9 mg/dL) in 95% of a normal population. Several new techniques, although less convenient to perform, have added considerably to our understanding of bilirubin metabolism. First, studies using these methods demonstrate that, in normal persons or those with Gilbert’s syndrome, almost 100% of the serum bilirubin is unconjugated; <3% is monoconjugated bilirubin. Second, in jaundiced patients with hepatobiliary disease, the total serum bilirubin concentration measured by these new, more accurate methods is lower than the values found with diazo methods. This finding suggests that there are diazo-positive compounds distinct from bilirubin in the serum of patients with hepatobiliary disease. Third, these studies indicate that, in jaundiced patients with hepatobiliary disease, monoglucuronides of bilirubin predominate over diglucuronides. Fourth, part of the direct-reacting bilirubin fraction includes conjugated bilirubin that is covalently linked to albumin. This albumin-linked bilirubin fraction (delta fraction, or biliprotein) represents an important fraction of total serum bilirubin in patients with cholestasis and hepatobiliary disorders. The delta fraction (delta bilirubin) is formed in serum when hepatic excretion of bilirubin glucuronides is impaired and the glucuronides accumulate in serum. By virtue of its tight binding to albumin, the clearance rate of delta bilirubin from serum approximates the half-life of albumin (12–14 days) rather than the short half-life of bilirubin (about 4 h). 
The prolonged half-life of albumin-bound conjugated bilirubin explains two previously puzzling observations in jaundiced patients with liver disease: (1) that some patients with conjugated hyperbilirubinemia do not exhibit bilirubinuria during the recovery phase of their disease because the bilirubin is covalently bound to albumin and therefore not filtered by the renal glomeruli, and (2) that the elevated serum bilirubin level declines more slowly than expected in some patients who otherwise appear to be recovering satisfactorily. Late in the recovery phase of hepatobiliary disorders, all the conjugated bilirubin may be in the albumin-linked form. Unconjugated bilirubin is always bound to albumin in the serum, is not filtered by the kidney, and is not found in the urine. Conjugated bilirubin is filtered at the glomerulus, and the majority is reabsorbed by the proximal tubules; a small fraction is excreted in the urine. Any bilirubin found in the urine is conjugated bilirubin. The presence of bilirubinuria implies the presence of liver disease. A urine dipstick test (Ictotest) gives the same information as fractionation of the serum bilirubin and is very accurate. A false-negative result is possible in patients with prolonged cholestasis due to the predominance of delta bilirubin, which is covalently bound to albumin and therefore not filtered by the renal glomeruli.

APPROACH TO THE PATIENT:
The goal of this chapter is not to provide an encyclopedic review of all of the conditions that can cause jaundice. Rather, the chapter is intended to offer a framework that helps a physician to evaluate the patient with jaundice in a logical way (Fig. 58-1).
FIGURE 58-1 Evaluation of the patient with jaundice. ALT, alanine aminotransferase; AMA, antimitochondrial antibody; ANA, antinuclear antibody; AST, aspartate aminotransferase; CMV, cytomegalovirus; EBV, Epstein-Barr virus; LKM, liver-kidney microsomal antibody; MRCP, magnetic resonance cholangiopancreatography; SMA, smooth-muscle antibody; SPEP, serum protein electrophoresis. In brief: the evaluation begins with a history (focus on medication/drug exposure), physical examination, and laboratory tests (bilirubin with fractionation, ALT, AST, alkaline phosphatase, prothrombin time, and albumin). An isolated elevation of the bilirubin is divided into indirect hyperbilirubinemia (direct <15%: drugs [rifampicin, probenecid], inherited disorders [Gilbert's syndrome, Crigler-Najjar syndromes], hemolytic disorders, and ineffective erythropoiesis; see Table 58-1) and direct hyperbilirubinemia (direct >15%: inherited disorders [Dubin-Johnson syndrome, Rotor syndrome]; see Table 58-1). When the bilirubin and other liver tests are elevated, a hepatocellular pattern (ALT/AST elevated out of proportion to alkaline phosphatase; see Table 58-2) prompts (1) viral serologies (hepatitis A IgM, hepatitis B surface antigen and core antibody [IgM], hepatitis C RNA), (2) a toxicology screen (acetaminophen level), (3) ceruloplasmin (if patient <40 years of age), and (4) ANA, SMA, SPEP; if results are negative, additional virologic testing (CMV DNA, EBV capsid antigen, hepatitis D antibody and hepatitis E IgM if indicated) and, if still negative, liver biopsy. A cholestatic pattern (alkaline phosphatase out of proportion to ALT/AST; see Table 58-3) prompts ultrasound: dilated ducts indicate extrahepatic cholestasis (CT/MRCP/ERCP), whereas nondilated ducts indicate intrahepatic cholestasis (serologic testing: AMA; hepatitis serologies for hepatitis A, CMV, EBV; review of drugs [see Table 58-3]), with liver biopsy if AMA is positive and MRCP/liver biopsy if results are negative.

Simply stated, the initial step is to perform appropriate blood tests in order to determine whether the patient has an isolated elevation of serum bilirubin.
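The first branch point of Fig. 58-1, which splits an isolated bilirubin elevation by the size of the direct fraction, can be sketched as a small helper. This is illustrative only: the 15% cutoff is taken from the figure, and the function name is hypothetical, not from any clinical library.

```python
def classify_isolated_hyperbilirubinemia(total_mg_dl: float,
                                         direct_mg_dl: float) -> str:
    """Triage an isolated bilirubin elevation per the Fig. 58-1 branch point:
    a direct fraction <15% of the total suggests indirect (unconjugated)
    hyperbilirubinemia; a larger direct fraction suggests direct
    (conjugated) hyperbilirubinemia."""
    direct_pct = 100.0 * direct_mg_dl / total_mg_dl
    if direct_pct < 15.0:
        # Consider hemolysis, ineffective erythropoiesis, drugs (rifampin,
        # probenecid), and Gilbert's or Crigler-Najjar syndromes (Table 58-1).
        return "indirect hyperbilirubinemia"
    # Consider Dubin-Johnson or Rotor syndrome (Table 58-1).
    return "direct hyperbilirubinemia"

# Example: total 4.0 mg/dL, direct 0.3 mg/dL (direct fraction 7.5%)
print(classify_isolated_hyperbilirubinemia(4.0, 0.3))  # indirect hyperbilirubinemia
```

The helper encodes only the figure's first decision; the subsequent workup (serologies, imaging, biopsy) depends on the full clinical picture described in the text.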
If so, is the bilirubin elevation due to an increased unconjugated or conjugated fraction? If the hyperbilirubinemia is accompanied by other liver test abnormalities, is the disorder hepatocellular or cholestatic? If cholestatic, is it intra- or extrahepatic? All of these questions can be answered with a thoughtful history, physical examination, and interpretation of laboratory and radiologic tests and procedures. The bilirubin present in serum represents a balance between input from the production of bilirubin and hepatic/biliary removal of the pigment. Hyperbilirubinemia may result from (1) overproduction of bilirubin; (2) impaired uptake, conjugation, or excretion of bilirubin; or (3) regurgitation of unconjugated or conjugated bilirubin from damaged hepatocytes or bile ducts. An increase in unconjugated bilirubin in serum results from overproduction, impaired uptake, or conjugation of bilirubin. An increase in conjugated bilirubin is due to decreased excretion into the bile ductules or backward leakage of the pigment. The initial steps in evaluating the patient with jaundice are to determine (1) whether the hyperbilirubinemia is predominantly conjugated or unconjugated in nature and (2) whether other biochemical liver tests are abnormal. The thoughtful interpretation of limited data permits a rational evaluation of the patient (Fig. 58-1). The following discussion will focus solely on the evaluation of the adult patient with jaundice.

ISOLATED ELEVATION OF SERUM BILIRUBIN
Unconjugated Hyperbilirubinemia
The differential diagnosis of isolated unconjugated hyperbilirubinemia is limited (Table 58-1). The critical determination is whether the patient is suffering from a hemolytic process resulting in an overproduction of bilirubin (hemolytic disorders and ineffective erythropoiesis) or from impaired hepatic uptake/conjugation of bilirubin (drug effect or genetic disorders). Hemolytic disorders that cause excessive heme production may be either inherited or acquired.
Inherited disorders include spherocytosis, sickle cell anemia, thalassemia, and deficiency of red cell enzymes such as pyruvate kinase and glucose-6-phosphate dehydrogenase. In these conditions, the serum bilirubin level rarely exceeds 86 μmol/L (5 mg/dL). Higher levels may occur when there is coexistent renal or hepatocellular dysfunction or in acute hemolysis, such as a sickle cell crisis. In evaluating jaundice in patients with chronic hemolysis, it is important to remember the high incidence of pigmented (calcium bilirubinate) gallstones found in these patients, which increases the likelihood of choledocholithiasis as an alternative explanation for hyperbilirubinemia.

TABLE 58-1 Causes of Isolated Hyperbilirubinemia. I. Indirect hyperbilirubinemia: A. Hemolytic disorders, inherited (spherocytosis, elliptocytosis, glucose-6-phosphate dehydrogenase and pyruvate kinase deficiencies) and acquired; B. Ineffective erythropoiesis (cobalamin, folate, and severe iron deficiencies); C. Increased bilirubin production (resorption of hematoma); D. Drugs; E. Inherited conditions. II. Direct hyperbilirubinemia (Dubin-Johnson syndrome; Rotor syndrome).

Acquired hemolytic disorders include microangiopathic hemolytic anemia (e.g., hemolytic-uremic syndrome), paroxysmal nocturnal hemoglobinuria, spur cell anemia, immune hemolysis, and parasitic infections (e.g., malaria and babesiosis). Ineffective erythropoiesis occurs in cobalamin, folate, and iron deficiencies. Resorption of hematomas leads to increased hemoglobin release and overproduction of bilirubin. In the absence of hemolysis, the physician should consider a problem with the hepatic uptake or conjugation of bilirubin. Certain drugs, including rifampin and probenecid, may cause unconjugated hyperbilirubinemia by diminishing hepatic uptake of bilirubin. Impaired bilirubin conjugation occurs in three genetic conditions: Crigler-Najjar syndrome types I and II and Gilbert's syndrome. Crigler-Najjar type I is an exceptionally rare condition characterized by severe unconjugated hyperbilirubinemia (>342 μmol/L [>20 mg/dL]) and neurologic impairment due to kernicterus, frequently leading to death in infancy or childhood.
These patients have a complete absence of bilirubin UDPGT activity, usually due to mutations in the critical 3′ domain of the UDPGT gene; are totally unable to conjugate bilirubin; and hence cannot excrete it. Crigler-Najjar type II is somewhat more common than type I. Patients live into adulthood with serum bilirubin levels of 103–428 μmol/L (6–25 mg/dL). In these patients, mutations in the bilirubin UDPGT gene cause the reduction—but not the complete eradication—of the enzyme's activity. Bilirubin UDPGT activity can be induced by the administration of phenobarbital, which can reduce serum bilirubin levels in these patients. Despite marked jaundice, these patients usually survive into adulthood, although they may be susceptible to kernicterus under the stress of intercurrent illness or surgery. Gilbert's syndrome is also marked by the impaired conjugation of bilirubin (to approximately one-third of normal) due to reduced bilirubin UDPGT activity. Patients with Gilbert's syndrome have mild unconjugated hyperbilirubinemia, with serum levels almost always <103 μmol/L (6 mg/dL). The serum levels may fluctuate, and jaundice is often identified only during periods of fasting. The molecular defect in Gilbert's syndrome is linked to a reduction in transcription of the bilirubin UDPGT gene due to mutations in the promoter and, rarely, in the coding region. Unlike both Crigler-Najjar syndromes, Gilbert's syndrome is very common. The reported incidence is 3–7% of the population, with males predominating over females by a ratio of 2–7:1.

Conjugated Hyperbilirubinemia
Conjugated hyperbilirubinemia is found in two rare inherited conditions: Dubin-Johnson syndrome and Rotor syndrome (Table 58-1). Patients with either condition present with asymptomatic jaundice. The defect in Dubin-Johnson syndrome is the presence of mutations in the gene for MRP2. These patients have altered excretion of bilirubin into the bile ducts.
Rotor syndrome may represent a deficiency of the major hepatic drug uptake transporters OATP1B1 and OATP1B3. Differentiating between these syndromes is possible but is clinically unnecessary due to their benign nature. The remainder of this chapter will focus on the evaluation of patients with conjugated hyperbilirubinemia in the setting of other liver test abnormalities. This group of patients can be divided into those with a primary hepatocellular process and those with intra- or extrahepatic cholestasis. This distinction, which is based on the history and physical examination as well as the pattern of liver test abnormalities, guides the clinician's evaluation (Fig. 58-1).

History
A complete medical history is perhaps the single most important part of the evaluation of the patient with unexplained jaundice. Important considerations include the use of or exposure to any chemical or medication, whether physician-prescribed, over-the-counter, complementary, or alternative medicines (e.g., herbal and vitamin preparations) or other drugs such as anabolic steroids. The patient should be carefully questioned about possible parenteral exposures, including transfusions, intravenous and intranasal drug use, tattooing, and sexual activity. Other important points include recent travel history; exposure to people with jaundice; exposure to possibly contaminated foods; occupational exposure to hepatotoxins; alcohol consumption; the duration of jaundice; and the presence of any accompanying signs and symptoms, such as arthralgias, myalgias, rash, anorexia, weight loss, abdominal pain, fever, pruritus, and changes in the urine and stool. While none of the latter manifestations is specific for any one condition, any of them can suggest a particular diagnosis. A history of arthralgias and myalgias predating jaundice suggests hepatitis, either viral or drug-related.
Jaundice associated with the sudden onset of severe right-upper-quadrant pain and shaking chills suggests choledocholithiasis and ascending cholangitis. Physical Examination The general assessment should include evaluation of the patient’s nutritional status. Temporal and proximal muscle wasting suggests long-standing disease such as pancreatic cancer or cirrhosis. Stigmata of chronic liver disease, including spider nevi, palmar erythema, gynecomastia, caput medusae, Dupuytren’s contractures, parotid gland enlargement, and testicular atrophy, are commonly seen in advanced alcoholic (Laennec’s) cirrhosis and occasionally in other types of cirrhosis. An enlarged left supraclavicular node (Virchow’s node) or a periumbilical nodule (Sister Mary Joseph’s nodule) suggests an abdominal malignancy. Jugular venous distention, a sign of right-sided heart failure, suggests hepatic congestion. Right pleural effusion in the absence of clinically apparent ascites may be seen in advanced cirrhosis. The abdominal examination should focus on the size and consistency of the liver, on whether the spleen is palpable and hence enlarged, and on whether ascites is present. Patients with cirrhosis may have an enlarged left lobe of the liver, which is felt below the xiphoid, and an enlarged spleen. A grossly enlarged nodular liver or an obvious abdominal mass suggests malignancy. An enlarged tender liver could signify viral or alcoholic hepatitis; an infiltrative process such as amyloidosis; or, less often, an acutely congested liver secondary to right-sided heart failure. Severe right-upper-quadrant tenderness with inspiratory arrest (Murphy’s sign) suggests cholecystitis. Ascites in the presence of jaundice suggests either cirrhosis or malignancy with peritoneal spread. Laboratory Tests A battery of tests is helpful in the initial evaluation of a patient with unexplained jaundice.
These include total and direct serum bilirubin measurement with fractionation; determination of serum aminotransferase, alkaline phosphatase, and albumin concentrations; and prothrombin time tests. Enzyme tests (alanine aminotransferase [ALT], aspartate aminotransferase [AST], and alkaline phosphatase [ALP]) are helpful in differentiating between a hepatocellular process and a cholestatic process (Table 358-1; Fig. 58-1)—a critical step in determining what additional workup is indicated. Patients with a hepatocellular process generally have a rise in the aminotransferases that is disproportionate to that in ALP, whereas patients with a cholestatic process have a rise in ALP that is disproportionate to that of the aminotransferases. The serum bilirubin can be prominently elevated in both hepatocellular and cholestatic conditions and therefore is not necessarily helpful in differentiating between the two. In addition to enzyme tests, all jaundiced patients should have additional blood tests—specifically, an albumin level and a prothrombin time—to assess liver function. A low albumin level suggests a chronic process such as cirrhosis or cancer. A normal albumin level is suggestive of a more acute process such as viral hepatitis or choledocholithiasis. An elevated prothrombin time indicates either vitamin K deficiency due to prolonged jaundice and malabsorption of vitamin K or significant hepatocellular dysfunction. The failure of the prothrombin time to correct with parenteral administration of vitamin K indicates severe hepatocellular injury. The results of the bilirubin, enzyme, albumin, and prothrombin time tests will usually indicate whether a jaundiced patient has a hepatocellular or a cholestatic disease and offer some indication of the duration and severity of the disease. The causes and evaluations of hepatocellular and cholestatic diseases are quite different. 
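The "disproportionate rise" rule just described can be made concrete by comparing each enzyme's fold-elevation over its upper limit of normal. The sketch below is illustrative only; the reference limits and the function name are our assumptions, not values from the text, and laboratory-specific reference ranges should always be used in practice:

```python
def liver_injury_pattern(alt_u_l: float, alp_u_l: float,
                         alt_uln: float = 40.0, alp_uln: float = 120.0) -> str:
    """Compare fold-elevations of ALT and ALP over assumed upper limits of normal.

    A rise in the aminotransferases out of proportion to ALP suggests a
    hepatocellular process; the reverse suggests a cholestatic process.
    """
    alt_fold = alt_u_l / alt_uln  # how many multiples of normal
    alp_fold = alp_u_l / alp_uln
    return "hepatocellular" if alt_fold > alp_fold else "cholestatic"
```

For instance, ALT 800 U/L with ALP 150 U/L (20× vs. 1.25× normal) maps to a hepatocellular pattern, whereas ALT 60 U/L with ALP 600 U/L maps to a cholestatic one.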
Hepatocellular Conditions Hepatocellular diseases that can cause jaundice include viral hepatitis, drug or environmental toxicity, alcohol, and end-stage cirrhosis from any cause (Table 58-2). [Table 58-2 lists hepatocellular causes of jaundice, including hepatitis A, B, C, D, and E; predictable, dose-dependent drug toxicity (e.g., acetaminophen); unpredictable, idiosyncratic drug toxicity (e.g., isoniazid); and wild mushrooms—Amanita phalloides, A. verna.] Wilson’s disease occurs primarily in young adults. Autoimmune hepatitis is typically seen in young to middle-aged women but may affect men and women of any age. Alcoholic hepatitis can be differentiated from viral and toxin-related hepatitis by the pattern of the aminotransferases: patients with alcoholic hepatitis typically have an AST-to-ALT ratio of at least 2:1, and the AST level rarely exceeds 300 U/L. Patients with acute viral hepatitis and toxin-related injury severe enough to produce jaundice typically have aminotransferase levels >500 U/L, with the ALT greater than or equal to the AST. While ALT and AST values <8 times normal may be seen in either hepatocellular or cholestatic liver disease, values 25 times normal or higher are seen primarily in acute hepatocellular diseases. Patients with jaundice from cirrhosis can have normal or only slightly elevated aminotransferase levels. When the clinician determines that a patient has a hepatocellular disease, appropriate testing for acute viral hepatitis includes a hepatitis A IgM antibody assay, a hepatitis B surface antigen and core IgM antibody assay, a hepatitis C viral RNA test, and, depending on the circumstances, a hepatitis E IgM antibody assay. Because it can take many weeks for hepatitis C antibody to become detectable, its assay is an unreliable test if acute hepatitis C is suspected. Studies for hepatitis D and E viruses, Epstein-Barr virus (EBV), and cytomegalovirus (CMV) may also be indicated. Ceruloplasmin is the initial screening test for Wilson’s disease. Testing for autoimmune hepatitis includes measurement of specific immunoglobulins.
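The aminotransferase rules stated above (an AST-to-ALT ratio of at least 2:1 with AST rarely above 300 U/L in alcoholic hepatitis; levels above 500 U/L with ALT ≥ AST in acute viral or toxin-related injury) can be expressed directly. This is a teaching sketch of the stated patterns, not a diagnostic tool; the function name and the "indeterminate" fallback are our own:

```python
def aminotransferase_pattern(ast_u_l: float, alt_u_l: float) -> str:
    """Apply the textbook aminotransferase patterns for jaundiced patients."""
    # Alcoholic hepatitis: AST:ALT >= 2:1, AST rarely exceeds 300 U/L
    if alt_u_l > 0 and ast_u_l / alt_u_l >= 2.0 and ast_u_l < 300:
        return "consistent with alcoholic hepatitis"
    # Acute viral or toxin-related injury: levels >500 U/L with ALT >= AST
    if max(ast_u_l, alt_u_l) > 500 and alt_u_l >= ast_u_l:
        return "consistent with acute viral or toxin-related injury"
    return "pattern indeterminate"
```

A patient with AST 250 U/L and ALT 100 U/L fits the alcoholic pattern; AST 800 U/L with ALT 1200 U/L fits the acute viral/toxin pattern.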
Drug-induced hepatocellular injury can be classified as either predictable or unpredictable. Predictable drug reactions are dose-dependent and affect all patients who ingest a toxic dose of the drug in question. The classic example is acetaminophen hepatotoxicity. Unpredictable or idiosyncratic drug reactions are not dose-dependent and occur in a minority of patients. A great number of drugs can cause idiosyncratic hepatic injury. Environmental toxins are also an important cause of hepatocellular injury. Examples include industrial chemicals such as vinyl chloride, herbal preparations containing pyrrolizidine alkaloids (Jamaica bush tea) or Kava Kava, and the mushrooms Amanita phalloides and A. verna, which contain highly hepatotoxic amatoxins. Cholestatic Conditions When the pattern of the liver tests suggests a cholestatic disorder, the next step is to determine whether it is intra- or extrahepatic cholestasis (Fig. 58-1). Distinguishing intrahepatic from extrahepatic cholestasis may be difficult. History, physical examination, and laboratory tests often are not helpful. The next appropriate test is an ultrasound. The ultrasound is inexpensive, does not expose the patient to ionizing radiation, and can detect dilation of the intra- and extrahepatic biliary tree with a high degree of sensitivity and specificity. The absence of biliary dilation suggests intrahepatic cholestasis, while its presence indicates extrahepatic cholestasis. False-negative results occur in patients with partial obstruction of the common bile duct or in patients with cirrhosis or primary sclerosing cholangitis (PSC), in which scarring prevents the intrahepatic ducts from dilating. Although ultrasonography may confirm the presence of extrahepatic cholestasis, it rarely identifies the site or cause of obstruction. The distal common bile duct is a particularly difficult area to visualize by ultrasound because of overlying bowel gas.
Appropriate next tests include CT, magnetic resonance cholangiopancreatography (MRCP), endoscopic retrograde cholangiopancreatography (ERCP), and endoscopic ultrasound (EUS). CT scanning and MRCP are better than ultrasonography for assessing the head of the pancreas and for identifying choledocholithiasis in the distal common bile duct, particularly when the ducts are not dilated. ERCP is the “gold standard” for identifying choledocholithiasis. Beyond its diagnostic capabilities, ERCP allows therapeutic interventions, including the removal of common bile duct stones and the placement of stents. MRCP has replaced ERCP as the initial diagnostic test in cases where the need for intervention is thought to be small. EUS displays sensitivity and specificity comparable to those of MRCP in the detection of bile duct obstruction. EUS also allows biopsy of suspected malignant lesions, but it is invasive and requires sedation. In patients with apparent intrahepatic cholestasis, the diagnosis is often made by serologic testing in combination with percutaneous liver biopsy. The list of possible causes of intrahepatic cholestasis is long and varied (Table 58-3). A number of conditions that typically cause a hepatocellular pattern of injury can also present as a cholestatic variant. Both hepatitis B and C viruses can cause cholestatic hepatitis (fibrosing cholestatic hepatitis). This disease variant has been reported in patients who have undergone solid organ transplantation. Hepatitis A and E, alcoholic hepatitis, and EBV or CMV infections may also present as cholestatic liver disease. Drugs may cause intrahepatic cholestasis that is usually reversible after discontinuation of the offending agent, although it may take many months for cholestasis to resolve. The drugs most commonly associated with cholestasis are the anabolic and contraceptive steroids. Cholestatic hepatitis has been reported with chlorpromazine, imipramine, tolbutamide, sulindac, cimetidine, and erythromycin estolate.
It also occurs in patients taking trimethoprim-sulfamethoxazole and penicillin-based antibiotics such as ampicillin, dicloxacillin, and clavulanic acid. Rarely, cholestasis may be chronic and associated with progressive fibrosis despite early discontinuation of the offending drug. Chronic cholestasis has been associated with chlorpromazine and prochlorperazine. Primary biliary cirrhosis is an autoimmune disease predominantly affecting middle-aged women and characterized by progressive destruction of interlobular bile ducts. The diagnosis is made by the detection of antimitochondrial antibody, which is found in 95% of patients. Primary sclerosing cholangitis is characterized by the destruction and fibrosis of larger bile ducts. The diagnosis of PSC is made with cholangiography (either MRCP or ERCP), which demonstrates the pathognomonic segmental strictures. Approximately 75% of patients with PSC have inflammatory bowel disease. The vanishing bile duct syndrome and adult bile ductopenia are rare conditions in which a decreased number of bile ducts is seen in liver biopsy specimens. The histologic picture is similar to that in primary biliary cirrhosis. This picture is seen in patients who develop chronic rejection after liver transplantation and in those who develop graft-versus-host disease after bone marrow transplantation. Vanishing bile duct syndrome also occurs in rare cases of sarcoidosis, in patients taking certain drugs (including chlorpromazine), and idiopathically. There are also familial forms of intrahepatic cholestasis. The familial intrahepatic cholestatic syndromes include progressive familial intrahepatic cholestasis (PFIC) types 1–3 and benign recurrent cholestasis (BRC). PFIC1 and BRC are autosomal recessive diseases that result from mutations in the ATP8B1 gene, which encodes a protein belonging to the subfamily of P-type ATPases; the exact function of this protein remains poorly defined.
While PFIC1 is a progressive condition that manifests in childhood, BRC presents later and is marked by recurrent episodes of jaundice and pruritus; the episodes are self-limited but can be debilitating. PFIC2 is caused by mutations in the ABCB11 gene, which encodes the bile salt export pump, and PFIC3 is caused by mutations in the gene encoding multidrug resistance P-glycoprotein 3 (MDR3). Cholestasis of pregnancy occurs in the second and third trimesters and resolves after delivery. Its cause is unknown, but the condition is probably inherited, and cholestasis can be triggered by estrogen administration.

[Table 58-3 Causes of cholestasis. I. Intrahepatic: viral hepatitis (including hepatitis A, Epstein-Barr virus infection, cytomegalovirus infection); alcoholic hepatitis; drug toxicity (including cholestatic hepatitis—chlorpromazine, erythromycin estolate); primary biliary cirrhosis; primary sclerosing cholangitis; vanishing bile duct syndrome (including chronic rejection of liver transplants); congestive hepatopathy and ischemic hepatitis; inherited conditions; cholestasis of pregnancy; total parenteral nutrition; nonhepatobiliary sepsis; benign postoperative cholestasis; paraneoplastic syndrome; veno-occlusive disease; graft-versus-host disease; infiltrative disease; infections. II. Extrahepatic: malignant causes, including malignant involvement of the porta hepatis lymph nodes; benign causes.]

Other causes of intrahepatic cholestasis include total parenteral nutrition (TPN); nonhepatobiliary sepsis; benign postoperative cholestasis; and a paraneoplastic syndrome associated with a number of different malignancies, including Hodgkin’s disease, medullary thyroid cancer, renal cell cancer, renal sarcoma, T cell lymphoma, prostate cancer, and several gastrointestinal malignancies. The term Stauffer’s syndrome has been used for intrahepatic cholestasis specifically associated with renal cell cancer.
In patients developing cholestasis in the intensive care unit, the major considerations should be sepsis, ischemic hepatitis (“shock liver”), and TPN jaundice. Jaundice occurring after bone marrow transplantation is most likely due to veno-occlusive disease or graft-versus-host disease. In addition to hemolysis, sickle cell disease may cause intrahepatic and extrahepatic cholestasis. Jaundice is a late finding in heart failure caused by hepatic congestion and hepatocellular hypoxia. Ischemic hepatitis is a distinct entity of acute hypoperfusion characterized by an acute and dramatic elevation in the serum aminotransferases followed by a gradual peak in serum bilirubin. Jaundice with associated liver dysfunction can be seen in severe cases of Plasmodium falciparum malaria. The jaundice in these cases is due to a combination of indirect hyperbilirubinemia from hemolysis and both cholestatic and hepatocellular jaundice. Weil’s disease, a severe presentation of leptospirosis, is marked by jaundice with renal failure, fever, headache, and muscle pain. Causes of extrahepatic cholestasis can be split into malignant and benign (Table 58-3). Malignant causes include pancreatic, gallbladder, and ampullary cancers as well as cholangiocarcinoma. This last malignancy is most commonly associated with PSC and is exceptionally difficult to diagnose because its appearance is often identical to that of PSC. Pancreatic and gallbladder tumors as well as cholangiocarcinoma are rarely resectable and have poor prognoses. Ampullary carcinoma has the highest surgical cure rate of all the tumors that present as painless jaundice. Hilar lymphadenopathy due to metastases from other cancers may cause obstruction of the extrahepatic biliary tree. Choledocholithiasis is the most common cause of extrahepatic cholestasis. 
The clinical presentation can range from mild right-upper-quadrant discomfort with only minimal elevations of enzyme test values to ascending cholangitis with jaundice, sepsis, and circulatory collapse. PSC may occur with clinically important strictures limited to the extrahepatic biliary tree. IgG4-associated cholangitis is marked by stricturing of the biliary tree. It is critical that the clinician differentiate this condition from PSC, as it is responsive to glucocorticoid therapy. In rare instances, chronic pancreatitis causes strictures of the distal common bile duct, where it passes through the head of the pancreas. AIDS cholangiopathy is a condition that is usually due to infection of the bile duct epithelium with CMV or cryptosporidia and has a cholangiographic appearance similar to that of PSC. The affected patients usually present with greatly elevated serum alkaline phosphatase levels (mean, 800 IU/L), but the bilirubin level is often near normal. These patients do not typically present with jaundice. GLOBAL CONSIDERATIONS While extrahepatic biliary obstruction and drugs are common causes of new-onset jaundice in developed countries, infections remain the leading cause in developing countries. Liver involvement and jaundice are observed with numerous infections, particularly malaria, babesiosis, severe leptospirosis, infections due to Mycobacterium tuberculosis and the Mycobacterium avium complex, typhoid fever, viral hepatitis secondary to infection with hepatitis viruses A–E, EBV and CMV infections, late phases of yellow fever, dengue hemorrhagic fever, schistosomiasis, fascioliasis, clonorchiasis, opisthorchiasis, ascariasis, echinococcosis, hepatosplenic candidiasis, disseminated histoplasmosis, cryptococcosis, coccidioidomycosis, ehrlichiosis, chronic Q fever, yersiniosis, brucellosis, syphilis, and leprosy. Bacterial infections that do not necessarily involve the liver and bile ducts may also lead to jaundice, as in cholestasis of sepsis.
This chapter is a revised version of chapters that have appeared in prior editions of Harrison’s in which Marshall M. Kaplan was a co-author together with Daniel Pratt.

Kathleen E. Corey, Lawrence S. Friedman

Abdominal swelling is a manifestation of numerous diseases. Patients may complain of bloating or abdominal fullness and may note increasing abdominal girth on the basis of increased clothing or belt size. Abdominal discomfort is often reported, but pain is less frequent. When abdominal pain does accompany swelling, it is frequently the result of an intraabdominal infection, peritonitis, or pancreatitis. Patients with abdominal distention from ascites (fluid in the abdomen) may report the new onset of an inguinal or umbilical hernia. Dyspnea may result from pressure against the diaphragm and the inability to expand the lungs fully. The causes of abdominal swelling can be remembered conveniently as the six Fs: flatus, fat, fluid, fetus, feces, or a “fatal growth” (often a neoplasm). Flatus Abdominal swelling may be the result of increased intestinal gas. The normal small intestine contains approximately 200 mL of gas made up of nitrogen, oxygen, carbon dioxide, hydrogen, and methane. Nitrogen and oxygen are swallowed, whereas carbon dioxide, hydrogen, and methane are produced intraluminally by bacterial fermentation. Increased intestinal gas can occur in a number of conditions. Aerophagia, the swallowing of air, can result in increased amounts of oxygen and nitrogen in the small intestine and lead to abdominal swelling. Aerophagia typically results from gulping food, chewing gum, or smoking, or occurs as a response to anxiety, which can lead to repetitive belching. In some cases, increased intestinal gas is the consequence of bacterial metabolism of excess fermentable substances such as lactose and other oligosaccharides, which can lead to production of hydrogen, carbon dioxide, or methane.
In many cases, the precise cause of abdominal distention cannot be determined. In some persons, particularly those with irritable bowel syndrome and bloating, the subjective sense of abdominal pressure is attributable to impaired intestinal transit of gas rather than increased gas volume. Abdominal distention—an objective increase in girth—is the result of a lack of coordination between diaphragmatic contraction and anterior abdominal wall relaxation, a response in some cases to an increase in intraabdominal volume loads. Occasionally, increased lumbar lordosis accounts for apparent abdominal distention. Fat Weight gain with an increase in abdominal fat can result in an increase in abdominal girth and can be perceived as abdominal swelling. Abdominal fat may be caused by an imbalance between caloric intake and energy expenditure associated with a poor diet and sedentary lifestyle; it also can be a manifestation of certain diseases, such as Cushing’s syndrome. Excess abdominal fat has been associated with an increased risk of insulin resistance and cardiovascular disease. Fluid The accumulation of fluid within the abdominal cavity (ascites) often results in abdominal distention and is discussed in detail below. Fetus Pregnancy results in increased abdominal girth. Typically, an increase in abdominal size is first noted at 12–14 weeks of gestation, when the uterus moves from the pelvis into the abdomen. Abdominal distention may be seen before this point as a result of fluid retention and relaxation of the abdominal muscles. Feces In the setting of severe constipation or intestinal obstruction, increased stool in the colon leads to increased abdominal girth. These conditions are often accompanied by abdominal discomfort or pain, nausea, and vomiting and can be diagnosed by imaging studies. Fatal growth An abdominal mass can result in abdominal swelling. 
Enlargement of the intraabdominal organs, specifically the liver (hepatomegaly) or spleen (splenomegaly), or an abdominal aortic aneurysm can result in abdominal distention. Bladder distention also may result in abdominal swelling. In addition, malignancies, abscesses, or cysts can grow to sizes that lead to increased abdominal girth.

APPROACH TO THE PATIENT:
Determining the etiology of abdominal swelling begins with history-taking and a physical examination. Patients should be questioned regarding symptoms suggestive of malignancy, including weight loss, night sweats, and anorexia. Inability to pass stool or flatus together with nausea or vomiting suggests bowel obstruction, severe constipation, or an ileus (lack of peristalsis). Increased eructation and flatus may point toward aerophagia or increased intestinal production of gas. Patients should be questioned about risk factors for or symptoms of chronic liver disease, including excessive alcohol use and jaundice, which suggest ascites. Patients should also be asked about other symptoms of medical conditions, including heart failure and tuberculosis, which may cause ascites. Physical examination should include an assessment for signs of systemic disease. The presence of lymphadenopathy, especially supraclavicular lymphadenopathy (Virchow’s node), suggests metastatic abdominal malignancy. Care should be taken during the cardiac examination to evaluate for elevation of jugular venous pressure (JVP); Kussmaul’s sign (elevation of the JVP during inspiration); a pericardial knock, which may be seen in heart failure or constrictive pericarditis; or a murmur of tricuspid regurgitation. Spider angiomas, palmar erythema, dilated superficial veins around the umbilicus (caput medusae), and gynecomastia suggest chronic liver disease. The abdominal examination should begin with inspection for the presence of uneven distention or an obvious mass.
Auscultation should follow. The absence of bowel sounds or the presence of high-pitched localized bowel sounds points toward an ileus or intestinal obstruction. An umbilical venous hum may suggest the presence of portal hypertension, and a harsh bruit over the liver is heard rarely in patients with hepatocellular carcinoma or alcoholic hepatitis. Abdominal swelling caused by intestinal gas can be differentiated from swelling caused by fluid or a solid mass by percussion; an abdomen filled with gas is tympanic, whereas an abdomen containing a mass or fluid is dull to percussion. The absence of abdominal dullness, however, does not exclude ascites, because a minimum of 1500 mL of ascitic fluid is required for detection on physical examination. Finally, the abdomen should be palpated to assess for tenderness, a mass, enlargement of the spleen or liver, or presence of a nodular liver suggesting cirrhosis or tumor. Light palpation of the liver may detect pulsations suggesting retrograde vascular flow from the heart in patients with right-sided heart failure, particularly tricuspid regurgitation. Abdominal x-rays can be used to detect dilated loops of bowel suggesting intestinal obstruction or ileus. Abdominal ultrasonography can detect as little as 100 mL of ascitic fluid, hepatosplenomegaly, a nodular liver, or a mass. Ultrasonography is often inadequate to detect retroperitoneal lymphadenopathy or a pancreatic lesion because of overlying bowel gas. If malignancy or pancreatic disease is suspected, CT can be performed. CT may also detect changes associated with advanced cirrhosis and portal hypertension (Fig. 59-1).
Laboratory evaluation should include liver biochemical testing, serum albumin level measurement, and prothrombin time determination (international normalized ratio) to assess hepatic function as well as a complete blood count to evaluate for the presence of cytopenias that may result from portal hypertension or of leukocytosis, anemia, and thrombocytosis that may result from systemic infection. Serum amylase and lipase levels should be checked to evaluate the patient for acute pancreatitis. Urinary protein quantitation is indicated when nephrotic syndrome, which may cause ascites, is suspected.

FIGURE 59-1 CT of a patient with a cirrhotic, nodular liver (white arrow), splenomegaly (yellow arrow), and ascites (arrowheads).

In selected cases, the hepatic venous pressure gradient (pressure across the liver between the portal and hepatic veins) can be measured via cannulation of the hepatic vein to confirm that ascites is caused by cirrhosis (Chap. 365). In some cases, a liver biopsy may be necessary to confirm cirrhosis. Ascites in patients with cirrhosis is the result of portal hypertension and renal salt and water retention. Similar mechanisms contribute to ascites formation in heart failure. Portal hypertension signifies elevation of the pressure within the portal vein. According to Ohm’s law, pressure is the product of resistance and flow. Increased hepatic resistance occurs by several mechanisms. First, the development of hepatic fibrosis, which defines cirrhosis, disrupts the normal architecture of the hepatic sinusoids and impedes normal blood flow through the liver. Second, activation of hepatic stellate cells, which mediate fibrogenesis, leads to smooth-muscle contraction and fibrosis. Finally, cirrhosis is associated with a decrease in endothelial nitric oxide synthase (eNOS) production, which results in decreased nitric oxide production and increased intrahepatic vasoconstriction.
The development of cirrhosis is also associated with increased systemic circulating levels of nitric oxide (contrary to the decrease seen intrahepatically) as well as increased levels of vascular endothelial growth factor and tumor necrosis factor that result in splanchnic arterial vasodilation. Vasodilation of the splanchnic circulation results in pooling of blood and a decrease in the effective circulating volume, which is perceived by the kidneys as hypovolemia. Compensatory vasoconstriction via release of antidiuretic hormone ensues; the consequences are free water retention and activation of the sympathetic nervous system and the renin-angiotensin-aldosterone system, which lead in turn to renal sodium and water retention. Ascites in the absence of cirrhosis generally results from peritoneal carcinomatosis, peritoneal infection, or pancreatic disease. Peritoneal carcinomatosis can result from primary peritoneal malignancies such as mesothelioma or sarcoma, abdominal malignancies such as gastric or colonic adenocarcinoma, or metastatic disease from breast or lung carcinoma or melanoma (Fig. 59-2). The tumor cells lining the peritoneum produce a protein-rich fluid that contributes to the development of ascites. Fluid from the extracellular space is drawn into the peritoneum, further contributing to the development of ascites. Tuberculous peritonitis causes ascites via a similar mechanism; tubercles deposited on the peritoneum exude a proteinaceous fluid. Pancreatic ascites results from leakage of pancreatic enzymes into the peritoneum.

FIGURE 59-2 CT of a patient with peritoneal carcinomatosis (white arrow) and ascites (yellow arrow).

Cirrhosis accounts for 84% of cases of ascites. Cardiac ascites, peritoneal carcinomatosis, and “mixed” ascites resulting from cirrhosis and a second disease account for 10–15% of cases.
Less common causes of ascites include massive hepatic metastasis, infection (tuberculosis, Chlamydia infection), pancreatitis, and renal disease (nephrotic syndrome). Rare causes of ascites include hypothyroidism and familial Mediterranean fever. Once the presence of ascites has been confirmed, the etiology of the ascites is best determined by paracentesis, a bedside procedure in which a needle or small catheter is passed transcutaneously to extract ascitic fluid from the peritoneum. The lower quadrants are the most frequent sites for paracentesis. The left lower quadrant is preferred because of the greater depth of ascites and the thinner abdominal wall. Paracentesis is a safe procedure even in patients with coagulopathy; complications, including abdominal wall hematomas, hypotension, hepatorenal syndrome, and infection, are infrequent. Once ascitic fluid has been extracted, its gross appearance should be examined. Turbid fluid can result from the presence of infection or tumor cells. White, milky fluid indicates a triglyceride level >200 mg/dL (and often >1000 mg/dL), which is the hallmark of chylous ascites. Chylous ascites results from lymphatic disruption that may occur with trauma, cirrhosis, tumor, tuberculosis, or certain congenital abnormalities. Dark brown fluid can reflect a high bilirubin concentration and indicates biliary tract perforation. Black fluid may indicate the presence of pancreatic necrosis or metastatic melanoma. The ascitic fluid should be sent for measurement of albumin and total protein levels, cell and differential counts, and, if infection is suspected, Gram’s stain and culture, with inoculation into blood culture bottles at the patient’s bedside to maximize the yield. A serum albumin level should be measured simultaneously to permit calculation of the serum-ascites albumin gradient (SAAG). The SAAG is useful for distinguishing ascites caused by portal hypertension from nonportal hypertensive ascites (Fig. 59-3).
The SAAG reflects the pressure within the hepatic sinusoids and correlates with the hepatic venous pressure gradient. The SAAG is calculated by subtracting the ascitic albumin concentration from the serum albumin level and does not change with diuresis. A SAAG ≥1.1 g/dL reflects the presence of portal hypertension and indicates that the ascites is due to increased pressure in the hepatic sinusoids. According to Starling’s law, a high SAAG reflects the oncotic pressure that counterbalances the portal pressure. Possible causes include cirrhosis, cardiac ascites, hepatic vein thrombosis (Budd-Chiari syndrome), sinusoidal obstruction syndrome (veno-occlusive disease), or massive liver metastases. A SAAG <1.1 g/dL indicates that the ascites is not related to portal hypertension, as in tuberculous peritonitis, peritoneal carcinomatosis, or pancreatic ascites. For high-SAAG (≥1.1) ascites, the ascitic protein level can provide further clues to the etiology (Fig. 59-3). An ascitic protein level of ≥2.5 g/dL indicates that the hepatic sinusoids are normal and are allowing passage of protein into the ascites, as occurs in cardiac ascites, early Budd-Chiari syndrome, or sinusoidal obstruction syndrome. An ascitic protein level <2.5 g/dL indicates that the hepatic sinusoids have been damaged and scarred and no longer allow passage of protein, as occurs with cirrhosis, late Budd-Chiari syndrome, or massive liver metastases. Pro-brain-type natriuretic peptide (BNP) is a natriuretic hormone released by the heart as a result of increased volume and ventricular wall stretch. High levels of BNP in serum occur in heart failure and may be useful in identifying heart failure as the cause of high-SAAG ascites. Further tests are indicated only in specific clinical circumstances. 
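The SAAG-plus-ascitic-protein algorithm just described (and diagrammed in Fig. 59-3) can be summarized in a few lines. The 1.1 g/dL and 2.5 g/dL cutoffs are those given in the text; the function names and the example category strings are our own illustrative choices:

```python
def saag(serum_albumin_g_dl: float, ascitic_albumin_g_dl: float) -> float:
    """Serum-ascites albumin gradient: serum albumin minus ascitic albumin (g/dL)."""
    return serum_albumin_g_dl - ascitic_albumin_g_dl

def classify_ascites(serum_albumin_g_dl: float, ascitic_albumin_g_dl: float,
                     ascitic_protein_g_dl: float) -> str:
    gradient = saag(serum_albumin_g_dl, ascitic_albumin_g_dl)
    if gradient >= 1.1:  # portal hypertensive ascites
        if ascitic_protein_g_dl >= 2.5:
            # Sinusoids normal and still passing protein into the ascites
            return "portal hypertension with intact sinusoids (e.g., cardiac ascites, early Budd-Chiari)"
        # Sinusoids damaged and scarred, no longer passing protein
        return "portal hypertension with damaged sinusoids (e.g., cirrhosis, late Budd-Chiari, massive metastases)"
    return "non-portal-hypertensive (e.g., tuberculous peritonitis, carcinomatosis, pancreatic ascites)"
```

For example, a serum albumin of 3.5 g/dL with an ascitic albumin of 1.0 g/dL gives a SAAG of 2.5 g/dL; with a low ascitic protein, this points toward cirrhosis.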
Figure 59-3 Algorithm for the diagnosis of ascites according to the serum-ascites albumin gradient (SAAG). IVC, inferior vena cava. [SAAG ≥1.1 g/dL with ascitic protein <2.5 g/dL: cirrhosis, late Budd-Chiari syndrome, massive liver metastases. SAAG ≥1.1 g/dL with ascitic protein ≥2.5 g/dL: heart failure/constrictive pericarditis, early Budd-Chiari syndrome, IVC obstruction, sinusoidal obstruction syndrome. SAAG <1.1 g/dL: biliary leak, nephrotic syndrome, pancreatitis, peritoneal carcinomatosis, tuberculosis.]

When secondary peritonitis resulting from a perforated hollow viscus is suspected, ascitic glucose and lactate dehydrogenase (LDH) levels can be measured. In contrast to “spontaneous” bacterial peritonitis, which may complicate cirrhotic ascites (see “Complications,” below), secondary peritonitis is suggested by an ascitic glucose level <50 mg/dL, an ascitic LDH level higher than the serum LDH level, and the detection of multiple pathogens on ascitic fluid culture. When pancreatic ascites is suspected, the ascitic amylase level should be measured and is typically >1000 U/L. Cytology can be useful in the diagnosis of peritoneal carcinomatosis. At least 50 mL of fluid should be obtained and sent for immediate processing. Tuberculous peritonitis is typically associated with ascitic fluid lymphocytosis but can be difficult to diagnose by paracentesis. A smear for acid-fast bacilli has a diagnostic sensitivity of only 0 to 3%; a culture increases the sensitivity to 35–50%. In patients without cirrhosis, an elevated ascitic adenosine deaminase level has a sensitivity of >90% when a cut-off value of 30–45 U/L is used. When the cause of ascites remains uncertain, laparotomy or laparoscopy with peritoneal biopsies for histology and culture remains the gold standard. The initial treatment for cirrhotic ascites is restriction of sodium intake to 2 g/d. When sodium restriction alone is inadequate to control ascites, oral diuretics—typically the combination of spironolactone and furosemide—are used. 
Spironolactone is an aldosterone antagonist that inhibits sodium resorption in the distal convoluted tubule of the kidney. Use of spironolactone may be limited by hyponatremia, hyperkalemia, and painful gynecomastia. If the gynecomastia is distressing, amiloride (5–40 mg/d) may be substituted for spironolactone. Furosemide is a loop diuretic that is generally combined with spironolactone in a ratio of 40:100; maximal daily doses of spironolactone and furosemide are 400 mg and 160 mg, respectively. Refractory cirrhotic ascites is defined by the persistence of ascites despite sodium restriction and maximal (or maximally tolerated) diuretic use. Pharmacologic therapy for refractory ascites includes the addition of midodrine, an α1-adrenergic agonist, or clonidine, an α2-adrenergic agonist, to diuretic therapy. These agents act as vasoconstrictors, counteracting splanchnic vasodilation. Midodrine alone or in combination with clonidine improves systemic hemodynamics and control of ascites over that obtained with diuretics alone. Although β-adrenergic blocking agents (beta blockers) are often prescribed to prevent variceal hemorrhage in patients with cirrhosis, the use of beta blockers in patients with refractory ascites is associated with decreased survival rates. When medical therapy alone is insufficient, refractory ascites can be managed by repeated large-volume paracentesis (LVP) or a transjugular intrahepatic portosystemic shunt (TIPS)—a radiologically placed shunt that decompresses the hepatic sinusoids. Intravenous infusion of albumin accompanying LVP decreases the risk of “post-paracentesis circulatory dysfunction” and death. Patients undergoing LVP should receive IV albumin infusions of 6–8 g/L of ascitic fluid removed. 
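The 40:100 furosemide-to-spironolactone ratio, the maximal daily doses, and the 6–8 g of albumin per liter removed at LVP are simple arithmetic; a hedged sketch (helper names are invented, and the 7 g/L default is an assumption taken as the midpoint of the text's 6–8 g/L range):

```python
def diuretic_doses(spironolactone_mg):
    """Pair a spironolactone dose with furosemide at the 40:100 ratio,
    capped at the maximal daily doses in the text (400 mg / 160 mg).
    Illustrative arithmetic only, not dosing advice."""
    spiro = min(spironolactone_mg, 400)
    furo = min(round(spiro * 40 / 100), 160)
    return spiro, furo

def albumin_after_lvp(liters_removed, g_per_liter=7):
    """Grams of IV albumin after large-volume paracentesis, using the
    text's 6-8 g/L range (default 7 g/L is the midpoint, an assumption)."""
    return liters_removed * g_per_liter

print(diuretic_doses(100))   # (100, 40)
print(diuretic_doses(600))   # capped at (400, 160)
print(albumin_after_lvp(8))  # 56 g for an 8-L paracentesis
```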
TIPS placement is superior to LVP in reducing the reaccumulation of ascites but is associated with an increased frequency of hepatic encephalopathy, with no difference in mortality rates. Malignant ascites does not respond to sodium restriction or diuretics. Patients must undergo serial LVPs, transcutaneous drainage catheter placement, or, rarely, creation of a peritoneovenous shunt (a shunt from the abdominal cavity to the vena cava). Ascites caused by tuberculous peritonitis is treated with standard antituberculosis therapy. Noncirrhotic ascites of other causes is treated by correction of the precipitating condition. Spontaneous bacterial peritonitis (SBP; Chap. 159) is a common and potentially lethal complication of cirrhotic ascites. Occasionally, SBP also complicates ascites caused by nephrotic syndrome, heart failure, acute hepatitis, and acute liver failure but is rare in malignant ascites. Patients with SBP generally note an increase in abdominal girth; however, abdominal tenderness is found in only 40% of patients, and rebound tenderness is uncommon. Patients may present with fever, nausea, vomiting, or the new onset of or exacerbation of preexisting hepatic encephalopathy. SBP is defined by a polymorphonuclear neutrophil (PMN) count of ≥250/μL in the ascitic fluid. Cultures of ascitic fluid typically reveal one bacterial pathogen. The presence of multiple pathogens in the setting of an elevated ascitic PMN count suggests secondary peritonitis from a ruptured viscus or abscess (Chap. 159). The presence of multiple pathogens without an elevated PMN count suggests bowel perforation from the paracentesis needle. SBP is generally the result of enteric bacteria that have translocated across an edematous bowel wall. The most common pathogens are gram-negative rods, including Escherichia coli and Klebsiella, as well as streptococci and enterococci. 
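The interpretive rules in this paragraph (a PMN count ≥250/μL defining SBP, and the significance of single versus multiple organisms on culture) can be expressed as a small decision helper. This is a teaching sketch of the text's criteria, not a diagnostic algorithm; real interpretation is clinical:

```python
def interpret_ascitic_fluid(pmn_per_ul, n_pathogens):
    """Apply the text's interpretive rules for cirrhotic ascitic fluid.

    pmn_per_ul: polymorphonuclear neutrophil count per microliter.
    n_pathogens: number of organisms grown on culture.
    Teaching sketch only.
    """
    if pmn_per_ul >= 250:
        if n_pathogens > 1:
            return "elevated PMNs + multiple pathogens: suggests secondary peritonitis"
        return "elevated PMNs: spontaneous bacterial peritonitis (SBP)"
    if n_pathogens > 1:
        return "multiple pathogens without elevated PMNs: suggests needle perforation of bowel"
    return "criteria for peritonitis not met"

print(interpret_ascitic_fluid(400, 1))
# -> elevated PMNs: spontaneous bacterial peritonitis (SBP)
```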
Treatment of SBP with an antibiotic such as IV cefotaxime is effective against gram-negative and gram-positive aerobes. A 5-day course of treatment is sufficient if the patient improves clinically. Nosocomial or health care–acquired SBP is frequently caused by multidrug-resistant bacteria, and initial antibiotic therapy should be guided by the local bacterial epidemiology. Cirrhotic patients with a history of SBP, an ascitic fluid total protein concentration <1 g/dL, or active gastrointestinal bleeding should receive prophylactic antibiotics to prevent SBP; oral daily norfloxacin is commonly used. Diuresis increases the activity of ascitic fluid protein opsonins and may decrease the risk of SBP. Hepatic hydrothorax occurs when ascites, often caused by cirrhosis, migrates via fenestrae in the diaphragm into the pleural space. This condition can result in shortness of breath, hypoxia, and infection. Treatment is similar to that for cirrhotic ascites and includes sodium restriction, diuretics, and, if needed, thoracentesis or TIPS placement. Chest tube placement should be avoided.

Chapter 60e Dysuria, Bladder Pain, and the Interstitial Cystitis/Bladder Pain Syndrome
John W. Warren

Dysuria and bladder pain are two symptoms that commonly call attention to the lower urinary tract. Dysuria, or pain that occurs during urination, is commonly perceived as burning or stinging in the urethra and is a symptom of several syndromes. The presence or absence of other symptoms is often helpful in distinguishing among these conditions. Some of these syndromes differ in men and women. Approximately 50% of women experience dysuria at some time in their lives; ∼20% report having had dysuria within the past year. Most dysuria syndromes in women can be categorized into two broad groups: bacterial cystitis and lower genital tract infections. Bacterial cystitis is usually caused by Escherichia coli; a few other gram-negative rods and Staphylococcus saprophyticus can also be responsible. 
Bacterial cystitis is acute in onset and manifests not only as dysuria but also as urinary frequency, urinary urgency, suprapubic pain, and/or hematuria. The lower genital tract infections include vaginitis, urethritis, and ulcerative lesions; many of these infections are caused by sexually transmitted organisms and should be considered particularly in young women who have new or multiple sexual partners or whose partner(s) do not use condoms. The onset of dysuria associated with these syndromes is more gradual than in bacterial cystitis and is thought (but not proven) to result from the flow of urine over damaged epithelium. Frequency, urgency, suprapubic pain, and hematuria are reported less frequently than in bacterial cystitis. Vaginitis, caused by Candida albicans or Trichomonas vaginalis, presents as vaginal discharge or irritation. Urethritis is a consequence of infection by Chlamydia trachomatis or Neisseria gonorrhoeae. Ulcerative genital lesions may be caused by herpes simplex virus and several other specific organisms. Among women presenting with dysuria, the probability of bacterial cystitis is ∼50%. This figure rises to >90% if four criteria are fulfilled: dysuria and frequency without vaginal discharge or irritation. Present standards suggest that women meeting these four criteria, if they are otherwise healthy, are not pregnant, and have an apparently normal urinary tract, can be diagnosed with uncomplicated bacterial cystitis and treated empirically with appropriate antibiotics. Other women with dysuria should be further evaluated by urine dipstick, urine culture, and a pelvic examination. Dysuria is less common among men. The syndromes presenting as dysuria are similar to those in women but with some important distinctions. In the majority of men with dysuria, frequency, urgency, and/or suprapubic, penile, and/or perineal pain, the prostate is involved, either as the source of infection or as an obstruction to urine flow. 
Bacterial prostatitis is usually caused by E. coli or another gram-negative rod, with one of two presentations. Acute bacterial prostatitis presents with fever and chills; prostate examination should be gentle or not performed at all, as massage may result in a wave of bacteremia. Chronic bacterial prostatitis presents as recurrent episodes of bacterial cystitis; prostate examination with massage demonstrates prostatic bacteria and leukocytes. Benign prostatic hyperplasia (BPH) can obstruct urine flow, with consequent symptoms of weak stream, hesitancy, and dribbling. If a bacterial infection develops behind the obstructing prostate, dysuria and other symptoms of cystitis will occur. Men whose symptoms are consistent with bacterial cystitis should be evaluated with urinalysis and urine culture. Several sexually transmitted infections can manifest as dysuria. Urethritis (usually without urinary frequency) presents as a urethral discharge and can be caused by C. trachomatis, N. gonorrhoeae, Mycoplasma genitalium, Ureaplasma urealyticum, or T. vaginalis. Herpes simplex, chancroid, and other ulcerous lesions may present as dysuria, again without urinary frequency. For further discussion, see Chaps. 162 and 163. Other causes of dysuria may be found in patients of either sex. Some cases are acute and include lower urinary tract stones, trauma, and urethral exposure to topical chemicals. Others may be relatively chronic and attributable to lower urinary tract cancers, certain medications, Behçet’s syndrome, reactive arthritis, a poorly understood entity known as chronic urethral syndrome, and interstitial cystitis/bladder pain syndrome (see below). Studies indicate that patients perceive pain as coming from the urinary bladder if it is suprapubic in location, alters with bladder filling or emptying, and/or is associated with urinary symptoms such as urgency and frequency. 
Bladder pain occurring acutely (i.e., over hours or a day or two) is helpful in distinguishing bacterial cystitis from urethritis, vaginitis, and other genital infections. Chronic or recurrent bladder pain may accompany lower urinary tract stones; bladder, uterine, cervical, vaginal, urethral, or prostate cancer; urethral diverticulum; cystitis induced by radiation or certain medications; tuberculous cystitis; bladder neck obstruction; neurogenic bladder; urogenital prolapse; or BPH. In the absence of these conditions, the diagnosis of interstitial cystitis/bladder pain syndrome (IC/BPS) should be considered. Most clinicians with outpatient practices see undiagnosed cases of IC/BPS. This chronic condition is characterized by pain perceived to be from the urinary bladder, urinary urgency and frequency, and nocturia. The majority of cases are diagnosed in women. Symptoms wax and wane for months or years or possibly even for the rest of the patient’s life. The spectrum of symptom intensity is broad. The pain can be excruciating, urgency can be distressing, frequency can be up to 60 times per 24 h, and nocturia can cause sleep deprivation. These symptoms can disrupt daily activities, work schedules, and personal relationships; patients with IC/BPS report less life satisfaction than do those with end-stage renal disease. IC/BPS is not a new disease, having first been described in the late nineteenth century in a patient with the symptoms mentioned above and a single ulcer visible on cystoscopy (now called a Hunner’s lesion after the urologist who first reported it). Over the ensuing decades, it became clear that many patients with similar symptoms had no ulcer. It is now appreciated that only up to 10% of patients with IC/BPS have a Hunner’s lesion. The definition of IC/BPS, its diagnostic features, and even its name continue to evolve. 
The American Urological Association has defined IC/BPS as “an unpleasant sensation (pain, pressure, discomfort) perceived to be related to the urinary bladder, associated with lower urinary tract symptoms of more than six weeks’ duration, in the absence of infection or other identifiable causes.” Many patients with IC/BPS also have other syndromes, such as fibromyalgia, chronic fatigue syndrome, irritable bowel syndrome, and migraine. These syndromes collectively are known as functional somatic syndromes (FSSs): chronic conditions in which pain and fatigue are prominent features but laboratory tests and histologic findings are normal. Like IC/BPS, the FSSs often are associated with depression and anxiety. The majority of FSSs affect more women than men, and more than one FSS can affect a single patient. Because of its similar features and comorbidity, IC/BPS sometimes is considered an FSS. Contemporary population studies of IC/BPS in the United States indicate a prevalence of 3–6% among women and 2–4% among men. For decades, it was thought that IC/BPS occurred mostly in women. These prevalence findings, however, have generated research aimed at determining the proportion of men who have symptoms usually diagnosed as chronic prostatitis (now known as chronic prostatitis/chronic pelvic pain syndrome) but who actually have IC/BPS. Among women, the average age at onset of IC/BPS symptoms is the early forties, but the range is from childhood through the early sixties. Risk factors (antecedent features that distinguish cases from controls) primarily have been FSSs. Indeed, the odds of IC/BPS increase with the number of such syndromes present. Surgery was long thought to be a risk factor for IC/BPS, but analyses adjusting for FSSs refuted that association. About one-third of patients appear to have bacterial cystitis at the onset of IC/BPS. 
The natural history of IC/BPS is not known. Although studies from urology and urogynecology practices have been interpreted as showing that IC/BPS lasts for the lifetime of the patient, population studies suggest that some individuals with IC/BPS do not consult specialists and may not seek medical care at all, and most prevalence studies do not show an upward trend with age—a pattern that would be expected with incident cases throughout adulthood followed by lifetime persistence of a nonfatal disease. It may be reasonable to conclude that patients in a urology practice represent those with the most severe and recalcitrant IC/BPS. For the ≤10% of IC/BPS patients who have a Hunner’s lesion, the term interstitial cystitis may indeed describe the histopathologic picture. Most of these patients have substantive inflammation, mast cells, and granulation tissue. However, in the 90% of patients without such lesions, the bladder mucosa and interstitium are relatively normal, with scant inflammation. Numerous hypotheses about the pathogenesis of IC/BPS have been put forward. It is not surprising that most early theories focused on the bladder. For instance, IC/BPS has been investigated as a chronic bladder infection. Sophisticated technologies have not identified a causative organism in urine or in bladder tissue; however, the patients studied by these methods had IC/BPS of long duration, and the results do not preclude the possibility that infection may trigger the syndrome or may be a feature of early IC/BPS. Other inflammatory factors, including a role for mast cells, have been postulated, but (as noted above) the 90% of patients without a Hunner’s ulcer have little bladder inflammation and do not have a prominence of mast cells in bladder tissue. Autoimmunity has been considered, but autoantibodies are low in titer, nonspecific, and thought to be a result rather than a cause of IC/BPS. 
Increased permeability of the bladder mucosa due to defective epithelium or glycosaminoglycan (the bladder’s mucous coating) has been studied frequently, but the findings have been inconclusive. Investigations of causes outside the bladder have been prompted by the presence of comorbid FSSs. Many patients with FSSs have abnormal pain sensitivity as evidenced by (1) low pain thresholds in body areas unrelated to the diagnosed syndrome, (2) dysfunctional descending neurologic control of tactile signals, and (3) enhanced brain responses to touch in functional neuroimaging studies. Moreover, in patients with IC/BPS, body surfaces remote from the bladder are more sensitive to pain than is the case in individuals without IC/BPS. All these findings are consistent with upregulation of sensory processing in the brain. Indeed, a prevailing theory is that these concomitantly occurring syndromes have in common an abnormality of brain processing of sensory input. However, antecedence is a critical criterion for causality, and no study has demonstrated that abnormal pain sensitivity precedes either IC/BPS or the FSSs. In some patients, IC/BPS has a gradual onset, and/or the cardinal symptoms of pain, urgency, frequency, and nocturia appear sequentially in no consistent order. Other patients can identify the exact date of onset of IC/BPS symptoms. More than half of the latter patients describe dysuria beginning on that date. As stated, only a minority of IC/BPS patients who obtain medical care soon after symptom onset have uropathogenic bacteria or leukocytes in the urine. These patients—and many others with new-onset IC/BPS—are treated with antibiotics for presumptive bacterial cystitis or, if male, chronic bacterial prostatitis. Persistent or recurring symptoms without bacteriuria eventually prompt a differential diagnosis, and IC/BPS is considered. 
Traditionally, the diagnosis of IC/BPS has been delayed for years, but recent interest in the disease has shortened this interval. Characteristic features of the pain of IC/BPS are its suprapubic prominence and its variation with the voiding cycle. Two-thirds of women with IC/BPS report two or more sites of pain. The most common site (involved in 80% of women) and generally the one with the most severe pain is the suprapubic area. About 35% of female patients have pain in the urethra, 25% in other parts of the vulva, and 30% in nonurogenital areas, mostly the low back and also the anterior or posterior thighs or the buttocks. The pain of IC/BPS is most commonly described as aching, pressing, throbbing, tender, and/or piercing. What may distinguish IC/BPS from other pelvic pain is that, in 95% of patients, bladder filling exacerbates the pain and/or bladder emptying relieves it. Almost as many patients report a puzzling pattern in which certain dietary substances worsen the pain of IC/BPS. Smaller proportions—but still the majority—of patients report that their IC/BPS pain is worsened by menstruation, stress, tight clothing, exercise, and riding in a car as well as during or after vaginal intercourse. The urethral and vulvar pains of IC/BPS merit special mention. In addition to the descriptive adjectives for IC/BPS mentioned above, these pains commonly are described as burning, stinging, and sharp and as being worsened by touch, tampons, and vaginal intercourse. Patients report that urethral pain increases during urination and generally lessens afterward. These characteristics have commonly resulted in diagnosis of the urethral pain of IC/BPS as chronic urethral syndrome and the vulvar pain as vulvodynia. In many patients with IC/BPS, there is a link between pain and urinary urgency; that is, two-thirds of patients describe the urge to urinate as a desire to relieve their bladder pain. 
Only 20% report that the urge stems from a desire to prevent incontinence; indeed, very few patients with IC/BPS are incontinent. As mentioned above, urinary frequency can be severe, with ∼85% of patients voiding more than 10 times per 24 h and some as often as 60 times. Voiding continues through the night, and nocturia is common, frequent, and often associated with sleep deprivation. Beyond these common symptoms of IC/BPS, additional urinary and other symptoms may be present. Among the urinary symptoms are difficulty in starting urine flow, perceptions of difficulty in emptying the bladder, and bladder spasms. Other symptoms include the manifestations of comorbid FSSs as well as symptoms that do not constitute recognized syndromes, such as numbness, muscle spasms, dizziness, ringing in the ears, and blurred vision. The pain, urgency, and frequency of IC/BPS can be debilitating. Proximity to a bathroom is a continual focus, and patients report difficulties in the workplace, leisure activities, travel, and simply leaving home. Familial and sexual relationships can be strained. Traditionally, IC/BPS has been considered a rare condition that is diagnosed by urologists at cystoscopy. However, this disorder is much more common than once was thought; it is now being considered earlier in its course and is being diagnosed and managed more often by primary care clinicians. Results of physical examination, urinalysis, and urologic procedures are insensitive and/or nonspecific. Thus, diagnosis is based on the presence of appropriate symptoms and the exclusion of diseases with a similar presentation. Three categories of disorders can be considered in the differential diagnosis of IC/BPS. The first comprises diseases that manifest as bladder pain (see above) or urinary symptoms. 
Among the latter diseases is overactive bladder, a chronic condition of women and men that presents as urgency and frequency and that can be distinguished from IC/BPS by the patient’s history: pain is not a feature of overactive bladder, and its urgency arises from the need to avoid incontinence. Endometriosis is a special case: it can be asymptomatic or can cause pelvic pain, dysmenorrhea, and dyspareunia—i.e., types of pain that mimic IC/BPS. Endometrial implants on the bladder (although uncommon) can cause urinary symptoms, and the resulting syndrome can mimic IC/BPS. Even if endometriosis is identified, it is difficult in the absence of bladder implants to determine whether it is causative of or incidental to the symptoms of IC/BPS in a specific woman. The second category of disorders encompasses the FSSs that can accompany IC/BPS. IC/BPS can be misdiagnosed as gynecologic chronic pelvic pain, irritable bowel syndrome, or fibromyalgia. The correct diagnosis may be entertained only when either changes of pain with altered bladder volume or urinary symptoms become more prominent. The third category involves syndromes that IC/BPS mimics by way of its referred pain, such as vulvodynia and chronic urethral syndrome. Therefore, IC/BPS should be considered in the differential diagnosis of persistent or recurrent “urinary tract infection” (UTI) with sterile urine cultures; overactive bladder with pain; chronic pelvic pain, endometriosis, vulvodynia, or FSSs with urinary symptoms; and “chronic prostatitis.” As mentioned above, important clues to the diagnosis of IC/BPS are pain that changes with bladder volume or with certain foods or drinks. Common among these are chilies, chocolate, citrus fruits, tomatoes, alcohol, caffeinated drinks, and carbonated beverages; full lists of common trigger foods are available at the websites cited in the treatment section below. 
Cystoscopy under anesthesia formerly was thought to be necessary for the diagnosis of IC/BPS because of its capacity to reveal a Hunner’s lesion or—in the 90% of patients without an ulcer—petechial hemorrhages after bladder distention. However, because Hunner’s lesions are uncommon in IC/BPS and petechiae are nonspecific, cystoscopy is no longer necessary for diagnosis. Accordingly, the indications for urologic referral have evolved toward the need to rule out other diseases or to administer more advanced treatment. A typical patient presents to the primary clinician after days, weeks, or months of pain, urgency, frequency, and/or nocturia. The presence of urinary nitrites, leukocytes, or uropathogenic bacteria should prompt treatment for UTI in women and chronic bacterial prostatitis in men. Persistence or recurrence of symptoms in the absence of bacteriuria should prompt a pelvic examination for women, an assay for serum prostate-specific antigen for men, and urine cytology and inclusion of IC/BPS in the differential diagnosis for both sexes. In the diagnosis of IC/BPS, inquiries about pain, pressure, and discomfort are useful; IC/BPS should be considered if any of these sensations are noted in one or more anterior or posterior sites between the umbilicus and the upper thighs. Nondirective questions about the effect of bladder volume changes include “As your next urination approaches, does this pain get better, get worse, or stay the same?” and “After you urinate, does this pain get better, get worse, or stay the same?” Establishing that the pain is exacerbated by the consumption of certain foods and drinks not only supports the diagnosis of IC/BPS but also serves as the basis for one of the first steps in managing this syndrome. 
A nondirective way to ask about urgency is to describe it to the patient as a compelling urge to urinate that is difficult to postpone; follow-up questions can determine whether this urge is intended to relieve pain or prevent incontinence. To assess severity and provide quantitative baseline measures, pain and urgency should be estimated by the patient on a scale of 0–10, with 0 being none and 10 the worst imaginable. Frequency per 24-h period should be determined and nocturia assessed as the number of times per night the patient is awakened by the need to urinate. About half of patients with IC/BPS have intermittent or persistent microscopic hematuria; this manifestation and the need to exclude bladder stones or cancer require urologic or urogynecologic referral. Initiation of therapy for IC/BPS does not hamper subsequent urologic evaluation. The goal of therapy is to relieve the symptoms of IC/BPS; the challenge lies in the fact that no treatment is uniformly successful. However, most patients eventually obtain relief, generally with a multifaceted approach. The American Urological Association’s guidelines for management of IC/BPS are an excellent resource. The correct strategy is to begin with conservative therapies and proceed to riskier measures only if necessary and under the supervision of a urologist or urogynecologist. Conservative tactics include education, stress reduction, dietary changes, medications, pelvic-floor physical therapy, and treatment of associated FSSs. Months or even years may have passed since the onset of symptoms, and the patient’s life may have been disrupted continually, with repeated medical visits provoking frustration and dismay in both patient and physician. In this circumstance, simply giving a name to the syndrome is beneficial. 
The physician should discuss the disease, the diagnostic and therapeutic strategies, and the prognosis with the patient and with the spouse and/or other pertinent family members, who may need to be made aware that although IC/BPS has no visible manifestations, the patient is undergoing substantial pain and suffering. This information is particularly important for sexual partners, as exacerbation of pain during and after intercourse is a common feature of IC/BPS. Because stress can worsen IC/BPS symptoms, stress reduction and active measures such as yoga or meditation exercises may be suggested. The Interstitial Cystitis Association (http://www.ichelp.com) and the Interstitial Cystitis Network (http://www.ic-network.com) can be useful in this educational process. In constructing a benign diet, some of the many patients who identify particular foods and drinks that exacerbate their symptoms find it useful to exclude all possible offenders and add items back into the diet one at a time to confirm which ones worsen their symptoms. Patients also should experiment with fluid volumes; some find relief with less fluid, others with more. The pelvic floor is often tender in IC/BPS patients. Two randomized controlled trials showed that weekly physical therapy directed at relaxation of the pelvic muscles yielded significantly more relief than a similar schedule of general body massage. This intervention can be initiated under the direction of a knowledgeable physical therapist who recognizes that the objective is to relax the pelvic floor, not to strengthen it. Among oral medications, nonsteroidal anti-inflammatory drugs are commonly used but are controversial and often unsuccessful. Two randomized controlled trials showed that amitriptyline can diminish IC/BPS symptoms if an adequate dose (≥50 mg per night) can be given. This drug is used not for its antidepressant activity but because of its proven effects on neuropathic pain; however, it is not approved by the U.S. 
Food and Drug Administration for treatment of IC/BPS. An initial dose of 10 mg at bedtime is increased weekly up to 75 mg (or less if a lower dose adequately relieves symptoms). Side effects can be expected and include dry mouth, weight gain, sedation, and constipation. If this regimen does not control symptoms adequately, pentosan polysulfate, a semisynthetic polysaccharide, can be added at a dose of 100 mg three times a day. Its theoretical effect is to replenish a possibly defective glycosaminoglycan layer over the bladder mucosa; randomized controlled trials suggest only a modest benefit over placebo. Adverse reactions are uncommon and include gastrointestinal symptoms, headache, and alopecia. Pentosan polysulfate has weak anticoagulant effects and perhaps should be avoided by patients with coagulation abnormalities. Anecdotal reports suggest that successful therapy for one FSS is accompanied by diminished symptoms of other FSSs. As has been noted here, IC/BPS often is associated with one or several FSSs. Thus, it seems reasonable to hope that, to the extent that accompanying FSSs are treated successfully, the symptoms of IC/BPS will be relieved as well. If several months of these therapies in combination do not relieve symptoms adequately, the patient should be referred to a urologist or urogynecologist who has access to additional modalities. Cystoscopy under anesthesia allows distention of the bladder with water, a procedure that provides ∼40% of patients with several months of relief and can be repeated. For those few patients with a Hunner’s lesion, fulguration may offer relief. Bladder instillation of solutions containing lidocaine or dimethyl sulfoxide can be administered. Physicians experienced in the care of IC/BPS patients have used anticonvulsants, narcotics, and cyclosporine as components of therapy. Pain specialists can be of assistance. 
Sacral neuromodulation with a temporary percutaneous electrode can be tested and, if effective, can then be performed with an implanted device. In a very small number of patients with recalcitrant symptoms, surgeries, including cystoplasty, partial or total cystectomy, and urinary diversion, may provide relief.

PART 2 Cardinal Manifestations and Presentation of Diseases

Chapter 61 Azotemia and Urinary Abnormalities
Julie Lin, Bradley M. Denker

Normal kidney functions occur through numerous cellular processes to maintain body homeostasis. Disturbances in any of these functions can lead to abnormalities that may be detrimental to survival. Clinical manifestations of these disorders depend on the pathophysiology of renal injury and often are identified as a complex of symptoms, abnormal physical findings, and laboratory changes that constitute specific syndromes. These renal syndromes (Table 61-1) may arise from systemic illness or as primary renal disease. Nephrologic syndromes usually consist of several elements that reflect the underlying pathologic processes, typically including one or more of the following: (1) reduction in glomerular filtration rate (GFR) (azotemia), (2) abnormalities of urine sediment (red blood cells [RBCs], white blood cells [WBCs], casts, and crystals), (3) abnormal excretion of serum proteins (proteinuria), (4) disturbances in urine volume (oliguria, anuria, polyuria), (5) presence of hypertension and/or expanded total body fluid volume (edema), (6) electrolyte abnormalities, and (7) in some syndromes, fever/pain. The specific combination of these findings should permit identification of one of the major nephrologic syndromes (Table 61-1) and allow differential diagnoses to be narrowed so that the appropriate diagnostic and therapeutic course can be determined. All these syndromes and their associated diseases are discussed in more detail in subsequent chapters.
This chapter focuses on several aspects of renal abnormalities that are critically important for distinguishing among those processes: (1) reduction in GFR leading to azotemia, (2) alterations of the urinary sediment and/or protein excretion, and (3) abnormalities of urinary volume. Monitoring the GFR is important in both hospital and outpatient settings, and several different methodologies are available. GFR is the primary metric for kidney "function," and its direct measurement involves administration of a filtration marker (such as inulin or radiolabeled iothalamate) that is filtered at the glomerulus into the urinary space but is neither reabsorbed nor secreted throughout the tubule. GFR (i.e., the clearance of inulin or iothalamate in milliliters per minute) is calculated from the rate of appearance of the marker in the urine over several hours. In most clinical circumstances, direct GFR measurement is not feasible, and the plasma creatinine (PCr) level is used as a surrogate to estimate GFR. PCr is the most widely used marker for GFR; GFR is related directly to urine creatinine (UCr) excretion and inversely to PCr. On the basis of this relationship (with some important caveats, as discussed below), GFR will fall in roughly inverse proportion to the rise in PCr. Failure to account for GFR reductions in drug dosing can lead to significant morbidity and death from drug toxicities (e.g., digoxin, aminoglycosides). In the outpatient setting, PCr serves as an estimate for GFR, although it is much less accurate (see below). In patients with chronic progressive renal disease, there is an approximately linear relationship between 1/PCr (y axis) and time (x axis). The slope of that line will remain constant for an individual; when values deviate, an investigation for a superimposed acute process (e.g., volume depletion, drug reaction) should be initiated.
Signs and symptoms of uremia develop at significantly different levels of PCr, depending on the patient (size, age, and sex), underlying renal disease, existence of concurrent diseases, and true GFR. Generally, patients do not develop symptomatic uremia until renal insufficiency is severe (GFR <15 mL/min). A significantly reduced GFR (either acute or chronic) is usually reflected in a rise in PCr, leading to retention of nitrogenous waste products (defined as azotemia) such as urea. Azotemia may result from reduced renal perfusion, intrinsic renal disease, or postrenal processes (ureteral obstruction; see below and Fig. 61-1). Precise determination of GFR is problematic, as both commonly measured indices (urea and creatinine) have characteristics that affect their accuracy as markers of clearance. Urea clearance may underestimate GFR significantly because of urea reabsorption by the tubule. In contrast, creatinine is derived from muscle metabolism of creatine, and its generation varies little from day to day. Creatinine clearance (CrCl), an approximation of GFR, is calculated from the plasma creatinine concentration and the urinary creatinine excretion over a defined period (usually 24 h) and is expressed in milliliters per minute: CrCl = (Uvol × UCr)/(PCr × Tmin), where Uvol is the collected urine volume and Tmin is the collection time in minutes. Creatinine is useful for estimating GFR because it is a small, freely filtered solute that is not reabsorbed by the tubules.
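The timed-collection arithmetic above is a simple ratio; a minimal sketch in Python, in which the function name and the example values are illustrative rather than taken from the text:

```python
def creatinine_clearance(urine_vol_ml, urine_cr, plasma_cr, minutes=1440):
    """CrCl (mL/min) = (Uvol x UCr) / (PCr x Tmin).

    urine_cr and plasma_cr must be in the same units (e.g., mg/dL);
    a 24-h collection corresponds to 1440 min.
    """
    return (urine_vol_ml * urine_cr) / (plasma_cr * minutes)

# Illustrative: 1500 mL of urine in 24 h, UCr 100 mg/dL, PCr 1.0 mg/dL
# (1500 x 100) / (1.0 x 1440) ≈ 104 mL/min
print(round(creatinine_clearance(1500, 100, 1.0), 1))
```

Note that the creatinine units cancel, so only the urine volume (mL) and collection time (min) need fixed units.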
PCr levels can increase acutely from dietary ingestion of cooked meat, however, and creatinine can be secreted into the proximal tubule through an organic cation pathway (especially in advanced progressive chronic kidney disease), leading to overestimation of GFR. When a timed collection for CrCl is not available, decisions about drug dosing must be based on PCr alone. Two formulas are used widely to estimate kidney function from PCr: (1) Cockcroft-Gault and (2) four-variable MDRD (Modification of Diet in Renal Disease). Cockcroft-Gault: CrCl (mL/min) = [(140 − age (years)) × weight (kg) × (0.85 if female)]/[72 × PCr (mg/dL)]. MDRD: eGFR (mL/min per 1.73 m2) = 186.3 × PCr^−1.154 × age^−0.203 × (0.742 if female) × (1.21 if black). Numerous websites are available to assist with these calculations (www.kidney.org/professionals/kdoqi/gfr_calculator.cfm).

Figure 61-1 Approach to the patient with azotemia. Beginning with urinalysis and renal ultrasound, the algorithm separates chronic renal failure (small kidneys, thin cortex, bland sediment, isosthenuria, <3.5 g protein/24 h; symptomatic treatment, delay of progression, and preparation for dialysis if end-stage), postrenal azotemia (hydronephrosis; urologic evaluation and relief of obstruction), pyelonephritis (bacteria), prerenal azotemia (normal urinalysis with oliguria, FeNa <1%, urine osmolality >500 mosmol; volume contraction, cardiac failure, vasodilatation, drugs, sepsis, renal vasoconstriction, impaired autoregulation), acute tubular necrosis (muddy brown casts, amorphous sediment plus protein, FeNa >1%, urine osmolality <350 mosmol), interstitial nephritis (WBCs, casts, eosinophils), renal artery or vein occlusion (red blood cells; angiogram), and glomerulonephritis or vasculitis (RBC casts, proteinuria; renal biopsy; immune complex or anti-GBM disease). FeNa, fractional excretion of sodium; GBM, glomerular basement membrane; RBC, red blood cell; WBC, white blood cell.
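The two bedside estimating equations can be written as short functions; this is a sketch with illustrative names and example values (not from the text), assuming PCr in mg/dL:

```python
def cockcroft_gault(age, weight_kg, pcr_mg_dl, female=False):
    """Estimated CrCl in mL/min (Cockcroft-Gault)."""
    crcl = ((140 - age) * weight_kg) / (72 * pcr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd(age, pcr_mg_dl, female=False, black=False):
    """eGFR in mL/min per 1.73 m2 (four-variable MDRD, 186.3 coefficient)."""
    egfr = 186.3 * pcr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.21
    return egfr

# Illustrative patient: 60-year-old, 70-kg white male with PCr 1.2 mg/dL
print(round(cockcroft_gault(60, 70, 1.2)))  # ≈ 65 mL/min
print(round(mdrd(60, 1.2)))                 # eGFR, mL/min per 1.73 m2
```

Note the difference in outputs: Cockcroft-Gault returns an absolute clearance (mL/min, useful for drug dosing), whereas MDRD returns a body-surface-area-indexed eGFR.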
A newer equation, the CKD-EPI eGFR, which was developed by pooling several cohorts with and without kidney disease who had data on directly measured GFR, appears to be more accurate: CKD-EPI: eGFR = 141 × min(PCr/k, 1)^a × max(PCr/k, 1)^−1.209 × 0.993^Age × 1.018 [if female] × 1.159 [if black], where PCr is plasma creatinine, k is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of PCr/k or 1, and max indicates the maximum of PCr/k or 1 (http://www.qxmd.com/renal/Calculate-CKD-EPI-GFR.php). There are limitations to all creatinine-based estimates of GFR. Each equation, along with 24-h urine collection for measurement of creatinine clearance, is based on the assumption that the patient is in steady state, without daily increases or decreases in PCr as a result of rapidly changing GFR. The MDRD equation is better correlated with true GFR when the GFR is <60 mL/min per 1.73 m2. The gradual loss of muscle from chronic illness, chronic use of glucocorticoids, or malnutrition can mask significant changes in GFR with small or imperceptible changes in PCr. Cystatin C, a member of the cystatin superfamily of cysteine protease inhibitors, is produced at a relatively constant rate from all nucleated cells. Serum cystatin C has been proposed to be a more sensitive marker of early GFR decline than is PCr; however, like serum creatinine, cystatin C is influenced by the patient's age, race, and sex and also is associated with diabetes, smoking, and markers of inflammation.

APPROACH TO THE PATIENT:

Once GFR reduction has been established, the physician must decide if it represents acute or chronic renal injury. The clinical situation, history, and laboratory data often make this an easy distinction. However, the laboratory abnormalities characteristic of chronic renal failure, including anemia, hypocalcemia, and hyperphosphatemia, often are present as well in patients presenting with acute renal failure.
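The piecewise min/max structure of the CKD-EPI equation given above maps directly onto code; a minimal sketch, with an illustrative function name and example patient:

```python
def ckd_epi(pcr_mg_dl, age, female=False, black=False):
    """eGFR in mL/min per 1.73 m2 (2009 CKD-EPI creatinine equation)."""
    k = 0.7 if female else 0.9          # kappa
    a = -0.329 if female else -0.411    # alpha
    egfr = (141
            * min(pcr_mg_dl / k, 1) ** a        # applies only when PCr <= kappa
            * max(pcr_mg_dl / k, 1) ** -1.209   # applies only when PCr >= kappa
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Illustrative: 60-year-old white female, PCr 1.0 mg/dL
print(round(ckd_epi(1.0, 60, female=True)))  # ≈ 61
```

The min/max pair means exactly one of the two power terms differs from 1 for any given PCr, giving the equation its two-slope behavior around the sex-specific knot k.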
Radiographic evidence of renal osteodystrophy (Chap. 335) can be seen only in chronic renal failure but is a very late finding, and these patients are usually undergoing dialysis. The urinalysis and renal ultrasound can facilitate distinguishing acute from chronic renal failure. An approach to the evaluation of azotemic patients is shown in Fig. 61-1. Patients with advanced chronic renal insufficiency often have some proteinuria, nonconcentrated urine (isosthenuria; isosmotic with plasma), and small kidneys on ultrasound, characterized by increased echogenicity and cortical thinning. Treatment should be directed toward slowing the progression of renal disease and providing symptomatic relief for edema, acidosis, anemia, and hyperphosphatemia, as discussed in Chap. 335. Acute renal failure (Chap. 334) can result from processes that affect renal blood flow (prerenal azotemia), intrinsic renal diseases (affecting small vessels, glomeruli, or tubules), or postrenal processes (obstruction of urine flow in ureters, bladder, or urethra) (Chap. 343). Decreased renal perfusion accounts for 40–80% of cases of acute renal failure and, if appropriately treated, is readily reversible. The etiologies of prerenal azotemia include any cause of decreased circulating blood volume (gastrointestinal hemorrhage, burns, diarrhea, diuretics), volume sequestration (pancreatitis, peritonitis, rhabdomyolysis), or decreased effective arterial volume (cardiogenic shock, sepsis). Renal perfusion also can be affected by reductions in cardiac output, by peripheral vasodilation (sepsis, drugs), or by profound renal vasoconstriction (severe heart failure, hepatorenal syndrome, agents such as nonsteroidal anti-inflammatory drugs [NSAIDs]).
True or “effective” arterial hypovolemia leads to a fall in mean arterial pressure, which in turn triggers a series of neural and humoral responses, including activation of the sympathetic nervous and renin-angiotensin-aldosterone systems and antidiuretic hormone (ADH) release. GFR is maintained by prostaglandin-mediated relaxation of afferent arterioles and angiotensin II–mediated constriction of efferent arterioles. Once the mean arterial pressure falls below 80 mmHg, GFR declines steeply. Blockade of prostaglandin production by NSAIDs can result in severe vasoconstriction and acute renal failure. Blocking angiotensin action with angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs) decreases efferent arteriolar tone and in turn decreases glomerular capillary perfusion pressure. Patients taking NSAIDs and/or ACE inhibitors/ARBs are most susceptible to hemodynamically mediated acute renal failure when blood volume is reduced for any reason. Patients with bilateral renal artery stenosis (or stenosis in a solitary kidney) are dependent on efferent arteriolar vasoconstriction for maintenance of glomerular filtration pressure and are particularly susceptible to a precipitous decline in GFR when given ACE inhibitors or ARBs. Prolonged renal hypoperfusion may lead to acute tubular necrosis (ATN), an intrinsic renal disease that is discussed below. The urinalysis and urinary electrolyte measurements can be useful in distinguishing prerenal azotemia from ATN (Table 61-2). The urine Na and osmolality of patients with prerenal azotemia can be predicted from the stimulatory actions of norepinephrine, angiotensin II, ADH, and low tubule fluid flow rate. In prerenal conditions, the tubules are intact, leading to a concentrated urine (>500 mosmol), avid Na retention (urine Na concentration, <20 mmol/L; fractional excretion of Na, <1%), and UCr/PCr >40 (Table 61-2). 
The prerenal urine sediment is usually normal or has hyaline and granular casts, whereas the sediment of ATN usually is filled with cellular debris, tubular epithelial casts, and dark (muddy brown) granular casts. Urinary tract obstruction accounts for <5% of cases of acute renal failure but is usually reversible and must be ruled out early in the evaluation (Fig. 61-1).

Table 61-2 Urinary Indices Distinguishing Prerenal Azotemia from Acute Tubular Necrosis

Index | Prerenal Azotemia | ATN
BUN/PCr ratio | >20:1 | 10–15:1
Urine sodium (UNa), meq/L | <20 | >40
Urine osmolality, mosmol/L H2O | >500 | <350
Fractional excretion of sodium, FENa = (UNa × PCr)/(PNa × UCr) × 100 | <1% | >2%
Urine/plasma creatinine (UCr/PCr) | >40 | <20
Urinalysis (casts) | Normal, or hyaline/granular | Muddy brown granular

Abbreviations: BUN, blood urea nitrogen; PCr, plasma creatinine concentration; PNa, plasma sodium concentration; UCr, urine creatinine concentration; UNa, urine sodium concentration.

Since a single kidney is capable of adequate clearance, obstructive acute renal failure requires obstruction at the urethra or bladder outlet, bilateral ureteral obstruction, or unilateral obstruction in a patient with a single functioning kidney. Obstruction is usually diagnosed by the presence of ureteral and renal pelvic dilation on renal ultrasound. However, early in the course of obstruction or if the ureters are unable to dilate (e.g., encasement by pelvic or periureteral tumors), the ultrasound examination may be negative. The specific urologic conditions that cause obstruction are discussed in Chap. 343. When prerenal and postrenal azotemia have been excluded as etiologies of renal failure, an intrinsic parenchymal renal disease is present. Intrinsic renal disease can arise from processes involving large renal vessels, intrarenal microvasculature and glomeruli, or the tubulointerstitium. Ischemic and toxic ATN account for ~90% of cases of acute intrinsic renal failure. As outlined in Fig. 61-1, the clinical setting and urinalysis are helpful in separating the possible etiologies.
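The fractional excretion of sodium in Table 61-2 is computed from paired spot urine and plasma values; a minimal sketch, in which the function name and the example numbers are illustrative, with the <1% and >2% cutoffs taken from the table:

```python
def fena_percent(u_na, p_na, u_cr, p_cr):
    """FENa (%) = (UNa x PCr) / (PNa x UCr) x 100.

    Sodium and creatinine concentrations each need only be in mutually
    consistent units, since they cancel within the ratio.
    """
    return (u_na * p_cr) / (p_na * u_cr) * 100

# Illustrative prerenal pattern: UNa 10 meq/L, PNa 140 meq/L,
# UCr 80 mg/dL, PCr 2.0 mg/dL
fena = fena_percent(10, 140, 80, 2.0)
label = "prerenal (<1%)" if fena < 1 else "suggests ATN (>2%)" if fena > 2 else "indeterminate"
print(f"FENa = {fena:.2f}% -> {label}")
```

FENa effectively normalizes urinary sodium handling to creatinine clearance, which is why it separates intact, sodium-avid tubules (prerenal) from injured ones (ATN) better than the urine sodium concentration alone.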
Prerenal azotemia and ATN are part of a spectrum of renal hypoperfusion; evidence of structural tubule injury is present in ATN, whereas prompt reversibility occurs with prerenal azotemia upon restoration of adequate renal perfusion. Thus, ATN often can be distinguished from prerenal azotemia by urinalysis and urine electrolyte composition (Table 61-2 and Fig. 61-1). Ischemic ATN is observed most frequently in patients who have experienced major surgery, trauma, severe hypovolemia, overwhelming sepsis, or extensive burns. Nephrotoxic ATN complicates the administration of many common medications, usually by inducing a combination of intrarenal vasoconstriction, direct tubule toxicity, and/or tubule obstruction. The kidney is vulnerable to toxic injury by virtue of its rich blood supply (25% of cardiac output) and its ability to concentrate and metabolize toxins. A diligent search for hypotension and nephrotoxins usually uncovers the specific etiology of ATN. Discontinuation of nephrotoxins and stabilization of blood pressure often suffice without the need for dialysis while the tubules recover. An extensive list of potential drugs and toxins implicated in ATN is found in Chap. 334. Processes involving the tubules and interstitium can lead to acute kidney injury (AKI), a subtype of acute renal failure. These processes include drug-induced interstitial nephritis (especially by antibiotics, NSAIDs, and diuretics), severe infections (both bacterial and viral), systemic diseases (e.g., systemic lupus erythematosus), and infiltrative disorders (e.g., sarcoidosis, lymphoma, or leukemia). A list of drugs associated with allergic interstitial nephritis is found in Chap. 340. Urinalysis usually shows mild to moderate proteinuria, hematuria, and pyuria (~75% of cases) and occasionally WBC casts. The finding of RBC casts in interstitial nephritis has been reported but should prompt a search for glomerular diseases (Fig. 61-1).
Occasionally, renal biopsy will be needed to distinguish among these possibilities. The finding of eosinophils in the urine is suggestive of allergic interstitial nephritis or atheroembolic renal disease and is optimally observed with Hansel staining. The absence of eosinophiluria, however, does not exclude these etiologies. Occlusion of large renal vessels, including arteries and veins, is an uncommon cause of acute renal failure. A significant reduction in GFR by this mechanism suggests bilateral processes or, in a patient with a single functioning kidney, a unilateral process. Renal arteries can be occluded with atheroemboli, thromboemboli, in situ thrombosis, aortic dissection, or vasculitis. Atheroembolic renal failure can occur spontaneously but most often is associated with recent aortic instrumentation. The emboli are cholesterol-rich and lodge in medium and small renal arteries, with a consequent eosinophil-rich inflammatory reaction. Patients with atheroembolic acute renal failure often have a normal urinalysis, but the urine may contain eosinophils and casts. The diagnosis can be confirmed by renal biopsy, but this procedure is often unnecessary when other stigmata of atheroemboli are present (livedo reticularis, distal peripheral infarcts, eosinophilia).

Figure 61-2 Approach to the patient with hematuria. Hematuria with proteinuria (>500 mg/24 h), dysmorphic RBCs, or RBC casts prompts serologic and hematologic evaluation (blood cultures, anti-GBM antibody, ANCA, complement levels, cryoglobulins, hepatitis B and C serologies, VDRL, HIV, ASLO) and renal biopsy; pyuria with WBC casts prompts urine culture and urine eosinophils; isolated hematuria is pursued with hemoglobin electrophoresis, urine cytology, urinalysis of family members, 24-h urinary calcium/uric acid, renal ultrasound and IVP (with retrograde pyelography, arteriography, or cyst aspiration as indicated), cystoscopy with urogenital biopsy and evaluation, renal CT scan and biopsy of any mass or lesion, and periodic follow-up urinalysis. ANCA, antineutrophil cytoplasmic antibody; ASLO, antistreptolysin O; CT, computed tomography; GBM, glomerular basement membrane; IVP, intravenous pyelography; RBC, red blood cell; UA, urinalysis; VDRL, Venereal Disease Research Laboratory; WBC, white blood cell.

Renal artery thrombosis may lead to mild proteinuria and hematuria, whereas renal vein thrombosis typically induces heavy proteinuria and hematuria. These vascular complications often require angiography for confirmation and are discussed in Chap. 341. Diseases of the glomeruli (glomerulonephritis and vasculitis) and the renal microvasculature (hemolytic-uremic syndromes, thrombotic thrombocytopenic purpura, and malignant hypertension) usually present with various combinations of glomerular injury: proteinuria, hematuria, reduced GFR, and alterations of sodium excretion that lead to hypertension, edema, and circulatory congestion (acute nephritic syndrome). These findings may occur as primary renal diseases or as renal manifestations of systemic diseases. The clinical setting and other laboratory data help distinguish primary renal diseases from systemic diseases. The finding of RBC casts in the urine is an indication for early renal biopsy (Fig. 61-1), as the pathologic pattern has important implications for diagnosis, prognosis, and treatment. Hematuria without RBC casts can also be an indication of glomerular disease; this evaluation is summarized in Fig. 61-2. A detailed discussion of glomerulonephritis and diseases of the microvasculature is found in Chap. 340. Oliguria refers to a 24-h urine output <400 mL, and anuria is the complete absence of urine formation (<100 mL). Anuria can be caused by total urinary tract obstruction, total renal artery or vein occlusion, and shock (manifested by severe hypotension and intense renal vasoconstriction).
Cortical necrosis, ATN, and rapidly progressive glomerulonephritis occasionally cause anuria. Oliguria can accompany acute renal failure of any etiology and carries a more serious prognosis for renal recovery in all conditions except prerenal azotemia. Nonoliguria refers to urine output >400 mL/d in patients with acute or chronic azotemia. With nonoliguric ATN, disturbances of potassium and hydrogen balance are less severe than in oliguric patients, and recovery to normal renal function is usually more rapid. The evaluation of proteinuria is shown schematically in Fig. 61-3 and typically is initiated after detection of proteinuria by dipstick examination. The dipstick measurement detects only albumin and gives false-positive results at pH >7.0 or when the urine is very concentrated or contaminated with blood. Because the dipstick relies on urinary albumin concentration, a very dilute urine may obscure significant proteinuria on dipstick examination. Quantification of urinary albumin on a spot urine sample (ideally from a first morning void) by measurement of an albumin-to-creatinine ratio (ACR) is helpful in approximating a 24-h albumin excretion rate (AER), where ACR (mg/g) ≈ AER (mg/24 h). Furthermore, proteinuria that is not predominantly due to albumin will be missed by dipstick screening. This information is particularly important for the detection of Bence-Jones proteins in the urine of patients with multiple myeloma. Tests that measure total urine protein concentration accurately rely on precipitation with sulfosalicylic or trichloroacetic acid (Fig. 61-3). The magnitude of proteinuria and its composition in the urine depend on the mechanism of renal injury that leads to protein losses. Both charge and size selectivity normally prevent virtually all plasma albumin, globulins, and other high-molecular-weight proteins from crossing the glomerular wall; however, if this barrier is disrupted, plasma proteins may leak into the urine (glomerular proteinuria; Fig.
61-3). Smaller proteins (<20 kDa) are freely filtered but are readily reabsorbed by the proximal tubule. Traditionally, healthy individuals excrete <150 mg/d of total protein and <30 mg/d of albumin. However, even at albuminuria levels <30 mg/d, risk for progression to overt nephropathy or subsequent cardiovascular disease is increased. The remainder of the protein in the urine is secreted by the tubules (Tamm-Horsfall protein, IgA, and urokinase) or represents small amounts of filtered β2-microglobulin, apoproteins, enzymes, and peptide hormones. Another mechanism of proteinuria entails excessive production of an abnormal protein that exceeds the capacity of the tubule for reabsorption. This situation most commonly occurs with plasma cell dyscrasias, such as multiple myeloma, amyloidosis, and lymphomas, that are associated with monoclonal production of immunoglobulin light chains. The normal glomerular endothelial cell forms a barrier composed of pores of ~100 nm that retain blood cells but offer little impediment to passage of most proteins. The glomerular basement membrane traps most large proteins (>100 kDa), and the foot processes of epithelial cells (podocytes) cover the urinary side of the glomerular basement membrane and produce a series of narrow channels (slit diaphragms) that allow molecular passage of small solutes and water but not proteins. Some glomerular diseases, such as minimal change disease, cause fusion of glomerular epithelial cell foot processes, resulting in predominantly "selective" loss of albumin (Fig. 61-3). Other glomerular diseases can present with disruption of the basement membrane and slit diaphragms (e.g., by immune complex deposition), resulting in losses of albumin and other plasma proteins. The fusion of foot processes causes increased pressure across the capillary basement membrane, resulting in areas with larger pore sizes and more severe "nonselective" proteinuria (Fig. 61-3).
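Because the spot ACR (mg/g) approximates the 24-h AER (mg/24 h), the albuminuria categories used in Fig. 61-3 can be applied directly to either measurement; a minimal sketch, with an illustrative function name and the thresholds taken from the figure:

```python
def classify_albuminuria(acr_mg_g):
    """Categorize a spot ACR (mg/g), which approximates AER (mg/24 h).

    Thresholds per Fig. 61-3: 30-300 microalbuminuria,
    300-3500 macroalbuminuria, >3500 nephrotic range.
    """
    if acr_mg_g < 30:
        return "normal (<30 mg/g)"
    if acr_mg_g <= 300:
        return "microalbuminuria"
    if acr_mg_g <= 3500:
        return "macroalbuminuria"
    return "nephrotic range"

print(classify_albuminuria(150))   # microalbuminuria
print(classify_albuminuria(4000))  # nephrotic range
```

The ACR ≈ AER shortcut works because average daily creatinine excretion is roughly 1 g, so dividing by urine creatinine (g) approximately rescales a spot concentration to a daily rate.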
When the total daily urinary excretion of protein is >3.5 g, hypoalbuminemia, hyperlipidemia, and edema (nephrotic syndrome; Fig. 61-3) are often present as well. However, total daily urinary protein excretion >3.5 g can occur without the other features of the nephrotic syndrome in a variety of other renal diseases, including diabetes (Fig. 61-3). Plasma cell dyscrasias (multiple myeloma) can be associated with large amounts of excreted light chains in the urine, which may not be detected by dipstick. The light chains are filtered by the glomerulus and overwhelm the reabsorptive capacity of the proximal tubule. Renal failure from these disorders occurs through a variety of mechanisms, including proximal tubule injury, tubule obstruction (cast nephropathy), and light chain deposition (Chap. 340). However, not all excreted light chains are nephrotoxic.

Figure 61-3 Approach to the patient with proteinuria. Proteinuria on urine dipstick is quantified by 24-h urinary excretion of protein and albumin or a first-morning spot albumin-to-creatinine ratio; RBCs or RBC casts on urinalysis prompt the evaluation in Fig. 61-2. Findings are stratified into microalbuminuria (30–300 mg/d or mg/g; consider early diabetes, essential hypertension, or early stages of glomerulonephritis, especially with RBCs or RBC casts), macroalbuminuria (300–3500 mg/d or mg/g; in addition, consider myeloma-associated kidney disease [check UPEP] and intermittent, postural, or transient proteinuria from congestive heart failure, fever, or exercise), and nephrotic-range proteinuria (>3500 mg/d or mg/g; consider nephrotic syndrome, diabetes, amyloidosis, minimal change disease, FSGS, membranous glomerulopathy, and IgA nephropathy). Investigation of proteinuria is often initiated by a positive dipstick on routine urinalysis.
Conventional dipsticks detect predominantly albumin and provide a semiquantitative assessment (trace, 1+, 2+, or 3+), which is influenced by urinary concentration as reflected by urine specific gravity (minimum, <1.005; maximum, 1.030). However, more exact determination of proteinuria should employ a spot morning protein/creatinine ratio (mg/g) or a 24-h urine collection (mg/24 h). FSGS, focal segmental glomerulosclerosis; RBC, red blood cell; UPEP, urine protein electrophoresis.

Hypoalbuminemia in nephrotic syndrome occurs through excessive urinary losses and increased proximal tubule catabolism of filtered albumin. Edema forms from renal sodium retention and reduced plasma oncotic pressure, which favors fluid movement from capillaries to interstitium. To compensate for the perceived decrease in effective intravascular volume, activation of the renin-angiotensin system, stimulation of ADH, and activation of the sympathetic nervous system take place, promoting continued renal salt and water reabsorption and progressive edema. Despite these changes, hypertension is uncommon in primary kidney diseases resulting in the nephrotic syndrome (Fig. 61-3 and Chap. 338). The urinary loss of regulatory proteins and changes in hepatic synthesis contribute to the other manifestations of the nephrotic syndrome. A hypercoagulable state may arise from urinary losses of antithrombin III, reduced serum levels of proteins S and C, hyperfibrinogenemia, and enhanced platelet aggregation. Hypercholesterolemia may be severe and results from increased hepatic lipoprotein synthesis. Loss of immunoglobulins contributes to an increased risk of infection. Many diseases (some listed in Fig. 61-3) and drugs can cause the nephrotic syndrome; a complete list is found in Chap. 338.

HEMATURIA, PYURIA, AND CASTS

Isolated hematuria without proteinuria, other cells, or casts is often indicative of bleeding from the urinary tract.
Hematuria is defined as two to five RBCs per high-power field (HPF) and can be detected by dipstick. A false-positive dipstick for hematuria (where no RBCs are seen on urine microscopy) may occur when myoglobinuria is present, often in the setting of rhabdomyolysis. Common causes of isolated hematuria include stones, neoplasms, tuberculosis, trauma, and prostatitis. Gross hematuria with blood clots usually is not an intrinsic renal process; rather, it suggests a postrenal source in the urinary collecting system. Evaluation of patients presenting with microscopic hematuria is outlined in Fig. 61-2. A single urinalysis with hematuria is common and can result from menstruation, viral illness, allergy, exercise, or mild trauma. Persistent or significant hematuria (>3 RBCs/HPF on three urinalyses, a single urinalysis with >100 RBCs, or gross hematuria) is associated with significant renal or urologic lesions in 9.1% of cases. The level of suspicion for urogenital neoplasms in patients with isolated painless hematuria and nondysmorphic RBCs increases with age. Neoplasms are rare in the pediatric population, and isolated hematuria is more likely to be "idiopathic" or associated with a congenital anomaly. Hematuria with pyuria and bacteriuria is typical of infection and should be treated with antibiotics after appropriate cultures. Acute cystitis or urethritis in women can cause gross hematuria. Hypercalciuria and hyperuricosuria are also risk factors for unexplained isolated hematuria in both children and adults. In some of these patients (50–60%), reducing calcium and uric acid excretion through dietary interventions can eliminate the microscopic hematuria. Isolated microscopic hematuria can be a manifestation of glomerular diseases. The RBCs of glomerular origin are often dysmorphic when examined by phase-contrast microscopy. Irregular shapes of RBCs may also result from pH and osmolarity changes produced along the distal nephron.
Observer variability in detecting dysmorphic RBCs is common. The most common etiologies of isolated glomerular hematuria are IgA nephropathy, hereditary nephritis, and thin basement membrane disease. IgA nephropathy and hereditary nephritis can lead to episodic gross hematuria. A family history of renal failure is often present in hereditary nephritis, and patients with thin basement membrane disease often have family members with microscopic hematuria. A renal biopsy is needed for the definitive diagnosis of these disorders, which are discussed in more detail in Chap. 338. Hematuria with dysmorphic RBCs, RBC casts, and protein excretion >500 mg/d is virtually diagnostic of glomerulonephritis. RBC casts form when RBCs that enter the tubule fluid become trapped in a cylindrical mold of gelled Tamm-Horsfall protein. Even in the absence of azotemia, these patients should undergo serologic evaluation and renal biopsy as outlined in Fig. 61-2. Isolated pyuria is unusual since inflammatory reactions in the kidney or collecting system also are associated with hematuria. The presence of bacteria suggests infection, and WBC casts with bacteria are indicative of pyelonephritis. WBCs and/or WBC casts also may be seen in acute glomerulonephritis as well as in tubulointerstitial processes such as interstitial nephritis and transplant rejection. Casts can be seen in chronic renal diseases. Degenerated cellular casts called waxy casts or broad casts (arising in the dilated tubules that have undergone compensatory hypertrophy in response to reduced renal mass) may be seen in the urine. By history, it is often difficult for patients to distinguish urinary frequency (often of small volumes) from true polyuria (>3 L/d), and a quantification of volume by 24-h urine collection may be needed (Fig. 61-4).
Polyuria results from two potential mechanisms: excretion of nonabsorbable solutes (such as glucose) or excretion of water (usually from a defect in ADH production or renal responsiveness). To distinguish a solute diuresis from a water diuresis and to determine whether the diuresis is appropriate for the clinical circumstances, urine osmolality is measured. The average person excretes between 600 and 800 mosmol of solutes per day, primarily as urea and electrolytes. If the urine output is >3 L/d and the urine is dilute (<250 mosmol/L), total mosmol excretion is normal and a water diuresis is present. This circumstance could arise from polydipsia, inadequate secretion of vasopressin (central diabetes insipidus), or failure of renal tubules to respond to vasopressin (nephrogenic diabetes insipidus).

Figure 61-4 Approach to the patient with polyuria (>3 L/24 h). ADH, antidiuretic hormone; ATN, acute tubular necrosis. The algorithm branches on urine osmolality. Urine osmolality <250 mosmol indicates a water diuresis, evaluated by history, low serum sodium, and a water deprivation test or ADH level to distinguish primary polydipsia (psychogenic; hypothalamic disease; drugs such as thioridazine, chlorpromazine, or anticholinergic agents) from diabetes insipidus (DI). Central DI may follow hypophysectomy, trauma, histiocytosis or granuloma, encroachment by aneurysm, Sheehan's syndrome, infection, Guillain-Barré syndrome, or fat embolus. Nephrogenic DI may be due to acquired tubular diseases (pyelonephritis, analgesic nephropathy, multiple myeloma, amyloidosis, obstruction, sarcoidosis, hypercalcemia, hypokalemia, Sjögren's syndrome, sickle cell anemia); drugs or toxins (lithium, demeclocycline, methoxyflurane, ethanol, diphenylhydantoin, propoxyphene, amphotericin); or congenital disease (hereditary, polycystic or medullary cystic disease). Urine osmolality >300 mosmol indicates a solute diuresis: glucose, mannitol, radiocontrast, urea (from high-protein feeding), medullary cystic diseases, resolving ATN or obstruction, or diuretics.

If the urine volume is >3 L/d and urine
osmolality is >300 mosmol/L, a solute diuresis is clearly present and a search for the responsible solute(s) is mandatory. Excessive filtration of a poorly reabsorbed solute such as glucose or mannitol can depress reabsorption of NaCl and water in the proximal tubule and lead to enhanced excretion in the urine. Poorly controlled diabetes mellitus with glucosuria is the most common cause of a solute diuresis, leading to volume depletion and serum hypertonicity. Since the urine sodium concentration is less than that of blood, more water than sodium is lost, causing hypernatremia and hypertonicity. Common iatrogenic solute diuresis occurs in association with mannitol administration, radiocontrast media, and high-protein feedings (enteral or parenteral), leading to increased urea production and excretion. Less commonly, excessive sodium loss may result from cystic renal diseases or Bartter’s syndrome or may develop during a tubulointerstitial process (such as resolving ATN). In these so-called salt-wasting disorders, the tubule damage results in direct impairment of sodium reabsorption and indirectly reduces the responsiveness of the tubule to aldosterone. Usually, the sodium losses are mild, and the obligatory urine output is <2 L/d; resolving ATN and postobstructive diuresis are exceptions and may be associated with significant natriuresis and polyuria. Formation of large volumes of dilute urine is usually due to polydipsic states or diabetes insipidus. Primary polydipsia can result from habit, psychiatric disorders, neurologic lesions, or medications. During deliberate polydipsia, extracellular fluid volume is normal or expanded and plasma vasopressin levels are reduced because serum osmolality tends to be near the lower limits of normal. Urine osmolality is also maximally dilute at 50 mosmol/L. 
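The osmolality-based logic above can be sketched as a small classifier; the function name, labels, and structure are ours (an illustrative sketch, not a diagnostic tool):

```python
def classify_polyuria(urine_volume_l_per_day, urine_osmolality_mosm_per_l):
    """Classify polyuria along the lines described in the text.

    Polyuria is urine output >3 L/d.  Dilute urine (<250 mosmol/L)
    implies a water diuresis (polydipsia or diabetes insipidus);
    concentrated urine (>300 mosmol/L) implies a solute diuresis.
    Daily osmolar excretion (normally ~600-800 mosmol/d) is
    volume x osmolality.
    """
    daily_mosmol = urine_volume_l_per_day * urine_osmolality_mosm_per_l
    if urine_volume_l_per_day <= 3:
        label = "not polyuria"
    elif urine_osmolality_mosm_per_l < 250:
        label = "water diuresis"     # polydipsia or diabetes insipidus
    elif urine_osmolality_mosm_per_l > 300:
        label = "solute diuresis"    # e.g., glucosuria, mannitol, urea
    else:
        label = "indeterminate"
    return label, daily_mosmol
```

For example, 4 L/d at 150 mosmol/L yields ("water diuresis", 600): total daily osmolar excretion is normal, but the urine is dilute, as the text describes for a water diuresis.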
Central diabetes insipidus may be idiopathic in origin or secondary to a variety of conditions, including hypophysectomy; trauma; and neoplastic, inflammatory, vascular, or infectious hypothalamic diseases. Idiopathic central diabetes insipidus is associated with selective destruction of the vasopressin-secreting neurons in the supraoptic and paraventricular nuclei and can either be inherited as an autosomal dominant trait or occur spontaneously. Nephrogenic diabetes insipidus can occur in a variety of clinical situations, as summarized in Fig. 61-4. A plasma vasopressin level is recommended as the best method for distinguishing between central and nephrogenic diabetes insipidus. Alternatively, a water deprivation test plus exogenous vasopressin may distinguish primary polydipsia from central and nephrogenic diabetes insipidus. For a detailed discussion, see Chap. 404.

Chapter 62e Atlas of Urinary Sediments and Renal Biopsies

Agnes B. Fogo, Eric G. Neilson

Key diagnostic features of selected diseases in renal biopsy are illustrated with light, immunofluorescence, and electron microscopic images. Common urinalysis findings are also documented.

Figure 62e-1 Minimal-change disease. In minimal-change disease, light microscopy is unremarkable (A), whereas electron microscopy (B) reveals podocyte injury evidenced by complete foot process effacement. (ABF/Vanderbilt Collection.)

Figure 62e-2 Focal segmental glomerulosclerosis (FSGS). There is a well-defined segmental increase in matrix and obliteration of capillary loops (arrow), the sine qua non of segmental sclerosis, not otherwise specified (NOS) type. (EGN/UPenn Collection.)

Figure 62e-3 Collapsing glomerulopathy. There is segmental collapse (arrow) of the glomerular capillary loops and overlying podocyte hyperplasia. This lesion may be idiopathic or associated with HIV infection and has a particularly poor prognosis. (ABF/Vanderbilt Collection.)
Figure 62e-4 Hilar variant of FSGS. There is segmental sclerosis of the glomerular tuft at the vascular pole with associated hyalinosis, also present in the afferent arteriole (arrows). This lesion often occurs as a secondary response when nephron mass is lost due to, e.g., scarring from other conditions. Patients usually have less proteinuria and less steroid response than with FSGS, NOS type. (ABF/Vanderbilt Collection.)

Figure 62e-5 Tip lesion variant of FSGS. There is segmental sclerosis of the glomerular capillary loops at the proximal tubular outlet (arrow). This lesion has a better prognosis than other types of FSGS. (ABF/Vanderbilt Collection.)

Figure 62e-6 Postinfectious (poststreptococcal) glomerulonephritis. The glomerular tuft shows proliferative changes with numerous polymorphonuclear leukocytes (PMNs), with a crescentic reaction (arrow) in severe cases (A). The deposits localize in the mesangium and along the capillary wall in a subepithelial pattern and stain dominantly for C3 and to a lesser extent for IgG (B). Subepithelial hump-shaped deposits are seen by electron microscopy (arrow) (C). (ABF/Vanderbilt Collection.)

Figure 62e-7 Membranous glomerulopathy. Membranous glomerulopathy is due to subepithelial deposits, with a resulting basement membrane reaction that produces the appearance of spike-like projections on silver stain (A). The deposits are directly visualized by fluorescent anti-IgG, revealing diffuse granular capillary loop staining (B). By electron microscopy, the subepithelial location of the deposits and the early surrounding basement membrane reaction are evident, with overlying foot process effacement (C). (ABF/Vanderbilt Collection.)

Figure 62e-8 IgA nephropathy. There is variable mesangial expansion due to mesangial deposits, with some cases also showing endocapillary proliferation or segmental sclerosis (A). By immunofluorescence, mesangial IgA deposits are evident (B).
(ABF/Vanderbilt Collection.)

Figure 62e-9 Membranoproliferative glomerulonephritis. There is mesangial expansion and endocapillary proliferation with cellular interposition in response to subendothelial deposits, resulting in the "tram-track" duplication of the glomerular basement membrane. (EGN/UPenn Collection.)

Figure 62e-10 Dense deposit disease (membranoproliferative glomerulonephritis type II). By light microscopy, there is a membranoproliferative pattern. By electron microscopy, there is a dense transformation of the glomerular basement membrane with round, globular deposits within the mesangium. By immunofluorescence, only C3 staining is usually present. Dense deposit disease is part of the group of renal diseases called C3 glomerulopathy, related to underlying complement dysregulation. (ABF/Vanderbilt Collection.)

Figure 62e-11 C3 glomerulonephritis. By light microscopy, there is a membranoproliferative pattern. C3 glomerulonephritis is part of the group of renal diseases called C3 glomerulopathy, related to underlying complement dysregulation. (ABF/Vanderbilt Collection.)

Figure 62e-12 C3 glomerulonephritis. By immunofluorescence, only C3 staining is usually present, with occasional minimal immunoglobulin, in an irregular capillary wall and mesangial distribution. (ABF/Vanderbilt Collection.)

Figure 62e-13 C3 glomerulonephritis. By electron microscopy, deposits of usual density are present (arrows), including mesangial, subendothelial, and occasional hump-type subepithelial deposits. (ABF/Vanderbilt Collection.)

Figure 62e-14 Mixed proliferative and membranous glomerulonephritis.
This specimen shows pink subepithelial deposits with a spike reaction, together with the "tram-track" sign of reduplication of the glomerular basement membrane resulting from subendothelial deposits, as may be seen in mixed membranous and proliferative lupus nephritis (International Society of Nephrology [ISN]/Renal Pathology Society [RPS] class V and IV). (EGN/UPenn Collection.)

Figure 62e-15 Lupus nephritis. Proliferative lupus nephritis, ISN/RPS class III (focal) or IV (diffuse), manifests as endocapillary proliferation, which may result in segmental necrosis due to deposits, particularly in the subendothelial area (A). By immunofluorescence, chunky irregular mesangial and capillary loop deposits are evident, with some of the peripheral loop deposits having a smooth, molded outer contour due to their subendothelial location. These deposits typically stain for all three immunoglobulins (IgG, IgA, IgM) and for both C3 and C1q (B). By electron microscopy, subendothelial (arrow), mesangial (white rim arrowhead), and rare subepithelial (black arrowhead) dense immune complex deposits are evident, along with extensive foot process effacement (C). (ABF/Vanderbilt Collection.)

Figure 62e-16 Granulomatosis with polyangiitis (Wegener's). This pauci-immune necrotizing crescentic glomerulonephritis shows numerous breaks in the glomerular basement membrane with associated segmental fibrinoid necrosis and a crescent formed by proliferation of the parietal epithelium. Note that the uninvolved segment of the glomerulus (at ∼5 o'clock) shows no evidence of proliferation or immune complexes. (ABF/Vanderbilt Collection.)

Figure 62e-17 Anti–glomerular basement membrane antibody-mediated glomerulonephritis.
There is segmental necrosis with a break of the glomerular basement membrane (arrow) and a cellular crescent (A), and immunofluorescence for IgG shows linear staining of the glomerular basement membrane with a small crescent at ∼1 o'clock (B). (ABF/Vanderbilt Collection.)

Figure 62e-18 Amyloidosis. Amyloidosis shows amorphous, acellular expansion of the mesangium, with material often also infiltrating glomerular basement membranes, vessels, and the interstitium, with apple-green birefringence by polarized Congo red stain (A). The deposits are composed of randomly organized 9- to 11-nm fibrils by electron microscopy (B). (ABF/Vanderbilt Collection.)

Figure 62e-19 Light chain deposition disease. There is mesangial expansion, often nodular by light microscopy (A), with immunofluorescence showing monoclonal staining, more commonly with kappa than lambda light chain, of tubules (B) and glomerular tufts. By electron microscopy (C), the deposits show an amorphous granular appearance; they line the inside of the glomerular basement membrane (arrows) and are also found along the tubular basement membranes. (ABF/Vanderbilt Collection.)

Figure 62e-20 Light chain cast nephropathy (myeloma kidney). Monoclonal light chains precipitate in tubules and result in a syncytial giant cell reaction surrounding the casts (arrow) and a surrounding chronic interstitial nephritis with tubulointerstitial fibrosis. (ABF/Vanderbilt Collection.)

Figure 62e-21 Fabry's disease. Due to deficiency of α-galactosidase, there is abnormal accumulation of glycolipids, resulting in foamy podocytes by light microscopy (A). These deposits can be directly visualized by electron microscopy (B), where the glycosphingolipid appears as whorled, so-called myeloid bodies, particularly in the podocytes. (ABF/Vanderbilt Collection.)

Figure 62e-22 Alport's syndrome and thin glomerular basement membrane lesion.
In Alport's syndrome, there is irregular thinning alternating with thickening, the so-called basket-weave abnormal organization of the glomerular basement membrane (A). In benign familial hematuria, or in early cases of Alport's syndrome or in female carriers, only extensive thinning of the glomerular basement membrane is seen by electron microscopy (B). (ABF/Vanderbilt Collection.)

Figure 62e-23 Diabetic nephropathy. In the earliest stage of diabetic nephropathy, only mild mesangial increase and prominent glomerular basement membranes (confirmed to be thickened by electron microscopy) are present (A). In slightly more advanced stages, more marked mesangial expansion with early nodule formation develops, with evident arteriolar hyaline (B). In established diabetic nephropathy, there is nodular mesangial expansion, with so-called Kimmelstiel-Wilson nodules, increased mesangial matrix and cellularity, microaneurysm formation in the glomerulus on the left, prominent glomerular basement membranes without evidence of immune deposits, and arteriolar hyalinosis of both afferent and efferent arterioles (C). (ABF/Vanderbilt Collection.)

Figure 62e-24 Arterionephrosclerosis. Hypertension-associated injury often manifests as extensive global sclerosis of glomeruli, with accompanying and proportional tubulointerstitial fibrosis and pericapsular fibrosis, and there may be segmental sclerosis (A). The vessels show disproportionately severe changes of intimal fibrosis, medial hypertrophy, and arteriolar hyaline deposits (B). (ABF/Vanderbilt Collection.)

Figure 62e-26 Hemolytic-uremic syndrome. There are characteristic intraglomerular fibrin thrombi, with a chunky pink appearance (thrombotic microangiopathy) (arrow). The remaining portion of the capillary tuft shows corrugation of the glomerular basement membrane due to ischemia. (ABF/Vanderbilt Collection.)

Figure 62e-27 Progressive systemic sclerosis.
Acutely, there is fibrinoid necrosis of interlobular and larger vessels, with intervening normal vessels and ischemic change in the glomeruli (A). Chronically, this injury leads to intimal proliferation, the so-called onion-skin appearance (B). (ABF/Vanderbilt Collection.)

Figure 62e-25 Cholesterol emboli. Cholesterol emboli cause cleft-like spaces (arrow) where the lipid has been extracted during processing, with smooth outer contours and a surrounding fibrotic and mononuclear cell reaction in these arterioles. (ABF/Vanderbilt Collection.)

Figure 62e-28 Acute pyelonephritis. There are characteristic intratubular plugs and casts of PMNs (arrow) with inflammation extending into the surrounding interstitium and accompanying tubular injury. (ABF/Vanderbilt Collection.)

Figure 62e-29 Acute tubular injury. There is extensive flattening of the tubular epithelium and loss of the brush border, with mild interstitial edema, characteristic of acute tubular injury due to ischemia. (ABF/Vanderbilt Collection.)

Figure 62e-30 Acute interstitial nephritis. There is an extensive interstitial lymphoplasmacytic infiltrate with mild edema and associated tubular injury (A), which is frequently associated with interstitial eosinophils (B) when caused by a drug hypersensitivity reaction. (ABF/Vanderbilt Collection.)

Figure 62e-31 Oxalosis. Calcium oxalate crystals have caused extensive tubular injury, with flattening and regeneration of tubular epithelium (A). Crystals are well visualized as sheaves when viewed under polarized light (B). (ABF/Vanderbilt Collection.)

Figure 62e-32 Acute phosphate nephropathy. There is extensive acute tubular injury with intratubular nonpolarizable calcium phosphate crystals. (ABF/Vanderbilt Collection.)

Figure 62e-35 Coarse granular cast. (ABF/Vanderbilt Collection.)

Figure 62e-33 Sarcoidosis. There is chronic interstitial nephritis with numerous, confluent, nonnecrotizing granulomas.
The glomeruli are unremarkable, but there is moderate tubular atrophy and interstitial fibrosis. (ABF/Vanderbilt Collection.)

Figure 62e-34 Hyaline cast. (ABF/Vanderbilt Collection.)

Figure 62e-36 Fine granular casts. (ABF/Vanderbilt Collection.)

Figure 62e-37 Red blood cell cast. (ABF/Vanderbilt Collection.)

Figure 62e-38 White blood cell cast. (ABF/Vanderbilt Collection.)

Figure 62e-39 Triple phosphate crystals. (ABF/Vanderbilt Collection.)

Figure 62e-40 "Maltese cross" formation in an oval fat body. (ABF/Vanderbilt Collection.)

Figure 62e-41 Uric acid crystals. (ABF/Vanderbilt Collection.)

Chapter 63 Fluid and Electrolyte Disturbances

David B. Mount

SODIUM AND WATER

COMPOSITION OF BODY FLUIDS Water is the most abundant constituent in the body, comprising approximately 50% of body weight in women and 60% in men. Total-body water is distributed in two major compartments: 55–75% is intracellular (intracellular fluid [ICF]), and 25–45% is extracellular (extracellular fluid [ECF]). The ECF is further subdivided into intravascular (plasma water) and extravascular (interstitial) spaces in a ratio of 1:3. Fluid movement between the intravascular and interstitial spaces occurs across the capillary wall and is determined by Starling forces, i.e., capillary hydraulic pressure and colloid osmotic pressure. The transcapillary hydraulic pressure gradient exceeds the corresponding oncotic pressure gradient, thereby favoring the movement of plasma ultrafiltrate into the extravascular space. The return of fluid into the intravascular compartment occurs via lymphatic flow. The solute or particle concentration of a fluid is known as its osmolality, expressed as milliosmoles per kilogram of water (mOsm/kg). Water easily diffuses across most cell membranes to achieve osmotic equilibrium (ECF osmolality = ICF osmolality).
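The compartment percentages just given can be turned into a quick calculator. This is an illustrative sketch; the 60%/40% ICF/ECF defaults are chosen from within the quoted 55–75% and 25–45% ranges, and the function name is ours:

```python
def body_water_compartments(weight_kg, female=False,
                            icf_fraction=0.60, ecf_fraction=0.40):
    """Partition total-body water (TBW) using the ranges in the text:
    TBW ~50% of body weight in women, ~60% in men; ICF 55-75% and
    ECF 25-45% of TBW (the 60%/40% defaults are illustrative);
    the ECF splits intravascular:interstitial in a 1:3 ratio.
    Volumes are in liters (1 kg of water ~ 1 L)."""
    tbw = weight_kg * (0.50 if female else 0.60)
    icf = tbw * icf_fraction
    ecf = tbw * ecf_fraction
    plasma = ecf * 0.25          # 1:3 intravascular:interstitial
    interstitial = ecf * 0.75
    return {"TBW": tbw, "ICF": icf, "ECF": ecf,
            "plasma": plasma, "interstitial": interstitial}
```

For a 70-kg man this gives roughly 42 L of TBW, of which about 16.8 L is ECF and about 4.2 L is plasma water.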
Notably, the extracellular and intracellular solute compositions differ considerably owing to the activity of various transporters, channels, and ATP-driven membrane pumps. The major ECF particles are Na+ and its accompanying anions Cl– and HCO3–, whereas K+ and organic phosphate esters (ATP, creatine phosphate, and phospholipids) are the predominant ICF osmoles. Solutes that are restricted to the ECF or the ICF determine the “tonicity” or effective osmolality of that compartment. Certain solutes, particularly urea, do not contribute to water shifts across most membranes and are thus known as ineffective osmoles. Water Balance Vasopressin secretion, water ingestion, and renal water transport collaborate to maintain human body fluid osmolality between 280 and 295 mOsm/kg. Vasopressin (AVP) is synthesized in magnocellular neurons within the hypothalamus; the distal axons of these neurons project to the posterior pituitary or neurohypophysis, from which AVP is released into the circulation. A network of central “osmoreceptor” neurons, which includes the AVP-expressing magnocellular neurons themselves, sense circulating osmolality via nonselective, stretch-activated cation channels. These osmoreceptor neurons are activated or inhibited by modest increases and decreases in circulating osmolality, respectively; activation leads to AVP release and thirst. AVP secretion is stimulated as systemic osmolality increases above a threshold level of ~285 mOsm/kg, above which there is a linear relationship between osmolality and circulating AVP (Fig. 63-1). Thirst and thus water ingestion are also activated at ~285 mOsm/kg, beyond which there is an equivalent linear increase in the perceived intensity of thirst as a function of circulating osmolality. Changes in blood volume and blood pressure are also direct stimuli for AVP release and thirst, albeit with a less sensitive response profile. 
Of perhaps greater clinical relevance to the pathophysiology of water homeostasis, ECF volume strongly modulates the relationship between circulating osmolality and AVP release, such that hypovolemia reduces the osmotic threshold and increases the slope of the response curve to osmolality; hypervolemia has an opposite effect, increasing the osmotic threshold and reducing the slope of the response curve (Fig. 63-1). Notably, AVP has a half-life in the circulation of only 10–20 minutes; thus, changes in ECF volume and/or circulating osmolality can rapidly affect water homeostasis. In addition to volume status, a number of other "nonosmotic" stimuli have potent activating effects on osmosensitive neurons and AVP release, including nausea, intracerebral angiotensin II, serotonin, and multiple drugs.

Figure 63-1 Circulating levels of vasopressin (AVP) in response to changes in osmolality. Plasma AVP becomes detectable in euvolemic, healthy individuals at a threshold of ~285 mOsm/kg, above which there is a linear relationship between osmolality and circulating AVP. The vasopressin response to osmolality is modulated strongly by volume status. The osmotic threshold is thus slightly lower in hypovolemia, with a steeper response curve; hypervolemia reduces the sensitivity of circulating AVP levels to osmolality.

The excretion or retention of electrolyte-free water by the kidney is modulated by circulating AVP. AVP acts on renal V2-type receptors in the thick ascending limb of Henle and principal cells of the collecting duct (CD), increasing intracellular levels of cyclic AMP and activating protein kinase A (PKA)–dependent phosphorylation of multiple transport proteins. The AVP- and PKA-dependent activation of Na+-Cl– and K+ transport by the thick ascending limb of the loop of Henle (TALH) is a key participant in the countercurrent mechanism (Fig. 63-2).
The countercurrent mechanism ultimately increases the interstitial osmolality in the inner medulla of the kidney, driving water absorption across the renal CD. However, water, salt, and solute transport by both proximal and distal nephron segments participates in the renal concentrating mechanism (Fig. 63-2). Water transport across apical and basolateral aquaporin-1 water channels in the descending thin limb of the loop of Henle is thus involved, as is passive absorption of Na+-Cl– by the thin ascending limb, via apical and basolateral CLC-K1 chloride channels and paracellular Na+ transport. Renal urea transport in turn plays important roles in the generation of the medullary osmotic gradient and the ability to excrete solute-free water under conditions of both high and low protein intake (Fig. 63-2).

Figure 63-2 The renal concentrating mechanism. Water, salt, and solute transport by both proximal and distal nephron segments participates in the renal concentrating mechanism (see text for details). Diagram showing the location of the major transport proteins involved; a loop of Henle is depicted on the left, a collecting duct on the right. AQP, aquaporin; CLC-K1, chloride channel; NKCC2, Na+-K+-2Cl– cotransporter; ROMK, renal outer medullary K+ channel; UT, urea transporter. (Used with permission from JM Sands: Molecular approaches to urea transporters. J Am Soc Nephrol 13:2795, 2002.)

AVP-induced, PKA-dependent phosphorylation of the aquaporin-2 water channel in principal cells stimulates the insertion of active water channels into the lumen of the CD, resulting in transepithelial water absorption down the medullary osmotic gradient (Fig. 63-3). Under "antidiuretic" conditions, with increased circulating AVP, the kidney reabsorbs water filtered by the glomerulus, equilibrating the osmolality across the CD epithelium to excrete a hypertonic, "concentrated" urine (osmolality of up to 1200 mOsm/kg). In the absence of circulating AVP, insertion of aquaporin-2 channels and water absorption across the CD are essentially abolished, resulting in secretion of a hypotonic, dilute urine (osmolality as low as 30–50 mOsm/kg). Abnormalities in this "final common pathway" are involved in most disorders of water homeostasis, e.g., a reduced or absent insertion of active aquaporin-2 water channels into the membrane of principal cells in diabetes insipidus.

Figure 63-3 Vasopressin and the regulation of water permeability in the renal collecting duct. Vasopressin (also called antidiuretic hormone, ADH) binds to the type 2 vasopressin receptor (V2R) on the basolateral membrane of principal cells, activates adenylyl cyclase (AC), increases intracellular cyclic adenosine monophosphate (cAMP), and stimulates protein kinase A (PKA) activity. Cytoplasmic vesicles carrying aquaporin-2 (AQP) water channel proteins are inserted into the luminal membrane in response to vasopressin, thereby increasing the water permeability of this membrane. When vasopressin stimulation ends, water channels are retrieved by an endocytic process and water permeability returns to its low basal rate. The AQP3 and AQP4 water channels are expressed on the basolateral membrane and complete the transcellular pathway for water reabsorption. pAQP2, phosphorylated aquaporin-2. (From JM Sands, DG Bichet: Nephrogenic diabetes insipidus. Ann Intern Med 144:186, 2006, with permission.)

Maintenance of Arterial Circulatory Integrity Sodium is actively pumped out of cells by the Na+/K+-ATPase membrane pump. In consequence, 85–90% of body Na+ is extracellular, and the ECF volume (ECFV) is a function of total-body Na+ content. Arterial perfusion and circulatory integrity are, in turn, determined by renal Na+ retention or excretion, in addition to the modulation of systemic arterial resistance. Within the kidney, Na+ is filtered by the glomeruli and then sequentially reabsorbed by the renal tubules. The Na+ cation is typically reabsorbed with the chloride anion (Cl–), and, thus, chloride homeostasis also affects the ECFV. On a quantitative level, at a glomerular filtration rate (GFR) of 180 L/d and a serum Na+ of ~140 mM, the kidney filters some 25,200 mmol/d of Na+. This is equivalent to ~1.5 kg of salt, which would occupy roughly 10 times the extracellular space; 99.6% of filtered Na+-Cl– must be reabsorbed to excrete 100 mmol per day. Minute changes in renal Na+-Cl– excretion will thus have significant effects on the ECFV, leading to edema syndromes or hypovolemia.

Approximately two-thirds of filtered Na+-Cl– is reabsorbed by the renal proximal tubule, via both paracellular and transcellular mechanisms. The TALH subsequently reabsorbs another 25–30% of filtered Na+-Cl– via the apical, furosemide-sensitive Na+-K+-2Cl– cotransporter. The adjacent aldosterone-sensitive distal nephron, comprising the distal convoluted tubule (DCT), connecting tubule (CNT), and CD, accomplishes the "fine-tuning" of renal Na+-Cl– excretion. The thiazide-sensitive apical Na+-Cl– cotransporter (NCC) reabsorbs 5–10% of filtered Na+-Cl– in the DCT. Principal cells in the CNT and CD reabsorb Na+ via electrogenic, amiloride-sensitive epithelial Na+ channels (ENaC); Cl– ions are primarily reabsorbed by adjacent intercalated cells, via apical Cl– exchange (Cl–-OH– and Cl–-HCO3– exchange, mediated by the SLC26A4 anion exchanger) (Fig. 63-4).

Renal tubular reabsorption of filtered Na+-Cl– is regulated by multiple circulating and paracrine hormones, in addition to the activity of renal nerves. Angiotensin II activates proximal Na+-Cl– reabsorption, as do adrenergic receptors under the influence of renal sympathetic innervation; locally generated dopamine, in contrast, has a natriuretic effect. Aldosterone primarily activates Na+-Cl– reabsorption within the aldosterone-sensitive distal nephron. In particular, aldosterone activates the ENaC channel in principal cells, inducing Na+ absorption and promoting K+ excretion (Fig. 63-4).

Circulatory integrity is critical for the perfusion and function of vital organs. "Underfilling" of the arterial circulation is sensed by ventricular and vascular pressure receptors, resulting in a neurohumoral activation (increased sympathetic tone, activation of the renin-angiotensin-aldosterone axis, and increased circulating AVP) that synergistically increases renal Na+-Cl– reabsorption, vascular resistance, and renal water reabsorption. This occurs in the context of decreased cardiac output, as occurs in hypovolemic states, low-output cardiac failure, decreased oncotic pressure, and/or increased capillary permeability. Alternatively, excessive arterial vasodilation results in relative arterial underfilling, leading to neurohumoral activation in the defense of tissue perfusion. These physiologic responses play important roles in many of the disorders discussed in this chapter. In particular, it is important to appreciate that AVP functions in the defense of circulatory integrity, inducing vasoconstriction, increasing sympathetic nervous system tone, increasing renal retention of both water and Na+-Cl–, and modulating the arterial baroreceptor reflex. Most of these responses involve activation of systemic V1A AVP receptors, but concomitant activation of V2 receptors in the kidney can result in renal water retention and hyponatremia.

HYPOVOLEMIA Etiology True volume depletion, or hypovolemia, generally refers to a state of combined salt and water loss, leading to contraction of the ECFV. The loss of salt and water may be renal or nonrenal in origin.
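The filtered-load arithmetic quoted above (180 L/d of glomerular filtrate at ~140 mM Na+) is simple enough to verify directly; this sketch just encodes it, with function names of our choosing:

```python
def filtered_na_load(gfr_l_per_day=180.0, serum_na_mmol_per_l=140.0):
    """Filtered Na+ load (mmol/d) = GFR x plasma [Na+]; at 180 L/d
    and 140 mM this is the 25,200 mmol/d cited in the text."""
    return gfr_l_per_day * serum_na_mmol_per_l

def fraction_reabsorbed(excreted_mmol_per_day, filtered_mmol_per_day):
    """Fraction of the filtered load the tubules must reclaim to
    leave only the given daily excretion in the final urine."""
    return 1.0 - excreted_mmol_per_day / filtered_mmol_per_day
```

Excreting 100 mmol/d against a filtered load of 25,200 mmol/d requires reabsorbing ~99.6% of filtered Na+-Cl–, matching the figure in the text.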
RENAL CAUSES Excessive urinary Na+-Cl– and water loss is a feature of several conditions. A high filtered load of endogenous solutes, such as glucose and urea, can impair tubular reabsorption of Na+-Cl– and water, leading to an osmotic diuresis. Exogenous mannitol, often used to decrease intracerebral pressure, is filtered by glomeruli but not reabsorbed by the proximal tubule, thus causing an osmotic diuresis. Pharmacologic diuretics selectively impair Na+-Cl– reabsorption at specific sites along the nephron, leading to increased urinary Na+-Cl– excretion. Other drugs can induce natriuresis as a side effect. For example, acetazolamide can inhibit proximal tubular Na+-Cl– absorption via its inhibition of carbonic anhydrase; other drugs, such as the antibiotics trimethoprim and pentamidine, inhibit distal tubular Na+ reabsorption through the amiloride-sensitive ENaC channel, leading to urinary Na+-Cl– loss. Hereditary defects in renal transport proteins are also associated with reduced reabsorption of filtered Na+-Cl– and/or water.

Figure 63-4 Sodium, water, and potassium transport in principal cells (PC) and adjacent β-intercalated cells (B-IC). The absorption of Na+ via the amiloride-sensitive epithelial sodium channel (ENaC) generates a lumen-negative potential difference, which drives K+ excretion through the apical secretory K+ channel ROMK (renal outer medullary K+ channel) and/or the flow-dependent BK channel. Transepithelial Cl– transport occurs in adjacent β-intercalated cells, via apical Cl–-HCO3– and Cl–-OH– exchange (SLC26A4 anion exchanger, also known as pendrin) and basolateral CLC chloride channels. Water is absorbed down the osmotic gradient by principal cells, through the apical aquaporin-2 (AQP-2) and basolateral aquaporin-3 and aquaporin-4 channels (Fig. 63-3).
Alternatively, mineralocorticoid deficiency, mineralocorticoid resistance, or inhibition of the mineralocorticoid receptor (MLR) can reduce Na+-Cl– reabsorption by the aldosterone-sensitive distal nephron. Finally, tubulointerstitial injury, as occurs in interstitial nephritis, acute tubular injury, or obstructive uropathy, can reduce distal tubular Na+-Cl– and/or water absorption. Excessive excretion of free water, i.e., water without electrolytes, can also lead to hypovolemia. However, the effect on ECFV is usually less marked, given that two-thirds of the water volume is lost from the ICF. Excessive renal water excretion occurs in the setting of decreased circulating AVP or renal resistance to AVP (central and nephrogenic diabetes insipidus, respectively).

EXTRARENAL CAUSES
Nonrenal causes of hypovolemia include fluid loss from the gastrointestinal tract, skin, and respiratory system. Accumulations of fluid within specific tissue compartments, typically the interstitium, peritoneum, or gastrointestinal tract, can also cause hypovolemia. Approximately 9 L of fluid enter the gastrointestinal tract daily, 2 L by ingestion and 7 L by secretion; almost 98% of this volume is absorbed, such that daily fecal fluid loss is only 100–200 mL. Impaired gastrointestinal reabsorption or enhanced secretion of fluid can cause hypovolemia. Because gastric secretions have a low pH (high H+ concentration), whereas biliary, pancreatic, and intestinal secretions are alkaline (high HCO3– concentration), vomiting and diarrhea are often accompanied by metabolic alkalosis and acidosis, respectively.
PART 2: Cardinal Manifestations and Presentation of Diseases

Evaporation of water from the skin and respiratory tract (so-called "insensible losses") constitutes the major route for loss of solute-free water, which is typically 500–650 mL/d in healthy adults. This evaporative loss can increase during febrile illness or prolonged heat exposure. Hyperventilation can also increase insensible losses via the respiratory tract, particularly in ventilated patients; the humidity of inspired air is another determining factor. In addition, increased exertion and/or ambient temperature will increase insensible losses via sweat, which is hypotonic to plasma. Profuse sweating without adequate repletion of water and Na+-Cl– can thus lead to both hypovolemia and hypertonicity. Alternatively, replacement of these insensible losses with a surfeit of free water, without adequate replacement of electrolytes, may lead to hypovolemic hyponatremia. Excessive fluid accumulation in interstitial and/or peritoneal spaces can also cause intravascular hypovolemia. Increases in vascular permeability and/or a reduction in oncotic pressure (hypoalbuminemia) alter Starling forces, resulting in excessive "third spacing" of the ECFV. This occurs in sepsis syndrome, burns, pancreatitis, nutritional hypoalbuminemia, and peritonitis. Alternatively, distributive hypovolemia can occur due to accumulation of fluid within specific compartments, for example within the bowel lumen in gastrointestinal obstruction or ileus. Hypovolemia can also occur after extracorporeal hemorrhage or after significant hemorrhage into an expandable space, for example, the retroperitoneum.

Diagnostic Evaluation
A careful history will usually determine the etiologic cause of hypovolemia. Symptoms of hypovolemia are nonspecific and include fatigue, weakness, thirst, and postural dizziness; more severe symptoms and signs include oliguria, cyanosis, abdominal and chest pain, and confusion or obtundation.
Associated electrolyte disorders may cause additional symptoms, for example, muscle weakness in patients with hypokalemia. On examination, diminished skin turgor and dry oral mucous membranes are less than ideal markers of a decreased ECFV in adult patients; more reliable signs of hypovolemia include a decreased jugular venous pressure (JVP), orthostatic tachycardia (an increase of >15–20 beats/min upon standing), and orthostatic hypotension (a >10–20 mmHg drop in blood pressure on standing). More severe fluid loss leads to hypovolemic shock, with hypotension, tachycardia, peripheral vasoconstriction, and peripheral hypoperfusion; these patients may exhibit peripheral cyanosis, cold extremities, oliguria, and altered mental status. Routine chemistries may reveal an increase in blood urea nitrogen (BUN) and creatinine, reflective of a decrease in GFR. Creatinine is the more dependable measure of GFR, because BUN levels may be influenced by an increase in tubular reabsorption ("prerenal azotemia"), by increased urea generation in catabolic states, hyperalimentation, or gastrointestinal bleeding, and/or by decreased urea generation with decreased protein intake. In hypovolemic shock, liver function tests and cardiac biomarkers may show evidence of hepatic and cardiac ischemia, respectively. Routine chemistries and/or blood gases may reveal evidence of acid-base disorders. For example, bicarbonate loss due to diarrheal illness is a very common cause of metabolic acidosis; alternatively, patients with severe hypovolemic shock may develop lactic acidosis with an elevated anion gap. The neurohumoral response to hypovolemia stimulates an increase in renal tubular Na+ and water reabsorption. Therefore, the urine Na+ concentration is typically <20 mM in nonrenal causes of hypovolemia, with a urine osmolality of >450 mOsm/kg. The reduction in both GFR and distal tubular Na+ delivery may cause a defect in renal potassium excretion, with an increase in plasma K+ concentration.
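For illustration only, the orthostatic cutoffs quoted above (a >15–20 beats/min rise in heart rate and a >10–20 mmHg fall in blood pressure on standing) can be expressed as a simple screening check. The function name and the threshold defaults are hypothetical, using the conservative end of each quoted range:

```python
def orthostatic_signs(supine_hr, standing_hr, supine_sbp, standing_sbp,
                      hr_rise_threshold=15, sbp_drop_threshold=10):
    """Flag orthostatic tachycardia and/or hypotension.

    Thresholds default to the conservative ends of the ranges quoted in
    the text (>15-20 beats/min rise, >10-20 mmHg systolic drop). This is
    an illustrative sketch, not a clinical decision rule.
    """
    findings = []
    if standing_hr - supine_hr > hr_rise_threshold:
        findings.append("orthostatic tachycardia")
    if supine_sbp - standing_sbp > sbp_drop_threshold:
        findings.append("orthostatic hypotension")
    return findings

# Example: heart rate 72 -> 96 beats/min, systolic BP 118 -> 100 mmHg on standing
print(orthostatic_signs(72, 96, 118, 100))
# -> ['orthostatic tachycardia', 'orthostatic hypotension']
```

Bedside assessment obviously integrates far more than two vital-sign deltas; the sketch only encodes the numeric cutoffs stated in the text.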
Of note, patients with hypovolemia and a hypochloremic alkalosis due to vomiting, diarrhea, or diuretics will typically have a urine Na+ concentration >20 mM and urine pH of >7.0, due to the increase in filtered HCO3–; the urine Cl– concentration in this setting is a more accurate indicator of volume status, with a level <25 mM suggestive of hypovolemia. The urine Na+ concentration is often >20 mM in patients with renal causes of hypovolemia, such as acute tubular necrosis; similarly, patients with diabetes insipidus will have an inappropriately dilute urine.

TREATMENT: HYPOVOLEMIA
The therapeutic goals in hypovolemia are to restore normovolemia and replace ongoing fluid losses. Mild hypovolemia can usually be treated with oral hydration and resumption of a normal maintenance diet. More severe hypovolemia requires intravenous hydration, tailoring the choice of solution to the underlying pathophysiology. Isotonic, "normal" saline (0.9% NaCl, 154 mM Na+) is the most appropriate resuscitation fluid for normonatremic or hyponatremic patients with severe hypovolemia; colloid solutions such as intravenous albumin are not demonstrably superior for this purpose. Hypernatremic patients should receive a hypotonic solution: 5% dextrose if there has been only water loss (as in diabetes insipidus), or hypotonic saline (1/2 or 1/4 normal saline) if there has been water and Na+-Cl– loss. Patients with bicarbonate loss and metabolic acidosis, as occur frequently in diarrhea, should receive intravenous bicarbonate, either an isotonic solution (150 meq of Na+-HCO3– in 5% dextrose) or a more hypotonic bicarbonate solution in dextrose or dilute saline. Patients with severe hemorrhage or anemia should receive red cell transfusions, without increasing the hematocrit beyond 35%.

SODIUM DISORDERS
Disorders of serum Na+ concentration are caused by abnormalities in water homeostasis, leading to changes in the relative ratio of Na+ to body water. Water intake and circulating AVP constitute the two key effectors in the defense of serum osmolality; defects in one or both of these two defense mechanisms cause most cases of hyponatremia and hypernatremia.
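The resuscitation-fluid choices described above for hypovolemia amount to a small decision tree, sketched here under stated assumptions: the helper name is hypothetical, and the 145 mM cutoff for hypernatremia is a conventional laboratory value rather than a number taken from the text.

```python
def resuscitation_fluid(plasma_na_mM, pure_water_loss=False):
    """Map the fluid choices described in the text onto a simple rule.

    Illustrative sketch only. Assumes hypernatremia at >145 mM (a
    conventional cutoff, not stated in the text); `pure_water_loss`
    distinguishes pure water loss (e.g., diabetes insipidus) from
    combined water and Na+-Cl- loss.
    """
    if plasma_na_mM > 145:  # hypernatremic: hypotonic fluids
        if pure_water_loss:
            return "5% dextrose in water"
        return "hypotonic saline (1/2 or 1/4 normal)"
    # normonatremic or hyponatremic severe hypovolemia
    return "isotonic 0.9% NaCl (154 mM Na+)"

print(resuscitation_fluid(128))                        # hyponatremic
print(resuscitation_fluid(158, pure_water_loss=True))  # e.g., diabetes insipidus
```

The sketch deliberately omits the bicarbonate and transfusion branches, which depend on acid-base status and hematocrit rather than plasma Na+ alone.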
In contrast, abnormalities in sodium homeostasis per se lead to a deficit or surplus of whole-body Na+-Cl– content, a key determinant of the ECFV and circulatory integrity. Notably, volume status also modulates the release of AVP by the posterior pituitary, such that hypovolemia is associated with higher circulating levels of the hormone at each level of serum osmolality. Similarly, in "hypervolemic" causes of arterial underfilling, e.g., heart failure and cirrhosis, the associated neurohumoral activation is associated with an increase in circulating AVP, leading to water retention and hyponatremia. Therefore, a key concept in sodium disorders is that the absolute plasma Na+ concentration tells one nothing about the volume status of a given patient; volume status must nonetheless be taken into account in the diagnostic and therapeutic approach.

HYPONATREMIA
Hyponatremia, which is defined as a plasma Na+ concentration <135 mM, is a very common disorder, occurring in up to 22% of hospitalized patients. This disorder is almost always the result of an increase in circulating AVP and/or increased renal sensitivity to AVP, combined with an intake of free water; a notable exception is hyponatremia due to low solute intake (see below). The underlying pathophysiology for the exaggerated or "inappropriate" AVP response differs in patients with hyponatremia as a function of their ECFV. Hyponatremia is thus subdivided diagnostically into three groups, depending on clinical history and volume status: "hypovolemic," "euvolemic," and "hypervolemic" (Fig. 63-5).

Hypovolemic Hyponatremia
Hypovolemia causes a marked neurohumoral activation, increasing circulating levels of AVP. The increase in circulating AVP helps preserve blood pressure via vascular and baroreceptor V1A receptors and increases water reabsorption via renal V2 receptors; activation of V2 receptors can lead to hyponatremia in the setting of increased free water intake.
Nonrenal causes of hypovolemic hyponatremia include GI loss (e.g., vomiting, diarrhea, tube drainage) and insensible loss (sweating, burns) of Na+-Cl– and water, in the absence of adequate oral replacement; urine Na+ concentration is typically <20 mM. Notably, these patients may be clinically classified as euvolemic, with only the reduced urinary Na+ concentration to indicate the cause of their hyponatremia. Indeed, a urine Na+ concentration <20 mM, in the absence of a cause of hypervolemic hyponatremia, predicts a rapid increase in plasma Na+ concentration in response to intravenous normal saline; saline therapy thus induces a water diuresis in this setting, as circulating AVP levels plummet.

Figure 63-5 The diagnostic approach to hyponatremia. (From S Kumar, T Berl: Diseases of water metabolism, in Atlas of Diseases of the Kidney, RW Schrier [ed]. Philadelphia, Current Medicine, Inc, 1999; with permission.)

The renal causes of hypovolemic hyponatremia share an inappropriate loss of Na+-Cl– in the urine, leading to volume depletion and an increase in circulating AVP; urine Na+ concentration is typically >20 mM (Fig. 63-5). A deficiency in circulating aldosterone and/or its renal effects can lead to hyponatremia in primary adrenal insufficiency and other causes of hypoaldosteronism; hyperkalemia and hyponatremia in a hypotensive and/or hypovolemic patient with a high urine Na+ concentration (much greater than 20 mM) should strongly suggest this diagnosis. Salt-losing nephropathies may lead to hyponatremia when sodium intake is reduced, due to impaired renal tubular function; typical causes include reflux nephropathy, interstitial nephropathies, postobstructive uropathy, medullary cystic disease, and the recovery phase of acute tubular necrosis. Thiazide diuretics cause hyponatremia via a number of mechanisms, including polydipsia and diuretic-induced volume depletion.
Notably, thiazides do not inhibit the renal concentrating mechanism, such that circulating AVP retains a full effect on renal water retention. In contrast, loop diuretics, which are less frequently associated with hyponatremia, inhibit Na+-Cl– and K+ absorption by the TALH, blunting the countercurrent mechanism and reducing the ability to concentrate the urine. Increased excretion of an osmotically active nonreabsorbable or poorly reabsorbable solute can also lead to volume depletion and hyponatremia; important causes include glycosuria, ketonuria (e.g., in starvation or in diabetic or alcoholic ketoacidosis), and bicarbonaturia (e.g., in renal tubular acidosis or metabolic alkalosis, where the associated bicarbonaturia leads to loss of Na+). Finally, the syndrome of "cerebral salt wasting" is a rare cause of hypovolemic hyponatremia, encompassing hyponatremia with clinical hypovolemia and inappropriate natriuresis in association with intracranial disease; associated disorders include subarachnoid hemorrhage, traumatic brain injury, craniotomy, encephalitis, and meningitis. Distinction from the more common syndrome of inappropriate antidiuresis is critical because cerebral salt wasting will typically respond to aggressive Na+-Cl– repletion.

Hypervolemic Hyponatremia
Patients with hypervolemic hyponatremia develop an increase in total-body Na+-Cl– that is accompanied by a proportionately greater increase in total-body water, leading to a reduced plasma Na+ concentration. As in hypovolemic hyponatremia, the causative disorders can be separated by the effect on urine Na+ concentration, with acute or chronic renal failure uniquely associated with an increase in urine Na+ concentration (Fig. 63-5).
The pathophysiology of hyponatremia in the sodium-avid edematous disorders (congestive heart failure [CHF], cirrhosis, and nephrotic syndrome) is similar to that in hypovolemic hyponatremia, except that arterial filling and circulatory integrity are decreased due to the specific etiologic factors (e.g., cardiac dysfunction in CHF, peripheral vasodilation in cirrhosis). Urine Na+ concentration is typically very low, i.e., <10 mM, even after hydration with normal saline; this Na+-avid state may be obscured by diuretic therapy. The degree of hyponatremia provides an indirect index of the associated neurohumoral activation and is an important prognostic indicator in hypervolemic hyponatremia.

Euvolemic Hyponatremia
Euvolemic hyponatremia can occur in moderate to severe hypothyroidism, with correction after achieving a euthyroid state. Severe hyponatremia can also be a consequence of secondary adrenal insufficiency due to pituitary disease; whereas the deficit in circulating aldosterone in primary adrenal insufficiency causes hypovolemic hyponatremia, the predominant glucocorticoid deficiency in secondary adrenal failure is associated with euvolemic hyponatremia. Glucocorticoids exert a negative feedback on AVP release by the posterior pituitary such that hydrocortisone replacement in these patients will rapidly normalize the AVP response to osmolality, reducing circulating AVP. The syndrome of inappropriate antidiuresis (SIAD) is the most frequent cause of euvolemic hyponatremia (Table 63-1). The generation of hyponatremia in SIAD requires an intake of free water, with persistent intake at serum osmolalities that are lower than the usual threshold for thirst; as one would expect, the osmotic threshold and osmotic response curves for the sensation of thirst are shifted downward in patients with SIAD. Four distinct patterns of AVP secretion have been recognized in patients with SIAD, independent for the most part of the underlying cause.
Unregulated, erratic AVP secretion is seen in about a third of patients, with no obvious correlation between serum osmolality and circulating AVP levels. Other patients fail to suppress AVP secretion at lower serum osmolalities, with a normal response curve to hyperosmolar conditions; others have a "reset osmostat," with a lower threshold osmolality and a left-shifted osmotic response curve. Finally, the fourth subset of patients has essentially no detectable circulating AVP, suggesting either a gain in function in renal water reabsorption or a circulating antidiuretic substance that is distinct from AVP. Gain-in-function mutations of a single specific residue in the V2 AVP receptor have been described in some of these patients, leading to constitutive activation of the receptor in the absence of AVP and "nephrogenic" SIAD. Strictly speaking, patients with SIAD are not euvolemic but are subclinically volume-expanded, due to AVP-induced water and Na+-Cl– retention; "AVP escape" mechanisms invoked by sustained increases in AVP serve to limit distal renal tubular transport, preserving a modestly hypervolemic steady state.

Table 63-1 Causes of the Syndrome of Inappropriate Antidiuresis (column headings: Disorders of the Central Nervous System; Malignant Diseases; Pulmonary Disorders; Drugs; Other Causes)
…ated with positive-pressure breathing; porphyria
Drugs that stimulate release of AVP or enhance its action: Chlorpropamide, SSRIs, Tricyclic antidepressants, Clofibrate, Carbamazepine, Vincristine, Nicotine, Narcotics, Antipsychotic drugs, Ifosfamide, MDMA ("ecstasy")
AVP analogues: Desmopressin, Oxytocin, Vasopressin
Other causes: Hereditary (gain-of-function mutations in the vasopressin V2 receptor); Idiopathic; Transient (endurance exercise, general anesthesia, nausea, pain, stress)
Abbreviations: AVP, vasopressin; MDMA, 3,4-methylenedioxymethamphetamine; SSRI, selective serotonin reuptake inhibitor.
Source: From DH Ellison, T Berl: Syndrome of inappropriate antidiuresis. N Engl J Med 356:2064, 2007.
Serum uric acid is often low (<4 mg/dL) in patients with SIAD, consistent with suppressed proximal tubular transport in the setting of increased distal tubular Na+-Cl– and water transport; in contrast, patients with hypovolemic hyponatremia will often be hyperuricemic, due to a shared activation of proximal tubular Na+-Cl– and urate transport. Common causes of SIAD include pulmonary disease (e.g., pneumonia, tuberculosis, pleural effusion) and central nervous system (CNS) diseases (e.g., tumor, subarachnoid hemorrhage, meningitis). SIAD also occurs with malignancies, most commonly with small-cell lung carcinoma (75% of malignancy-associated SIAD); ~10% of patients with this tumor will have a plasma Na+ concentration of <130 mM at presentation. SIAD is also a frequent complication of certain drugs, most commonly the selective serotonin reuptake inhibitors (SSRIs). Other drugs can potentiate the renal effect of AVP, without exerting direct effects on circulating AVP levels (Table 63-1). Low Solute Intake and Hyponatremia Hyponatremia can occasionally occur in patients with a very low intake of dietary solutes. Classically, this occurs in alcoholics whose sole nutrient is beer, hence the diagnostic label of beer potomania; beer is very low in protein and salt content, containing only 1–2 mM of Na+. The syndrome has also been described in nonalcoholic patients with highly restricted solute intake due to nutrient-restricted diets, e.g., extreme vegetarian diets. Patients with hyponatremia due to low solute intake typically present with a very low urine osmolality (<100–200 mOsm/kg) with a urine Na+ concentration that is <10–20 mM. The fundamental abnormality is the inadequate dietary intake of solutes; the reduced urinary solute excretion limits water excretion such that hyponatremia ensues after relatively modest polydipsia. 
AVP levels have not been reported in patients with beer potomania but are expected to be suppressed or rapidly suppressible with saline hydration; this fits with the overly rapid correction in plasma Na+ concentration that can be seen with saline hydration. Resumption of a normal diet and/or saline hydration will also correct the causative deficit in urinary solute excretion, such that patients with beer potomania typically correct their plasma Na+ concentration promptly after admission to the hospital.

Clinical Features of Hyponatremia
Hyponatremia induces generalized cellular swelling, a consequence of water movement down the osmotic gradient from the hypotonic ECF to the ICF. The symptoms of hyponatremia are primarily neurologic, reflecting the development of cerebral edema within a rigid skull. The initial CNS response to acute hyponatremia is an increase in interstitial pressure, leading to shunting of ECF and solutes from the interstitial space into the cerebrospinal fluid and then on into the systemic circulation. This is accompanied by an efflux of the major intracellular ions, Na+, K+, and Cl–, from brain cells. Acute hyponatremic encephalopathy ensues when these volume regulatory mechanisms are overwhelmed by a rapid decrease in tonicity, resulting in acute cerebral edema. Early symptoms can include nausea, headache, and vomiting. However, severe complications can rapidly evolve, including seizure activity, brainstem herniation, coma, and death. A key complication of acute hyponatremia is normocapnic or hypercapnic respiratory failure; the associated hypoxia may amplify the neurologic injury. Normocapnic respiratory failure in this setting is typically due to noncardiogenic, "neurogenic" pulmonary edema, with a normal pulmonary capillary wedge pressure. Acute symptomatic hyponatremia is a medical emergency, occurring in a number of specific settings (Table 63-2).
Women, particularly before menopause, are much more likely than men to develop encephalopathy and severe neurologic sequelae. Acute hyponatremia often has an iatrogenic component, e.g., when hypotonic intravenous fluids are given to postoperative patients with an increase in circulating AVP. Exercise-associated hyponatremia, an important clinical issue at marathons and other endurance events, has similarly been linked to both a "nonosmotic" increase in circulating AVP and excessive free water intake. The recreational drugs Molly and ecstasy, which share an active ingredient (MDMA, 3,4-methylenedioxymethamphetamine), cause a rapid and potent induction of both thirst and AVP, leading to severe acute hyponatremia.

Table 63-2 Causes of Acute Hyponatremia
Postoperative: premenopausal women
Hypotonic fluids with a cause of ↑ vasopressin
Glycine irrigation: TURP, uterine surgery
Recent institution of thiazides
MDMA ("ecstasy," "Molly") ingestion
Multifactorial, e.g., thiazide and polydipsia
Abbreviations: MDMA, 3,4-methylenedioxymethamphetamine; TURP, transurethral resection of the prostate.

Persistent, chronic hyponatremia results in an efflux of organic osmolytes (creatine, betaine, glutamate, myoinositol, and taurine) from brain cells; this response reduces intracellular osmolality and the osmotic gradient favoring water entry. This reduction in intracellular osmolytes is largely complete within 48 h, the time period that clinically defines chronic hyponatremia; this temporal definition has considerable relevance for the treatment of hyponatremia (see below). The cellular response to chronic hyponatremia does not fully protect patients from symptoms, which can include vomiting, nausea, confusion, and seizures, usually at plasma Na+ concentration <125 mM. Even patients who are judged "asymptomatic" can manifest subtle gait and cognitive defects that reverse with correction of hyponatremia; notably, chronic "asymptomatic" hyponatremia increases the risk of falls.
Chronic hyponatremia also increases the risk of bony fractures owing to the associated neurologic dysfunction and to a hyponatremia-associated reduction in bone density. Therefore, every attempt should be made to safely correct the plasma Na+ concentration in patients with chronic hyponatremia, even in the absence of overt symptoms (see the section on treatment of hyponatremia below). The management of chronic hyponatremia is complicated significantly by the asymmetry of the cellular response to correction of plasma Na+ concentration. Specifically, the reaccumulation of organic osmolytes by brain cells is attenuated and delayed as osmolality increases after correction of hyponatremia, sometimes resulting in degenerative loss of oligodendrocytes and an osmotic demyelination syndrome (ODS). Overly rapid correction of hyponatremia (>8–10 mM in 24 h or 18 mM in 48 h) is also associated with a disruption in integrity of the blood-brain barrier, allowing the entry of immune mediators that may contribute to demyelination. The lesions of ODS classically affect the pons, a structure wherein the delay in the reaccumulation of organic osmolytes is particularly pronounced; clinically, patients with central pontine myelinolysis can present 1 or more days after overcorrection of hyponatremia with paraparesis or quadriparesis, dysphagia, dysarthria, diplopia, a "locked-in syndrome," and/or loss of consciousness. Other regions of the brain can also be involved in ODS, most commonly in association with lesions of the pons but occasionally in isolation; in order of frequency, the lesions of extrapontine myelinolysis can occur in the cerebellum, lateral geniculate body, thalamus, putamen, and cerebral cortex or subcortex. Clinical presentation of ODS can, therefore, vary as a function of the extent and localization of extrapontine myelinolysis, with the reported development of ataxia, mutism, parkinsonism, dystonia, and catatonia.
Relowering of plasma Na+ concentration after overly rapid correction can prevent or attenuate ODS (see the section on treatment of hyponatremia below). However, even appropriately slow correction can be associated with ODS, particularly in patients with additional risk factors; these include alcoholism, malnutrition, hypokalemia, and liver transplantation.

Diagnostic Evaluation of Hyponatremia
Clinical assessment of hyponatremic patients should focus on the underlying cause; a detailed drug history is particularly crucial (Table 63-1). A careful clinical assessment of volume status is obligatory for the classical diagnostic approach to hyponatremia (Fig. 63-5). Hyponatremia is frequently multifactorial, particularly when severe; clinical evaluation should consider all the possible causes for excessive circulating AVP, including volume status, drugs, and the presence of nausea and/or pain. Radiologic imaging may also be appropriate to assess whether patients have a pulmonary or CNS cause for hyponatremia. A screening chest x-ray may fail to detect a small-cell carcinoma of the lung; computed tomography (CT) scanning of the thorax should be considered in patients at high risk for this tumor (e.g., patients with a smoking history). Laboratory investigation should include a measurement of serum osmolality to exclude pseudohyponatremia, which is defined as the coexistence of hyponatremia with a normal or increased plasma tonicity. Most clinical laboratories measure plasma Na+ concentration by testing diluted samples with automated ion-sensitive electrodes, correcting for this dilution by assuming that plasma is 93% water. This correction factor can be inaccurate in patients with pseudohyponatremia due to extreme hyperlipidemia and/or hyperproteinemia, in whom serum lipid or protein makes up a greater percentage of plasma volume.
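A minimal numerical sketch of why the fixed 93% plasma-water assumption produces pseudohyponatremia: an indirect (diluted-sample) method effectively reports Na+ per liter of whole plasma, so a lower plasma water fraction lowers the reported value even though the water-phase Na+ concentration, and hence tonicity, is unchanged. The model below is deliberately simplified and illustrative, not a description of any specific analyzer:

```python
def indirect_ise_na(water_phase_na_mM, plasma_water_fraction):
    """Na+ as reported per liter of whole plasma by a diluted-sample method.

    Simplified illustrative model: the reported value scales with the
    plasma water fraction, which the assay implicitly assumes to be 0.93.
    """
    return water_phase_na_mM * plasma_water_fraction

# Water-phase Na+ of normal plasma (~150 mM) chosen so that a normal
# water fraction of 0.93 reports the familiar 140 mM:
water_na = 140 / 0.93

print(round(indirect_ise_na(water_na, 0.93)))  # normal plasma
print(round(indirect_ise_na(water_na, 0.80)))  # extreme hyperlipidemia or
                                               # paraproteinemia: reported Na+
                                               # falls although tonicity is normal
```

A direct (undiluted) ion-selective electrode, which senses activity in the water phase, would not show this artifact; that is why measured serum osmolality is the screening test recommended in the text.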
The measured osmolality should also be converted to the effective osmolality (tonicity) by subtracting the measured concentration of urea (divided by 2.8, if in mg/dL); patients with hyponatremia have an effective osmolality of <275 mOsm/kg. Elevated BUN and creatinine in routine chemistries can also indicate renal dysfunction as a potential cause of hyponatremia, whereas hyperkalemia may suggest adrenal insufficiency or hypoaldosteronism. Serum glucose should also be measured; plasma Na+ concentration falls by ~1.6–2.4 mM for every 100-mg/dL increase in glucose, due to glucose-induced water efflux from cells; this “true” hyponatremia resolves after correction of hyperglycemia. Measurement of serum uric acid should also be performed; whereas patients with SIAD-type physiology will typically be hypouricemic (serum uric acid <4 mg/dL), volume-depleted patients will often be hyperuricemic. In the appropriate clinical setting, thyroid, adrenal, and pituitary function should also be tested; hypothyroidism and secondary adrenal failure due to pituitary insufficiency are important causes of euvolemic hyponatremia, whereas primary adrenal failure causes hypovolemic hyponatremia. A cosyntropin stimulation test is necessary to assess for primary adrenal insufficiency. Urine electrolytes and osmolality are crucial tests in the initial evaluation of hyponatremia. A urine Na+ concentration <20–30 mM is consistent with hypovolemic hyponatremia, in the clinical absence of a hypervolemic, Na+-avid syndrome such as CHF (Fig. 63-5). In contrast, patients with SIAD will typically excrete urine with an Na+ concentration that is >30 mM. However, there can be substantial overlap in urine Na+ concentration values in patients with SIAD and hypovolemic hyponatremia, particularly in the elderly; the ultimate “gold standard” for the diagnosis of hypovolemic hyponatremia is the demonstration that plasma Na+ concentration corrects after hydration with normal saline. 
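The two corrections described above, subtracting urea to obtain the effective osmolality and adjusting the measured Na+ for hyperglycemia, are simple arithmetic. In the sketch below, the 100 mg/dL glucose baseline is an assumed reference value, and the 1.6 factor is the lower bound of the 1.6–2.4 range quoted in the text:

```python
def effective_osmolality(measured_osm, bun_mg_dl):
    """Tonicity = measured osmolality minus urea.

    Dividing BUN (mg/dL) by 2.8 converts it to mmol/L, as in the text.
    """
    return measured_osm - bun_mg_dl / 2.8

def glucose_corrected_na(measured_na_mM, glucose_mg_dl, factor=1.6):
    """Expected plasma Na+ after correction of hyperglycemia.

    The text quotes a fall of ~1.6-2.4 mM per 100 mg/dL rise in glucose;
    a baseline glucose of 100 mg/dL is assumed here for illustration.
    """
    return measured_na_mM + factor * max(glucose_mg_dl - 100, 0) / 100

print(effective_osmolality(280, 56))   # 260 mOsm/kg: hypotonic (<275)
print(glucose_corrected_na(130, 600))  # 138 mM with the 1.6 factor
```

Note that with the 2.4 factor the same patient would correct to 142 mM, i.e., no true hyponatremia at all; the quoted range matters clinically.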
Patients with thiazide-associated hyponatremia may also present with higher than expected urine Na+ concentration and other findings suggestive of SIAD; one should defer making a diagnosis of SIAD in these patients until 1–2 weeks after discontinuing the thiazide. A urine osmolality <100 mOsm/kg is suggestive of polydipsia; urine osmolality >400 mOsm/kg indicates that AVP excess is playing a more dominant role, whereas intermediate values are more consistent with multifactorial pathophysiology (e.g., AVP excess with a significant component of polydipsia). Patients with hyponatremia due to decreased solute intake (beer potomania) typically have urine Na+ concentration <20 mM and urine osmolality in the range of <100 to the low 200s. Finally, the measurement of urine K+ concentration is required to calculate the urine-to-plasma electrolyte ratio, which is useful to predict the response to fluid restriction (see the section on treatment of hyponatremia below).

TREATMENT: HYPONATREMIA
Three major considerations guide the therapy of hyponatremia. First, the presence and/or severity of symptoms determine the urgency and goals of therapy. Patients with acute hyponatremia (Table 63-2) present with symptoms that can range from headache, nausea, and/or vomiting to seizures, obtundation, and central herniation; patients with chronic hyponatremia, present for >48 h, are less likely to have severe symptoms. Second, patients with chronic hyponatremia are at risk for ODS if plasma Na+ concentration is corrected by >8–10 mM within the first 24 h and/or by >18 mM within the first 48 h. Third, the response to interventions such as hypertonic saline, isotonic saline, or AVP antagonists can be highly unpredictable, such that frequent monitoring of plasma Na+ concentration during corrective therapy is imperative. Once the urgency in correcting the plasma Na+ concentration has been established and appropriate therapy instituted, the focus should be on treatment or withdrawal of the underlying cause.
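The urine-to-plasma electrolyte ratio mentioned above, together with the fluid-restriction tiers given in the text (<500 mL/d for a ratio >1, 500–700 mL/d for a ratio of approximately 1, <1 L/d for a ratio <1), can be sketched as follows. The helper name and the ±0.1 band used to operationalize "approximately 1" are assumptions made here for illustration:

```python
def fluid_restriction(urine_na, urine_k, plasma_na):
    """Urine-to-plasma electrolyte ratio, (U[Na+] + U[K+]) / P[Na+],
    mapped to the restriction tiers given in the text.

    The +/-0.1 band around 1 is an illustrative assumption; the text
    simply says "a ratio of ~1".
    """
    ratio = (urine_na + urine_k) / plasma_na
    if ratio > 1.1:
        advice = "<500 mL/d"
    elif ratio >= 0.9:  # "~1"
        advice = "500-700 mL/d"
    else:
        advice = "<1 L/d"
    return ratio, advice

# SIAD-like patient: urine Na+ 110 mM, urine K+ 40 mM, plasma Na+ 120 mM
ratio, advice = fluid_restriction(110, 40, 120)
print(round(ratio, 2), advice)  # 1.25 <500 mL/d
```

A ratio >1 means the urine is effectively returning more electrolyte than it removes relative to plasma, so little electrolyte-free water is being excreted and restriction must be more aggressive.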
Patients with euvolemic hyponatremia due to SIAD, hypothyroidism, or secondary adrenal failure will respond to successful treatment of the underlying cause, with an increase in plasma Na+ concentration. However, not all causes of SIAD are immediately reversible, necessitating pharmacologic therapy to increase the plasma Na+ concentration (see below). Hypovolemic hyponatremia will respond to intravenous hydration with isotonic normal saline, with a rapid reduction in circulating AVP and a brisk water diuresis; it may be necessary to reduce the rate of correction if the history suggests that hyponatremia has been chronic, i.e., present for more than 48 h (see below). Hypervolemic hyponatremia due to CHF will often respond to improved therapy of the underlying cardiomyopathy, e.g., following the institution or intensification of angiotensin-converting enzyme (ACE) inhibition. Finally, patients with hyponatremia due to beer potomania and low solute intake will respond very rapidly to intravenous saline and the resumption of a normal diet. Notably, patients with beer potomania have a very high risk of developing ODS, due to the associated hypokalemia, alcoholism, malnutrition, and high risk of overcorrecting the plasma Na+ concentration.

Water deprivation has long been a cornerstone of the therapy of chronic hyponatremia. However, patients who are excreting minimal electrolyte-free water will require aggressive fluid restriction; this can be very difficult for patients with SIAD to tolerate, given that their thirst is also inappropriately stimulated. The urine-to-plasma electrolyte ratio (urinary [Na+] + [K+]/plasma [Na+]) can be exploited as a quick indicator of electrolyte-free water excretion (Table 63-3); patients with a ratio of >1 should be more aggressively restricted (<500 mL/d), those with a ratio of ~1 should be restricted to 500–700 mL/d, and those with a ratio of <1 should be restricted to <1 L/d.

PART 2 Cardinal Manifestations and Presentation of Diseases

TABLE 63-3 Management of Hypernatremia

Water deficit
1. Estimate total-body water (TBW): 50% of body weight in women and 60% in men
2. Calculate free-water deficit: [(Na+ − 140)/140] × TBW
3. Administer deficit over 48–72 h, without decreasing the plasma Na+ concentration by >10 mM/24 h

Ongoing water losses
4. Calculate electrolyte-free water clearance, CeH2O: CeH2O = V × (1 − (UNa + UK)/PNa), where V is urinary volume, UNa is urinary [Na+], UK is urinary [K+], and PNa is plasma [Na+]

Insensible losses
5. ~10 mL/kg per day: less if ventilated, more if febrile

Total
6. Add components to determine the water deficit and ongoing water loss; correct the water deficit over 48–72 h and replace daily water loss. Avoid correction of plasma [Na+] by >10 mM/d.

In hypokalemic patients, potassium replacement will serve to increase plasma Na+ concentration, given that the plasma Na+ concentration is a function of both exchangeable Na+ and exchangeable K+ divided by total-body water; a corollary is that aggressive repletion of K+ has the potential to overcorrect the plasma Na+ concentration even in the absence of hypertonic saline. Plasma Na+ concentration will also tend to respond to an increase in dietary solute intake, which increases the ability to excrete free water; however, the use of oral urea and/or salt tablets for this purpose is generally not practical or well tolerated. Patients in whom therapy with fluid restriction, potassium replacement, and/or increased solute intake fails may merit pharmacologic therapy to increase their plasma Na+ concentration. Many patients with SIAD respond to combined therapy with oral furosemide, 20 mg twice a day (higher doses may be necessary in renal insufficiency), and oral salt tablets; furosemide serves to inhibit the renal countercurrent mechanism and blunt urinary concentrating ability, whereas the salt tablets counteract diuretic-associated natriuresis. Demeclocycline is a potent inhibitor of AVP-stimulated water reabsorption in principal cells and can be used in patients whose Na+ levels do not increase in response to furosemide and salt tablets.
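The arithmetic of Table 63-3 and the urine-to-plasma electrolyte ratio can be sketched in Python (a minimal illustration only; the function names are ours, the restriction bands encode the cutoffs quoted in the text, and the ±0.1 tolerance for a ratio of "~1" is our assumption):

```python
def free_water_deficit(weight_kg, plasma_na, female=True):
    """Items 1-2 of Table 63-3: free-water deficit in liters.
    TBW is taken as 50% of body weight in women, 60% in men."""
    tbw = weight_kg * (0.5 if female else 0.6)
    return (plasma_na - 140) / 140 * tbw

def electrolyte_free_water_clearance(urine_vol_l, urine_na, urine_k, plasma_na):
    """Item 4 of Table 63-3: CeH2O = V x (1 - (UNa + UK)/PNa).
    Positive values represent ongoing free-water loss to be replaced daily."""
    return urine_vol_l * (1 - (urine_na + urine_k) / plasma_na)

def suggested_fluid_restriction(urine_na, urine_k, plasma_na, tolerance=0.1):
    """Urine-to-plasma electrolyte ratio as a bedside guide to fluid
    restriction in hyponatremia (bands per the text; tolerance is ours)."""
    ratio = (urine_na + urine_k) / plasma_na
    if ratio > 1 + tolerance:
        return "<500 mL/d"
    if ratio >= 1 - tolerance:
        return "500-700 mL/d"
    return "<1 L/d"

# A 70-kg man with plasma Na+ of 154 mM has a free-water deficit of
# (14/140) x 42 L = 4.2 L, to be corrected over 48-72 h.
print(round(free_water_deficit(70, 154, female=False), 1))
```

These are deliberately crude point estimates; as the text emphasizes, they guide the starting prescription and do not replace serial measurement of the plasma Na+ concentration.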
However, this agent can be associated with a reduction in GFR, due to excessive natriuresis and/or direct renal toxicity; it should be avoided in cirrhotic patients in particular, who are at higher risk of nephrotoxicity due to drug accumulation. AVP antagonists (vaptans) are highly effective in SIAD and in hypervolemic hyponatremia due to heart failure or cirrhosis, reliably increasing plasma Na+ concentration due to their “aquaretic” effects (augmentation of free water clearance). Most of these agents specifically antagonize the V2 AVP receptor; tolvaptan is currently the only oral V2 antagonist to be approved by the U.S. Food and Drug Administration. Conivaptan, the only available intravenous vaptan, is a mixed V1A/V2 antagonist, with a modest risk of hypotension due to V1A receptor inhibition. Therapy with vaptans must be initiated in a hospital setting, with a liberalization of fluid restriction (>2 L/d) and close monitoring of plasma Na+ concentration. Although approved for the management of all but hypovolemic hyponatremia and acute hyponatremia, the clinical indications for these agents are not completely clear. Oral tolvaptan is perhaps most appropriate for the management of significant and persistent SIAD (e.g., in small-cell lung carcinoma) that has not responded to water restriction and/or oral furosemide and salt tablets. Abnormalities in liver function tests have been reported with chronic tolvaptan therapy; hence, the use of this agent should be restricted to <1–2 months. Treatment of acute symptomatic hyponatremia should include hypertonic 3% saline (513 mM) to acutely increase plasma Na+ concentration by 1–2 mM/h to a total of 4–6 mM; this modest increase is typically sufficient to alleviate severe acute symptoms, after which corrective guidelines for chronic hyponatremia are appropriate (see below). A number of equations have been developed to estimate the required rate of hypertonic saline, which has an Na+-Cl– concentration of 513 mM. 
The traditional approach is to calculate an Na+ deficit, where the Na+ deficit = 0.6 × body weight × (target plasma Na+ concentration – starting plasma Na+ concentration), followed by a calculation of the required rate. Regardless of the method used to determine the rate of administration, the increase in plasma Na+ concentration can be highly unpredictable during treatment with hypertonic saline, due to rapid changes in the underlying physiology; plasma Na+ concentration should be monitored every 2–4 h during treatment, with appropriate changes in therapy based on the observed rate of change. The administration of supplemental oxygen and ventilatory support is also critical in acute hyponatremia, in the event that patients develop acute pulmonary edema or hypercapnic respiratory failure. Intravenous loop diuretics will help treat acute pulmonary edema and will also increase free water excretion, by interfering with the renal countercurrent multiplication system. AVP antagonists do not have an approved role in the management of acute hyponatremia. The rate of correction should be comparatively slow in chronic hyponatremia (<8–10 mM in the first 24 h and <18 mM in the first 48 h), so as to avoid ODS; lower target rates are appropriate in patients at particular risk for ODS, such as alcoholics or hypokalemic patients. Overcorrection of the plasma Na+ concentration can occur when AVP levels rapidly normalize, for example, following the treatment of patients with chronic hypovolemic hyponatremia with intravenous saline or following glucocorticoid replacement in patients with hypopituitarism and secondary adrenal failure. Approximately 10% of patients treated with vaptans will overcorrect; the risk is increased if water intake is not liberalized.
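The traditional calculation can be illustrated as follows (a sketch under our own naming; the 513 mM figure for 3% saline and the 0.6 × body weight estimate of total-body water are taken from the text):

```python
SALINE_3PCT_NA_MM = 513  # Na+-Cl- concentration of 3% saline, in mM

def sodium_deficit_mmol(weight_kg, plasma_na, target_na, tbw_fraction=0.6):
    """Traditional Na+ deficit = 0.6 x body weight x (target - starting)."""
    return tbw_fraction * weight_kg * (target_na - plasma_na)

def hypertonic_saline_volume_l(na_deficit_mmol):
    """Volume of 3% saline (513 mM) that carries the calculated deficit."""
    return na_deficit_mmol / SALINE_3PCT_NA_MM

# Acute symptomatic hyponatremia in a 70-kg patient with Na+ 110 mM:
# target an initial 4-mM increase (to 114 mM), delivered at 1-2 mM/h.
deficit = sodium_deficit_mmol(70, 110, 114)   # 168 mmol
volume = hypertonic_saline_volume_l(deficit)  # ~0.33 L of 3% saline
```

Whatever the calculated rate, the text's caveat stands: the actual rise in plasma Na+ concentration is unpredictable, so it must be re-measured every 2–4 h and the infusion adjusted accordingly.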
In the event that the plasma Na+ concentration overcorrects following therapy, be it with hypertonic saline, isotonic saline, or a vaptan, hyponatremia can be safely reinduced or stabilized by the administration of the AVP agonist desmopressin acetate (DDAVP) and/or the administration of free water, typically intravenous D5W; the goal is to prevent or reverse the development of ODS. Alternatively, the treatment of patients with marked hyponatremia can be initiated with the twice-daily administration of DDAVP to maintain constant AVP bioactivity, combined with the administration of hypertonic saline to slowly correct the serum sodium in a more controlled fashion, thus reducing upfront the risk of overcorrection.

HYPERNATREMIA Etiology Hypernatremia is defined as an increase in the plasma Na+ concentration to >145 mM. Considerably less common than hyponatremia, hypernatremia is nonetheless associated with mortality rates as high as 40–60%, mostly due to the severity of the associated underlying disease processes. Hypernatremia is usually the result of a combined water and electrolyte deficit, with losses of H2O in excess of Na+. Less frequently, the ingestion or iatrogenic administration of excess Na+ can be causative, for example after IV administration of excessive hypertonic Na+-Cl– or Na+-HCO3– (Fig. 63-6). Elderly individuals with reduced thirst and/or diminished access to fluids are at the highest risk of developing hypernatremia. Patients with hypernatremia may rarely have a central defect in hypothalamic osmoreceptor function, with a mixture of both decreased thirst and reduced AVP secretion. Causes of this adipsic diabetes insipidus include primary or metastatic tumor, occlusion or ligation of the anterior communicating artery, trauma, hydrocephalus, and inflammation.

Figure 63-6 The diagnostic approach to hypernatremia. ECF, extracellular fluid.

Hypernatremia can develop following the loss of water via both renal and nonrenal routes.
Insensible losses of water may increase in the setting of fever, exercise, heat exposure, severe burns, or mechanical ventilation. Diarrhea is, in turn, the most common gastrointestinal cause of hypernatremia. Notably, osmotic diarrhea and viral gastroenteritides typically generate stools with Na+ and K+ <100 mM, thus leading to water loss and hypernatremia; in contrast, secretory diarrhea typically results in isotonic stool and thus hypovolemia with or without hypovolemic hyponatremia. Common causes of renal water loss include osmotic diuresis secondary to hyperglycemia, excess urea, postobstructive diuresis, or mannitol; these disorders share an increase in urinary solute excretion and urinary osmolality (see “Diagnostic Approach,” below). Hypernatremia due to a water diuresis occurs in central or nephrogenic diabetes insipidus (DI). Nephrogenic DI (NDI) is characterized by renal resistance to AVP, which can be partial or complete (see “Diagnostic Approach,” below). Genetic causes include loss-of-function mutations in the X-linked V2 receptor; mutations in the AVP-responsive aquaporin-2 water channel can cause autosomal recessive and autosomal dominant NDI, whereas recessive deficiency of the aquaporin-1 water channel causes a more modest concentrating defect (Fig. 63-2). Hypercalcemia can also cause polyuria and NDI; calcium signals directly through the calcium-sensing receptor to downregulate Na+, K+, and Cl– transport by the TALH and water transport in principal cells, thus reducing renal concentrating ability in hypercalcemia. Another common acquired cause of NDI is hypokalemia, which inhibits the renal response to AVP and downregulates aquaporin-2 expression. Several drugs can cause acquired NDI, in particular lithium, ifosfamide, and several antiviral agents. 
Lithium causes NDI by multiple mechanisms, including direct inhibition of renal glycogen synthase kinase-3 (GSK3), a kinase thought to be the pharmacologic target of lithium in bipolar disease; GSK3 is required for the response of principal cells to AVP. The entry of lithium through the amiloride-sensitive Na+ channel ENaC (Fig. 63-4) is required for the effect of the drug on principal cells, such that combined therapy with lithium and amiloride can mitigate lithium-associated NDI. However, lithium causes chronic tubulointerstitial scarring and chronic kidney disease after prolonged therapy, such that patients may have a persistent NDI long after stopping the drug, with a reduced therapeutic benefit from amiloride. Finally, gestational DI is a rare complication of late-term pregnancy wherein increased activity of a circulating placental protease with “vasopressinase” activity leads to reduced circulating AVP and polyuria, often accompanied by hypernatremia. DDAVP is an effective therapy for this syndrome, given its resistance to the vasopressinase enzyme.

Clinical Features Hypernatremia increases osmolality of the ECF, generating an osmotic gradient between the ECF and ICF, an efflux of intracellular water, and cellular shrinkage. As in hyponatremia, the symptoms of hypernatremia are predominantly neurologic. Altered mental status is the most frequent manifestation, ranging from mild confusion and lethargy to deep coma. The sudden shrinkage of brain cells in acute hypernatremia may lead to parenchymal or subarachnoid hemorrhages and/or subdural hematomas; however, these vascular complications are primarily encountered in pediatric and neonatal patients. Osmotic damage to muscle membranes can also lead to hypernatremic rhabdomyolysis.
Brain cells accommodate to a chronic increase in ECF osmolality (>48 h) by activating membrane transporters that mediate influx and intracellular accumulation of organic osmolytes (creatine, betaine, glutamate, myoinositol, and taurine); this results in an increase in ICF water and normalization of brain parenchymal volume. In consequence, patients with chronic hypernatremia are less likely to develop severe neurologic compromise. However, the cellular response to chronic hypernatremia predisposes these patients to the development of cerebral edema and seizures during overly rapid hydration (overcorrection of plasma Na+ concentration by >10 mM/d).

Diagnostic Approach The history should focus on the presence or absence of thirst, polyuria, and/or an extrarenal source for water loss, such as diarrhea. The physical examination should include a detailed neurologic exam and an assessment of the ECFV; patients with a particularly large water deficit and/or a combined deficit in electrolytes and water may be hypovolemic, with reduced JVP and orthostasis. Accurate documentation of daily fluid intake and daily urine output is also critical for the diagnosis and management of hypernatremia. Laboratory investigation should include a measurement of serum and urine osmolality, in addition to urine electrolytes. The appropriate response to hypernatremia and a serum osmolality >295 mOsm/kg is an increase in circulating AVP and the excretion of low volumes (<500 mL/d) of maximally concentrated urine, i.e., urine with osmolality >800 mOsm/kg; should this be the case, then an extrarenal source of water loss is primarily responsible for the generation of hypernatremia. Many patients with hypernatremia are polyuric; should an osmotic diuresis be responsible, with excessive excretion of Na+-Cl–, glucose, and/or urea, then daily solute excretion will be >750–1000 mOsm/d (>15 mOsm/kg body water per day) (Fig. 63-6).
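The diagnostic cutoff above can be illustrated with a short sketch (function names are ours; the >750–1000 mOsm/d band is the threshold quoted in the text, and the classification is deliberately crude):

```python
def daily_solute_excretion(urine_osm_mosm_kg, urine_vol_l_per_d):
    """Daily osmolar excretion (mOsm/d) = urine osmolality x daily urine volume."""
    return urine_osm_mosm_kg * urine_vol_l_per_d

def classify_polyuria(urine_osm_mosm_kg, urine_vol_l_per_d):
    """Rough split of polyuria into an osmotic diuresis (excess solute
    excretion) versus a water diuresis, per the >750-1000 mOsm/d band."""
    solute = daily_solute_excretion(urine_osm_mosm_kg, urine_vol_l_per_d)
    if solute > 1000:
        return "osmotic diuresis"
    if solute > 750:
        return "indeterminate; index to body water (>15 mOsm/kg per day)"
    return "water diuresis"

# 4 L/d of urine at 600 mOsm/kg -> 2400 mOsm/d: an osmotic diuresis.
# 5 L/d of urine at 100 mOsm/kg -> 500 mOsm/d: a water diuresis.
```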
More commonly, patients with hypernatremia and polyuria will have a predominant water diuresis, with excessive excretion of hypotonic, dilute urine. Adequate differentiation between nephrogenic and central causes of DI requires the measurement of the response in urinary osmolality to DDAVP, combined with measurement of circulating AVP in the setting of hypertonicity. By definition, patients with baseline hypernatremia are hypertonic, with an adequate stimulus for AVP secretion by the posterior pituitary. Therefore, in contrast to polyuric patients with a normal or reduced baseline plasma Na+ concentration and osmolality, a water deprivation test (Chap. 61) is unnecessary in hypernatremia; indeed, water deprivation is absolutely contraindicated in this setting, given the risk for worsening the hypernatremia. Patients with NDI will fail to respond to DDAVP, with a urine osmolality that increases by <50% or <150 mOsm/kg from baseline, in combination with a normal or high circulating AVP level; patients with central DI will respond to DDAVP, with a reduced circulating AVP. Patients may exhibit a partial response to DDAVP, with a >50% rise in urine osmolality that nonetheless fails to reach 800 mOsm/kg; the level of circulating AVP will help differentiate the underlying cause, i.e., NDI versus central DI. In pregnant patients, AVP assays should be drawn in tubes containing the protease inhibitor 1,10-phenanthroline, to prevent in vitro degradation of AVP by placental vasopressinase. For patients with hypernatremia due to renal loss of water, it is critical to quantify ongoing daily losses using the calculated electrolyte-free water clearance, in addition to calculation of the baseline water deficit (the relevant formulas are discussed in Table 63-3). This requires daily measurement of urine electrolytes, combined with accurate measurement of daily urine volume.
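The DDAVP-response logic described above can be sketched as follows (the function name is ours; the thresholds are those quoted in the text, and any partial response still requires a circulating AVP level to distinguish NDI from central DI):

```python
def ddavp_response(baseline_uosm, post_uosm):
    """Interpret the change in urine osmolality (mOsm/kg) after DDAVP
    in a hypernatremic, polyuric patient (thresholds per the text)."""
    rise = post_uosm - baseline_uosm
    pct_rise = 100 * rise / baseline_uosm
    if pct_rise < 50 or rise < 150:
        return "no response: nephrogenic DI (expect normal or high AVP)"
    if post_uosm >= 800:
        return "response: central DI (expect reduced AVP)"
    return "partial response: measure circulating AVP (NDI vs. central DI)"
```

Note that water deprivation is not part of this workup: the patient is already hypertonic, and deprivation would only worsen the hypernatremia.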
Treatment The underlying cause of hypernatremia should be withdrawn or corrected, be it drugs, hyperglycemia, hypercalcemia, hypokalemia, or diarrhea. The approach to the correction of hypernatremia is outlined in Table 63-3. It is imperative to correct hypernatremia slowly to avoid cerebral edema, typically replacing the calculated free-water deficit over 48 h. Notably, the plasma Na+ concentration should be corrected by no more than 10 mM/d, which may take longer than 48 h in patients with severe hypernatremia (>160 mM). A rare exception is patients with acute hypernatremia (<48 h) due to sodium loading, who can safely be corrected rapidly at a rate of 1 mM/h. Water should ideally be administered by mouth or by nasogastric tube, as the most direct way to provide free water, i.e., water without electrolytes. Alternatively, patients can receive free water in dextrose-containing IV solutions, such as 5% dextrose (D5W); blood glucose should be monitored in case hyperglycemia occurs. Depending on the history, blood pressure, or clinical volume status, it may be appropriate to initially treat with hypotonic saline solutions (1/4 or 1/2 normal saline); normal saline is usually inappropriate in the absence of very severe hypernatremia, where normal saline is proportionally more hypotonic relative to plasma, or frank hypotension. Calculation of urinary electrolyte-free water clearance (Table 63-3) is required to estimate daily, ongoing loss of free water in patients with NDI or central DI, which should be replenished daily. Additional therapy may be feasible in specific cases. Patients with central DI should respond to the administration of intravenous, intranasal, or oral DDAVP.
Patients with NDI due to lithium may reduce their polyuria with amiloride (2.5–10 mg/d), which decreases entry of lithium into principal cells by inhibiting ENaC (see above); in practice, however, most patients with lithium-associated DI are able to compensate for their polyuria by simply increasing their daily water intake. Thiazides may reduce polyuria due to NDI, ostensibly by inducing hypovolemia and increasing proximal tubular water reabsorption. Occasionally, nonsteroidal anti-inflammatory drugs (NSAIDs) have been used to treat polyuria associated with NDI, reducing the negative effect of intrarenal prostaglandins on urinary concentrating mechanisms; however, this assumes the risks of NSAID-associated gastric and/or renal toxicity. Furthermore, it must be emphasized that thiazides, amiloride, and NSAIDs are only appropriate for chronic management of polyuria from NDI and have no role in the acute management of associated hypernatremia, where the focus is on replacing free water deficits and ongoing free water loss.

POTASSIUM DISORDERS Plasma K+ concentration is normally maintained between 3.5 and 5.0 mM, despite marked variation in dietary K+ intake. In a healthy individual at steady state, the entire daily intake of potassium is excreted, approximately 90% in the urine and 10% in the stool; thus, the kidney plays a dominant role in potassium homeostasis. However, more than 98% of total-body potassium is intracellular, chiefly in muscle; buffering of extracellular K+ by this large intracellular pool plays a crucial role in the regulation of plasma K+ concentration. Changes in the exchange and distribution of intra- and extracellular K+ can thus lead to marked hypo- or hyperkalemia. A corollary is that massive necrosis and the attendant release of tissue K+ can cause severe hyperkalemia, particularly in the setting of acute kidney injury and reduced excretion of K+.
Changes in whole-body K+ content are primarily mediated by the kidney, which reabsorbs filtered K+ in hypokalemic, K+-deficient states and secretes K+ in hyperkalemic, K+-replete states. Although K+ is transported along the entire nephron, it is the principal cells of the connecting segment (CNT) and cortical CD that play a dominant role in renal K+ secretion, whereas alpha-intercalated cells of the outer medullary CD function in renal tubular reabsorption of filtered K+ in K+-deficient states. In principal cells, apical Na+ entry via the amiloride-sensitive ENaC generates a lumen-negative potential difference, which drives passive K+ exit through apical K+ channels (Fig. 63-4). Two major K+ channels mediate distal tubular K+ secretion: the secretory K+ channel ROMK (renal outer medullary K+ channel; also known as Kir1.1 or KcnJ1) and the flow-sensitive big potassium (BK) or maxi-K K+ channel. ROMK is thought to mediate the bulk of constitutive K+ secretion, whereas increases in distal flow rate and/or genetic absence of ROMK activate K+ secretion via the BK channel. An appreciation of the relationship between ENaC-dependent Na+ entry and distal K+ secretion (Fig. 63-4) is required for the bedside interpretation of potassium disorders. For example, decreased distal delivery of Na+, as occurs in hypovolemic, prerenal states, tends to blunt the ability to excrete K+, leading to hyperkalemia; on the other hand, an increase in distal delivery of Na+ and distal flow rate, as occurs after treatment with thiazide and loop diuretics, can enhance K+ secretion and lead to hypokalemia. Hyperkalemia is also a predictable consequence of drugs that directly inhibit ENaC, due to the role of this Na+ channel in generating a lumen-negative potential difference. Aldosterone in turn has a major influence on potassium excretion, increasing the activity of ENaC channels and thus amplifying the driving force for K+ secretion across the luminal membrane of principal cells. 
Abnormalities in the renin-angiotensin-aldosterone system can thus cause both hypokalemia and hyperkalemia. Notably, however, potassium excess and potassium restriction have opposing, aldosterone-independent effects on the density and activity of apical K+ channels in the distal nephron, i.e., factors other than aldosterone modulate the renal capacity to secrete K+. In addition, potassium restriction and hypokalemia activate aldosterone-independent distal reabsorption of filtered K+, activating apical H+/K+-ATPase activity in intercalated cells within the outer medullary CD. Reflective perhaps of this physiology, changes in plasma K+ concentration are not universal in disorders associated with changes in aldosterone activity.

HYPOKALEMIA Hypokalemia, defined as a plasma K+ concentration of <3.5 mM, occurs in up to 20% of hospitalized patients. Hypokalemia is associated with a 10-fold increase in in-hospital mortality, due to adverse effects on cardiac rhythm, blood pressure, and cardiovascular morbidity. Mechanistically, hypokalemia can be caused by redistribution of K+ between tissues and the ECF or by renal and nonrenal loss of K+ (Table 63-4). Systemic hypomagnesemia can also cause treatment-resistant hypokalemia, due to a combination of reduced cellular uptake of K+ and exaggerated renal secretion. Spurious hypokalemia or “pseudohypokalemia” can occasionally result from in vitro cellular uptake of K+ after venipuncture, for example, due to profound leukocytosis in acute leukemia.

Redistribution and Hypokalemia Insulin, β2-adrenergic activity, thyroid hormone, and alkalosis promote Na+/K+-ATPase-mediated cellular uptake of K+, leading to hypokalemia. Inhibition of the passive efflux of K+ can also cause hypokalemia, albeit rarely; this typically occurs in the setting of systemic inhibition of K+ channels by toxic barium ions. Exogenous insulin can cause iatrogenic hypokalemia, particularly during the management of K+-deficient states such as diabetic ketoacidosis.
Alternatively, the stimulation of endogenous insulin can provoke hypokalemia, hypomagnesemia, and/or hypophosphatemia in malnourished patients given a carbohydrate load. Alterations in the activity of the endogenous sympathetic nervous system can cause hypokalemia in several settings, including alcohol withdrawal, hyperthyroidism, acute myocardial infarction, and severe head injury. β2 agonists, including both bronchodilators and tocolytics (ritodrine), are powerful activators of cellular K+ uptake; “hidden” sympathomimetics, such as pseudoephedrine and ephedrine in cough syrup or dieting agents, may also cause unexpected hypokalemia. Finally, xanthine-dependent activation of cAMP-dependent signaling, downstream of the β2 receptor, can lead to hypokalemia, usually in the setting of overdose (theophylline) or marked overingestion (dietary caffeine). Redistributive hypokalemia can also occur in the setting of hyperthyroidism, with periodic attacks of hypokalemic paralysis (thyrotoxic periodic paralysis [TPP]). Similar episodes of hypokalemic weakness in the absence of thyroid abnormalities occur in familial hypokalemic periodic paralysis, usually caused by missense mutations of voltage sensor domains within the α1 subunit of L-type calcium channels or the skeletal Na+ channel; these mutations generate an abnormal gating pore current activated by hyperpolarization. TPP develops more frequently in patients of Asian or Hispanic origin; this shared predisposition has been linked to genetic variation in Kir2.6, a muscle-specific, thyroid hormone–responsive K+ channel. Patients with TPP typically present with weakness of the extremities and limb girdles, with paralytic episodes that occur most frequently between 1 and 6 a.m. Signs and symptoms of hyperthyroidism are not invariably present. Hypokalemia is usually profound and almost invariably accompanied by hypophosphatemia and hypomagnesemia.
The hypokalemia in TPP is attributed to both direct and indirect activation of the Na+/K+-ATPase, resulting in increased uptake of K+ by muscle and other tissues. Increases in β-adrenergic activity play an important role, in that high-dose propranolol (3 mg/kg) rapidly reverses the associated hypokalemia, hypophosphatemia, and paralysis.

Nonrenal Loss of Potassium The loss of K+ in sweat is typically low, except under extremes of physical exertion. Direct gastric losses of K+ due to vomiting or nasogastric suctioning are also minimal; however, the ensuing hypochloremic alkalosis results in persistent kaliuresis due to secondary hyperaldosteronism and bicarbonaturia, i.e., a renal loss of K+. Diarrhea is a globally important cause of hypokalemia, given the worldwide prevalence of infectious diarrheal disease. Noninfectious gastrointestinal processes such as celiac disease, ileostomy, villous adenomas, inflammatory bowel disease, colonic pseudo-obstruction (Ogilvie’s syndrome), VIPomas, and chronic laxative abuse can also cause significant hypokalemia; an exaggerated intestinal secretion of potassium by upregulated colonic BK channels has been directly implicated in the pathogenesis of hypokalemia in many of these disorders.

TABLE 63-4 Causes of Hypokalemia
I. Decreased intake
  A. Starvation
  B. Clay ingestion
II.
  A.
    1. Metabolic alkalosis
  B. Hormonal
    2. Increased β2-adrenergic sympathetic activity: post–myocardial infarction, head injury
    3. β2-Adrenergic agonists: bronchodilators, tocolytics
    6. Downstream stimulation of Na+/K+-ATPase: theophylline, caffeine
  C. Anabolic state
  D. Other
    4. Barium toxicity: systemic inhibition of “leak” K+ channels
III.
  A.
  B. Renal
    1. Increased distal flow and distal Na+ delivery: diuretics, osmotic diuresis, salt-wasting nephropathies
    2. Increased secretion of potassium
      a. Mineralocorticoid excess: primary hyperaldosteronism (aldosterone-producing adenomas, primary or unilateral adrenal hyperplasia, idiopathic hyperaldosteronism due to bilateral adrenal hyperplasia, and adrenal carcinoma), genetic hyperaldosteronism (familial hyperaldosteronism types I/II/III, congenital adrenal hyperplasias), secondary hyperaldosteronism (malignant hypertension, renin-secreting tumors, renal artery stenosis, hypovolemia), Cushing’s syndrome, Bartter’s syndrome, Gitelman’s syndrome
      b. Apparent mineralocorticoid excess: genetic deficiency of 11β-dehydrogenase-2 (syndrome of apparent mineralocorticoid excess), inhibition of 11β-dehydrogenase-2 (glycyrrhetinic/glycyrrhizinic acid and/or carbenoxolone; licorice, food products, drugs), Liddle’s syndrome (genetic activation of epithelial Na+ channels)
      c. Distal delivery of nonreabsorbed anions: vomiting, nasogastric suction, proximal renal tubular acidosis, diabetic ketoacidosis, glue-sniffing (toluene abuse), penicillin derivatives (penicillin, nafcillin, dicloxacillin, ticarcillin, oxacillin, and carbenicillin)
    3. Magnesium deficiency

Renal Loss of Potassium Drugs can increase renal K+ excretion by a variety of different mechanisms. Diuretics are a particularly common cause, due to associated increases in distal tubular Na+ delivery and distal tubular flow rate, in addition to secondary hyperaldosteronism. Thiazides have a greater effect on plasma K+ concentration than loop diuretics, despite their lesser natriuretic effect. The diuretic effect of thiazides is largely due to inhibition of the Na+-Cl– cotransporter NCC in DCT cells. This leads to a direct increase in the delivery of luminal Na+ to the principal cells immediately downstream in the CNT and cortical CD, which augments Na+ entry via ENaC, increases the lumen-negative potential difference, and amplifies K+ secretion.
The higher propensity of thiazides to cause hypokalemia may also be secondary to thiazide-associated hypocalciuria, versus the hypercalciuria seen with loop diuretics; the increases in downstream luminal calcium in response to loop diuretics inhibit ENaC in principal cells, thus reducing the lumen-negative potential difference and attenuating distal K+ excretion. High doses of penicillin-related antibiotics (nafcillin, dicloxacillin, ticarcillin, oxacillin, and carbenicillin) can increase obligatory K+ excretion by acting as nonreabsorbable anions in the distal nephron. Finally, several renal tubular toxins cause renal K+ and magnesium wasting, leading to hypokalemia and hypomagnesemia; these drugs include aminoglycosides, amphotericin, foscarnet, cisplatin, and ifosfamide (see also “Magnesium Deficiency and Hypokalemia,” below). Aldosterone activates ENaC in principal cells via multiple synergistic mechanisms, thus increasing the driving force for K+ excretion. In consequence, increases in aldosterone bioactivity and/or gain-of-function of aldosterone-dependent signaling pathways are associated with hypokalemia. Increases in circulating aldosterone (hyperaldosteronism) may be primary or secondary. Increased levels of circulating renin in secondary forms of hyperaldosteronism lead to increased angiotensin II and thus aldosterone; renal artery stenosis is perhaps the most frequent cause (Table 63-4). Primary hyperaldosteronism may be genetic or acquired. Hypertension and hypokalemia, due to increases in circulating 11-deoxycorticosterone, occur in patients with congenital adrenal hyperplasia caused by defects in either steroid 11β-hydroxylase or steroid 17α-hydroxylase; deficient 11β-hydroxylase results in associated virilization and other signs of androgen excess, whereas reduced sex steroids in 17α-hydroxylase deficiency lead to hypogonadism.
The major forms of isolated primary genetic hyperaldosteronism are familial hyperaldosteronism type I (FH-I, also known as glucocorticoid-remediable hyperaldosteronism [GRA]) and familial hyperaldosteronism types II and III (FH-II and FH-III), in which aldosterone production is not repressible by exogenous glucocorticoids. FH-I is caused by a chimeric gene duplication between the homologous 11β-hydroxylase (CYP11B1) and aldosterone synthase (CYP11B2) genes, fusing the adrenocorticotropic hormone (ACTH)–responsive 11β-hydroxylase promoter to the coding region of aldosterone synthase; this chimeric gene is under the control of ACTH and thus repressible by glucocorticoids. FH-III is caused by mutations in the KCNJ5 gene, which encodes the G-protein-activated inward rectifier K+ channel 4 (GIRK4); these mutations lead to the acquisition of sodium permeability in the mutant GIRK4 channels, causing an exaggerated membrane depolarization in adrenal glomerulosa cells and the activation of voltage-gated calcium channels. The resulting calcium influx is sufficient to produce aldosterone secretion and cell proliferation, leading to adrenal adenomas and hyperaldosteronism. Acquired causes of primary hyperaldosteronism include aldosterone-producing adenomas (APAs), primary or unilateral adrenal hyperplasia (PAH), idiopathic hyperaldosteronism (IHA) due to bilateral adrenal hyperplasia, and adrenal carcinoma; APA and IHA account for close to 60% and 40%, respectively, of diagnosed hyperaldosteronism. Acquired somatic mutations in KCNJ5 or less frequently in the ATP1A1 (an Na+/K+-ATPase α subunit) and ATP2B3 (a Ca2+-ATPase) genes can be detected in APAs; as in FH-III (see above), the exaggerated depolarization of adrenal glomerulosa cells caused by these mutations is implicated in the excessive adrenal proliferation and the exaggerated release of aldosterone.
Random testing of plasma renin activity (PRA) and aldosterone is a helpful screening tool in hypokalemic and/or hypertensive patients, with an aldosterone:PRA ratio of >50 suggestive of primary hyperaldosteronism. Hypokalemia and multiple antihypertensive drugs may alter the aldosterone:PRA ratio by suppressing aldosterone or increasing PRA, leading to a ratio of <50 in patients who do in fact have primary hyperaldosteronism; therefore, the clinical context should always be considered when interpreting these results. The glucocorticoid cortisol has an affinity for the mineralocorticoid receptor (MLR) equal to that of aldosterone, with resultant "mineralocorticoid-like" activity. However, cells in the aldosterone-sensitive distal nephron are protected from this "illicit" activation by the enzyme 11β-hydroxysteroid dehydrogenase-2 (11βHSD-2), which converts cortisol to cortisone; cortisone has minimal affinity for the MLR. Recessive loss-of-function mutations in the 11βHSD-2 gene are thus associated with cortisol-dependent activation of the MLR and the syndrome of apparent mineralocorticoid excess (SAME), encompassing hypertension, hypokalemia, hypercalciuria, and metabolic alkalosis, with suppressed PRA and suppressed aldosterone. A similar syndrome is caused by biochemical inhibition of 11βHSD-2 by glycyrrhetinic/glycyrrhizinic acid and/or carbenoxolone. Glycyrrhizinic acid is a natural sweetener found in licorice root, typically encountered in licorice and its many guises or as a flavoring agent in tobacco and food products. Finally, hypokalemia may also occur with systemic increases in glucocorticoids. In Cushing's syndrome caused by increases in pituitary ACTH (Chap. 406), the incidence of hypokalemia is only 10%, whereas it is 60–100% in patients with ectopic secretion of ACTH, despite a similar incidence of hypertension. 
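The screening arithmetic is simple enough to sketch in code; the following is a minimal illustration only, in which the function name, the units, and the example laboratory values are all assumptions for illustration (the >50 cutoff and its caveats come from the text above):

```python
def aldosterone_pra_ratio(aldosterone_ng_dl: float, pra_ng_ml_h: float) -> float:
    """Aldosterone:PRA ratio; a ratio >50 is suggestive of primary
    hyperaldosteronism, but must be interpreted in clinical context,
    since hypokalemia and antihypertensive drugs can distort it."""
    if pra_ng_ml_h <= 0:
        raise ValueError("PRA must be positive")
    return aldosterone_ng_dl / pra_ng_ml_h

# Hypothetical patient with hypertension and hypokalemia:
# aldosterone 25 ng/dL with a suppressed PRA of 0.3 ng/mL per h
ratio = aldosterone_pra_ratio(25.0, 0.3)
print(round(ratio, 1))  # 83.3 -> >50, suggestive of primary hyperaldosteronism
```

A ratio below 50 does not exclude the diagnosis when aldosterone has been suppressed or PRA raised by confounders, as the text notes.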
Indirect evidence suggests that the activity of renal 11βHSD-2 is reduced in patients with ectopic ACTH compared with Cushing's syndrome, resulting in SAME. Finally, defects in multiple renal tubular transport pathways are associated with hypokalemia. For example, loss-of-function mutations in subunits of the acidifying H+-ATPase in alpha-intercalated cells cause hypokalemic distal renal tubular acidosis, as do many acquired disorders of the distal nephron. Liddle's syndrome is caused by autosomal dominant gain-of-function mutations of ENaC subunits. Disease-associated mutations either activate the channel directly or abrogate aldosterone-inhibited retrieval of ENaC subunits from the plasma membrane; the end result is increased expression of activated ENaC channels at the plasma membrane of principal cells. Patients with Liddle's syndrome classically manifest severe hypertension with hypokalemia, unresponsive to spironolactone yet sensitive to amiloride. Hypertension and hypokalemia are, however, variable aspects of the Liddle's phenotype; more consistent features include a blunted aldosterone response to ACTH and reduced urinary aldosterone excretion. Loss of the transport functions of the TALH and DCT nephron segments causes hereditary hypokalemic alkalosis, Bartter's syndrome (BS) and Gitelman's syndrome (GS), respectively. Patients with classic BS typically suffer from polyuria and polydipsia, due to the reduction in renal concentrating ability. They may have an increase in urinary calcium excretion, and 20% are hypomagnesemic. Other features include marked activation of the renin-angiotensin-aldosterone axis. Patients with antenatal BS suffer from a severe systemic disorder characterized by marked electrolyte wasting, polyhydramnios, and hypercalciuria with nephrocalcinosis; renal prostaglandin synthesis and excretion are significantly increased, accounting for many of the systemic symptoms. 
There are five disease genes for BS, all of them functioning in some aspect of regulated Na+, K+, and Cl– transport by the TALH. In contrast, GS is genetically homogeneous, caused almost exclusively by loss-of-function mutations in the thiazide-sensitive Na+-Cl– cotransporter of the DCT. Patients with GS are uniformly hypomagnesemic and exhibit marked hypocalciuria, rather than the hypercalciuria typically seen in BS; urinary calcium excretion is thus a critical diagnostic test in GS. GS is a milder phenotype than BS; however, patients with GS may suffer from chondrocalcinosis, an abnormal deposition of calcium pyrophosphate dihydrate (CPPD) in joint cartilage (Chap. 339). Magnesium Deficiency and Hypokalemia Magnesium depletion has inhibitory effects on muscle Na+/K+-ATPase activity, reducing influx into muscle cells and causing a secondary kaliuresis. In addition, magnesium depletion causes exaggerated K+ secretion by the distal nephron; this effect is attributed to a reduction in the magnesium-dependent, intracellular block of K+ efflux through the secretory K+ channel of principal cells (ROMK; Fig. 63-4). In consequence, hypomagnesemic patients are clinically refractory to K+ replacement in the absence of Mg2+ repletion. Notably, magnesium deficiency is also a common concomitant of hypokalemia because many disorders of the distal nephron may cause both potassium and magnesium wasting (Chap. 339). Clinical Features Hypokalemia has prominent effects on cardiac, skeletal, and intestinal muscle cells. In particular, hypokalemia is a major risk factor for both ventricular and atrial arrhythmias. Hypokalemia predisposes to digoxin toxicity by a number of mechanisms, including reduced competition between K+ and digoxin for shared binding sites on cardiac Na+/K+-ATPase subunits. Electrocardiographic changes in hypokalemia include broad flat T waves, ST depression, and QT prolongation; these are most marked when serum K+ is <2.7 mmol/L. 
Hypokalemia can thus be an important precipitant of arrhythmia in patients with additional genetic or acquired causes of QT prolongation. Hypokalemia also results in hyperpolarization of skeletal muscle, thus impairing the capacity to depolarize and contract; weakness and even paralysis may ensue. It also causes a skeletal myopathy and predisposes to rhabdomyolysis. Finally, the paralytic effects of hypokalemia on intestinal smooth muscle may cause intestinal ileus. The functional effects of hypokalemia on the kidney can include Na+-Cl– and HCO3– retention, polyuria, phosphaturia, hypocitraturia, and an activation of renal ammoniagenesis. Bicarbonate retention and other acid-base effects of hypokalemia can contribute to the generation of metabolic alkalosis. Hypokalemic polyuria is due to a combination of central polydipsia and an AVP-resistant renal concentrating defect. Structural changes in the kidney due to hypokalemia include a relatively specific vacuolizing injury to proximal tubular cells, interstitial nephritis, and renal cysts. Hypokalemia also predisposes to acute kidney injury and can lead to end-stage renal disease in patients with longstanding hypokalemia due to eating disorders and/or laxative abuse. Hypokalemia and/or reduced dietary K+ are implicated in the pathophysiology and progression of hypertension, heart failure, and stroke. For example, short-term K+ restriction in healthy humans and patients with essential hypertension induces Na+-Cl– retention and hypertension. Correction of hypokalemia is particularly important in hypertensive patients treated with diuretics, in whom blood pressure improves with the establishment of normokalemia. Diagnostic Approach The cause of hypokalemia is usually evident from history, physical examination, and/or basic laboratory tests. 
The history should focus on medications (e.g., laxatives, diuretics, antibiotics), diet and dietary habits (e.g., licorice), and/or symptoms that suggest a particular cause (e.g., periodic weakness, diarrhea). The physical examination should pay particular attention to blood pressure, volume status, and signs suggestive of specific hypokalemic disorders, e.g., hyperthyroidism and Cushing's syndrome. Initial laboratory evaluation should include electrolytes, BUN, creatinine, serum osmolality, Mg2+, Ca2+, a complete blood count, and urinary pH, osmolality, creatinine, and electrolytes (Fig. 63-7). The presence of a non–anion gap acidosis suggests a distal, hypokalemic renal tubular acidosis or diarrhea; calculation of the urinary anion gap can help differentiate these two diagnoses. Renal K+ excretion can be assessed with a 24-h urine collection; a 24-h K+ excretion of <15 mmol is indicative of an extrarenal cause of hypokalemia (Fig. 63-7). If only a random, spot urine sample is available, serum and urine osmolality can be used to calculate the transtubular K+ gradient (TTKG), which should be <3 in the presence of hypokalemia (see also "Hyperkalemia"). Alternatively, a urinary K+-to-creatinine ratio of >13 mmol/g creatinine (>1.5 mmol/mmol creatinine) is compatible with excessive renal K+ excretion. Urine Cl– is usually decreased in patients with hypokalemia from a nonreabsorbable anion, such as antibiotics or HCO3–. The most common causes of chronic hypokalemic alkalosis are surreptitious vomiting, diuretic abuse, and GS; these can be distinguished by the pattern of urinary electrolytes. Hypokalemic patients with vomiting due to bulimia have a urinary Cl– <10 mmol/L; urine Na+, K+, and Cl– are persistently elevated in GS, due to loss of function in the thiazide-sensitive Na+-Cl– cotransporter, but are less elevated and more variable in diuretic abuse. 
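The two spot-urine indices above can be sketched as follows; this is a minimal illustration, with function names and the example laboratory values assumed for illustration (the TTKG formula, (urine K+ ÷ serum K+) ÷ (urine osmolality ÷ serum osmolality), and the <3 and >13 mmol/g thresholds are those given in the text):

```python
def ttkg(urine_k, serum_k, urine_osm, serum_osm):
    """Transtubular K+ gradient: (U_K / P_K) / (U_osm / P_osm).
    In hypokalemia, a TTKG >3 suggests inappropriate renal K+ secretion."""
    return (urine_k / serum_k) / (urine_osm / serum_osm)

def urine_k_per_g_creatinine(urine_k_mmol_l, urine_creatinine_mg_dl):
    """Spot urinary K+-to-creatinine ratio in mmol K+ per g creatinine;
    >13 mmol/g is compatible with excessive renal K+ excretion."""
    return urine_k_mmol_l / (urine_creatinine_mg_dl / 100.0)  # mg/dL -> g/L

# Hypothetical spot values in a hypokalemic patient (serum K+ 3.0 mM):
print(round(ttkg(40.0, 3.0, 600.0, 300.0), 2))         # 6.67 -> >3, renal K+ wasting
print(round(urine_k_per_g_creatinine(20.0, 50.0), 1))  # 40.0 -> >13 mmol/g
```

Both indices point the workup toward a renal rather than extrarenal cause in this hypothetical patient; as the text notes, a 24-h collection remains the more direct measure of renal K+ excretion.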
Urine diuretic screens for loop diuretics and thiazides may be necessary to further exclude diuretic abuse. Other tests, such as urinary Ca2+, thyroid function tests, and/or PRA and aldosterone levels, may also be appropriate in specific cases. A plasma aldosterone:PRA ratio of >50, due to suppression of circulating renin and an elevation of circulating aldosterone, is suggestive of hyperaldosteronism. Patients with hyperaldosteronism or apparent mineralocorticoid excess may require further testing, for example, adrenal vein sampling (Chap. 406) or the clinically available testing for specific genetic causes (e.g., FH-I, SAME, Liddle's syndrome). Patients with primary aldosteronism should be tested for the chimeric FH-I/GRA gene (see above) if they are younger than 20 years of age or have a family history of primary aldosteronism or stroke at a young age (<40 years). Preliminary differentiation of Liddle's syndrome due to mutant ENaC channels from SAME due to mutant 11βHSD-2 (see above), both of which cause hypokalemia and hypertension with aldosterone suppression, can be made on clinical grounds and then confirmed by genetic analysis; patients with Liddle's syndrome should respond to amiloride (ENaC inhibition) but not spironolactone, whereas patients with SAME will respond to spironolactone. The goals of therapy in hypokalemia are to prevent life-threatening and/or serious chronic consequences, to replace the associated K+ deficit, and to correct the underlying cause and/or mitigate future hypokalemia. The urgency of therapy depends on the severity of hypokalemia, associated clinical factors (e.g., cardiac disease, digoxin therapy), and the rate of decline in serum K+. Patients with a prolonged QT interval and/or other risk factors for arrhythmia should be monitored by continuous cardiac telemetry during repletion. 
Urgent but cautious K+ replacement should be considered in patients with severe redistributive hypokalemia (plasma K+ concentration <2.5 mM) and/or when serious complications ensue; however, this approach has a risk of rebound hyperkalemia following acute resolution of the underlying cause. When excessive activity of the sympathetic nervous system is thought to play a dominant role in redistributive hypokalemia, as in TPP, theophylline overdose, and acute head injury, high-dose propranolol (3 mg/kg) should be considered; this nonspecific β-adrenergic blocker will correct hypokalemia without the risk of rebound hyperkalemia. Oral replacement with K+-Cl– is the mainstay of therapy in hypokalemia. Potassium phosphate, oral or IV, may be appropriate in patients with combined hypokalemia and hypophosphatemia. Potassium bicarbonate or potassium citrate should be considered in patients with concomitant metabolic acidosis. Notably, hypomagnesemic patients are refractory to K+ replacement alone, such that concomitant Mg2+ deficiency should always be corrected with oral or intravenous repletion. The deficit of K+ and the rate of correction should be estimated as accurately as possible; renal function, medications, and comorbid conditions such as diabetes should also be considered, so as to gauge the risk of overcorrection. In the absence of abnormal K+ redistribution, the total deficit correlates with serum K+, such that serum K+ drops by approximately 0.27 mM for every 100-mmol reduction in total-body stores; loss of 400–800 mmol of total-body K+ results in a reduction in serum K+ by approximately 2.0 mM. Notably, given the delay in redistributing potassium into intracellular compartments, this deficit must be replaced gradually over 24–48 h, with frequent monitoring of plasma K+ concentration to avoid transient overrepletion and transient hyperkalemia. 
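The deficit estimate above can be expressed as a short calculation; this is a rough sketch only, in which the function name and the assumed "baseline" serum K+ of 4.0 mM are illustrative conventions, not part of the text (the 0.27 mM per 100 mmol relationship is from the text and holds only in the absence of abnormal K+ redistribution):

```python
def estimated_k_deficit_mmol(serum_k, baseline_k=4.0):
    """Rough total-body K+ deficit (mmol), assuming no abnormal
    redistribution: serum K+ falls ~0.27 mM for every 100-mmol
    reduction in total-body stores. baseline_k = 4.0 mM is an
    assumed 'normal' reference value for this illustration."""
    drop_mm = max(baseline_k - serum_k, 0.0)
    return drop_mm / 0.27 * 100.0

# A 2.0-mM fall in serum K+ (e.g., from 4.0 to 2.0 mM):
print(round(estimated_k_deficit_mmol(2.0)))  # ~741 mmol, within the cited 400-800 mmol range
```

Such an estimate only sets the scale of repletion; as the text emphasizes, the deficit should be replaced gradually over 24–48 h with frequent monitoring, not as a single calculated bolus.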
The use of intravenous administration should be limited to patients unable to use the enteral route or in the setting of severe complications (e.g., paralysis, arrhythmia). Intravenous K+-Cl– should always be administered in saline solutions, rather than dextrose, because the dextrose-induced increase in insulin can acutely exacerbate hypokalemia. The peripheral intravenous dose is usually 20–40 mmol of K+-Cl– per liter; higher concentrations can cause localized pain from chemical phlebitis, irritation, and sclerosis. If hypokalemia is severe (<2.5 mmol/L) and/or critically symptomatic, intravenous K+-Cl– can be administered through a central vein with cardiac monitoring in an intensive care setting, at rates of 10–20 mmol/h; higher rates should be reserved for acutely life-threatening complications. The absolute amount of administered K+ should be restricted (e.g., 20 mmol in 100 mL of saline solution) to prevent inadvertent infusion of a large dose. Femoral veins are preferable, because infusion through internal jugular or subclavian central lines can acutely increase the local concentration of K+ and affect cardiac conduction. Strategies to minimize K+ losses should also be considered. These measures may include minimizing the dose of non-K+-sparing diuretics, restricting Na+ intake, and using clinically appropriate combinations of non-K+-sparing and K+-sparing medications (e.g., loop diuretics with angiotensin-converting enzyme inhibitors). 
Figure 63-7 The diagnostic approach to hypokalemia. See text for details. AME, apparent mineralocorticoid excess; BP, blood pressure; CCD, cortical collecting duct; DKA, diabetic ketoacidosis; FH-I, familial hyperaldosteronism type I; FHPP, familial hypokalemic periodic paralysis; GI, gastrointestinal; GRA, glucocorticoid-remediable aldosteronism; HTN, hypertension; PA, primary aldosteronism; RAS, renal artery stenosis; RST, renin-secreting tumor; RTA, renal tubular acidosis; SAME, syndrome of apparent mineralocorticoid excess; TTKG, transtubular potassium gradient. (Used with permission from DB Mount, K Zandi-Nejad: Disorders of potassium balance, in Brenner and Rector's The Kidney, 8th ed, BM Brenner [ed]. Philadelphia, W.B. Saunders, 2008, pp 547–587.) 
Hyperkalemia is defined as a plasma potassium concentration of ≥5.5 mM, occurring in up to 10% of hospitalized patients; severe hyperkalemia (>6.0 mM) occurs in approximately 1%, with a significantly increased risk of mortality. Although redistribution and reduced tissue uptake can acutely cause hyperkalemia, a decrease in renal K+ excretion is the most frequent underlying cause (Table 63-5). Excessive intake of K+ is a rare cause, given the adaptive capacity to increase renal secretion; however, dietary intake can have a major effect in susceptible patients, e.g., diabetics with hyporeninemic hypoaldosteronism and chronic kidney disease. Drugs that impact the renin-angiotensin-aldosterone axis are also a major cause of hyperkalemia. 
Table 63-5 Causes of Hyperkalemia
I. Pseudohyperkalemia
   A. Cellular efflux: thrombocytosis, erythrocytosis, leukocytosis, in vitro hemolysis
   B. Hereditary defects in red cell membrane transport
II.
   B. Hyperosmolality: radiocontrast, hypertonic dextrose, mannitol
   D. Digoxin and related glycosides (yellow oleander, foxglove, bufadienolide)
   F. Lysine, arginine, and ε-aminocaproic acid (structurally similar, positively charged)
   G. Succinylcholine: thermal trauma, neuromuscular injury, disuse atrophy, mucositis, or prolonged immobilization
III.
   A. Inhibition of the renin-angiotensin-aldosterone axis (↑ risk of hyperkalemia when used in combination)
      2. Renin inhibitors: aliskiren (in combination with ACE inhibitors or angiotensin receptor blockers [ARBs])
      4. Blockade of the mineralocorticoid receptor: spironolactone, eplerenone, drospirenone
      5. Blockade of the epithelial sodium channel (ENaC): amiloride, triamterene, trimethoprim, pentamidine, nafamostat
   C.
      1. Tubulointerstitial diseases: systemic lupus erythematosus (SLE), sickle cell anemia, obstructive uropathy
      2. Diabetes, diabetic nephropathy
      3. Drugs: nonsteroidal anti-inflammatory drugs (NSAIDs), cyclooxygenase 2 (COX2) inhibitors, β-blockers, cyclosporine, tacrolimus
      4. Chronic kidney disease, advanced age
      5. Pseudohypoaldosteronism type II: defects in WNK1 or WNK4 kinases, Kelch-like 3 (KLHL3), or Cullin 3 (CUL3)
   D. Renal resistance to mineralocorticoid
      1. Tubulointerstitial diseases: SLE, amyloidosis, sickle cell anemia, obstructive uropathy, post–acute tubular necrosis
      2. Hereditary: pseudohypoaldosteronism type I; defects in the mineralocorticoid receptor or the epithelial sodium channel (ENaC)
   F.
      1. Autoimmune: Addison's disease, polyglandular endocrinopathy
      2. Infectious: HIV, cytomegalovirus, tuberculosis, disseminated fungal infection
      3. Infiltrative: amyloidosis, malignancy, metastatic cancer
      4. Drug-associated: heparin, low-molecular-weight heparin
      5. Hereditary: adrenal hypoplasia congenita, congenital lipoid adrenal hyperplasia, aldosterone synthase deficiency
      6. Adrenal hemorrhage or infarction, including in antiphospholipid syndrome
Pseudohyperkalemia Hyperkalemia should be distinguished from factitious hyperkalemia or "pseudohyperkalemia," an artifactual increase in serum K+ due to the release of K+ during or after venipuncture. Pseudohyperkalemia can occur in the setting of excessive muscle activity during venipuncture (e.g., fist clenching), a marked increase in cellular elements (thrombocytosis, leukocytosis, and/or erythrocytosis) with in vitro efflux of K+, and acute anxiety during venipuncture with respiratory alkalosis and redistributive hyperkalemia. Cooling of blood following venipuncture is another cause, due to reduced cellular uptake; the converse is the increased uptake of K+ by cells at high ambient temperatures, leading to normal values for hyperkalemic patients and/or to spurious hypokalemia in normokalemic patients. Finally, there are multiple genetic subtypes of hereditary pseudohyperkalemia, caused by increases in the passive K+ permeability of erythrocytes. 
For example, causative mutations have been described in the red cell anion exchanger (AE1, encoded by the SLC4A1 gene), leading to reduced red cell anion transport, hemolytic anemia, the acquisition of a novel AE1-mediated K+ leak, and pseudohyperkalemia. Redistribution and Hyperkalemia Several different mechanisms can induce an efflux of intracellular K+ and hyperkalemia. Acidemia is associated with cellular uptake of H+ and an associated efflux of K+; it is thought that this effective K+-H+ exchange serves to help maintain extracellular pH. Notably, this effect of acidosis is limited to non–anion gap causes of metabolic acidosis and, to a lesser extent, respiratory causes of acidosis; hyperkalemia due to an acidosis-induced shift of potassium from the cells into the ECF does not occur in the anion gap acidoses, lactic acidosis and ketoacidosis. Hyperkalemia due to hypertonic mannitol, hypertonic saline, and intravenous immune globulin is generally attributed to a "solvent drag" effect, as water moves out of cells along the osmotic gradient. Diabetics are also prone to osmotic hyperkalemia in response to intravenous hypertonic glucose, when given without adequate insulin. Cationic amino acids, specifically lysine, arginine, and the structurally related drug ε-aminocaproic acid, cause efflux of K+ and hyperkalemia, through an effective cation-K+ exchange of unknown identity and mechanism. Digoxin inhibits Na+/K+-ATPase and impairs the uptake of K+ by skeletal muscle, such that digoxin overdose predictably results in hyperkalemia. Structurally related glycosides are found in specific plants (e.g., yellow oleander, foxglove) and in the cane toad, Bufo marinus (bufadienolide); ingestion of these substances and extracts thereof can also cause hyperkalemia. Finally, fluoride ions also inhibit Na+/K+-ATPase, such that fluoride poisoning is typically associated with hyperkalemia. 
Succinylcholine depolarizes muscle cells, causing an efflux of K+ through acetylcholine receptors (AChRs). The use of this agent is contraindicated in patients who have sustained thermal trauma, neuromuscular injury, disuse atrophy, mucositis, or prolonged immobilization. These disorders share a marked increase and redistribution of AChRs at the plasma membrane of muscle cells; depolarization of these upregulated AChRs by succinylcholine leads to an exaggerated efflux of K+ through the receptor-associated cation channels, resulting in acute hyperkalemia. Hyperkalemia Caused by Excess Intake or Tissue Necrosis Increased intake of even small amounts of K+ may provoke severe hyperkalemia in patients with predisposing factors; hence, an assessment of dietary intake is crucial. Foods rich in potassium include tomatoes, bananas, and citrus fruits; occult sources of K+, particularly K+-containing salt substitutes, may also contribute significantly. Iatrogenic causes include simple overreplacement with K+-Cl– or the administration of a potassium-containing medication (e.g., K+-penicillin) to a susceptible patient. Red cell transfusion is a well-described cause of hyperkalemia, typically in the setting of massive transfusions. Finally, severe tissue necrosis, as in acute tumor lysis syndrome and rhabdomyolysis, will predictably cause hyperkalemia from the release of intracellular K+. Hypoaldosteronism and Hyperkalemia Aldosterone release from the adrenal gland may be reduced by hyporeninemic hypoaldosteronism, medications, primary hypoaldosteronism, or isolated deficiency of ACTH (secondary hypoaldosteronism). Primary hypoaldosteronism may be genetic or acquired (Chap. 406) but is commonly caused by autoimmunity, either in Addison’s disease or in the context of a polyglandular endocrinopathy. HIV has surpassed tuberculosis as the most important infectious cause of adrenal insufficiency. 
The adrenal involvement in HIV disease is usually subclinical; however, adrenal insufficiency may be precipitated by stress, drugs such as ketoconazole that inhibit steroidogenesis, or the acute withdrawal of steroid agents such as megestrol. Hyporeninemic hypoaldosteronism is a very common predisposing factor in several overlapping subsets of hyperkalemic patients: diabetics, the elderly, and patients with renal insufficiency. Classically, patients should have suppressed PRA and aldosterone; approximately 50% have an associated acidosis, with a reduced renal excretion of NH4+, a positive urinary anion gap, and urine pH <5.5. Most patients are volume expanded, with secondary increases in circulating atrial natriuretic peptide (ANP) that inhibit both renal renin release and adrenal aldosterone release. Renal Disease and Hyperkalemia Chronic kidney disease and end-stage kidney disease are very common causes of hyperkalemia, due to the associated deficit or absence of functioning nephrons. Hyperkalemia is more common in oliguric acute kidney injury; distal tubular flow rate and Na+ delivery are less limiting factors in nonoliguric patients. Hyperkalemia out of proportion to GFR can also be seen in the context of tubulointerstitial disease that affects the distal nephron, such as amyloidosis, sickle cell anemia, interstitial nephritis, and obstructive uropathy. Hereditary renal causes of hyperkalemia have overlapping clinical features with hypoaldosteronism, hence the diagnostic label pseudohypoaldosteronism (PHA). PHA type I (PHA-I) has both an autosomal recessive and an autosomal dominant form. The autosomal dominant form is due to loss-of-function mutations in the MLR; the recessive form is caused by various combinations of mutations in the three subunits of ENaC, resulting in impaired Na+ channel activity in principal cells and other tissues. 
Patients with recessive PHA-I suffer from lifelong salt wasting, hypotension, and hyperkalemia, whereas the phenotype of autosomal dominant PHA-I due to MLR dysfunction improves in adulthood. PHA type II (PHA-II; also known as hereditary hypertension with hyperkalemia) is in every respect the mirror image of GS caused by loss of function in NCC, the thiazide-sensitive Na+-Cl– cotransporter (see above); the clinical phenotype includes hypertension, hyperkalemia, hyperchloremic metabolic acidosis, suppressed PRA and aldosterone, hypercalciuria, and reduced bone density. PHA-II thus behaves like a gain of function in NCC, and treatment with thiazides results in resolution of the entire clinical phenotype. However, the NCC gene is not directly involved in PHA-II, which is caused by mutations in the WNK1 and WNK4 serine-threonine kinases or in the upstream Kelch-like 3 (KLHL3) and Cullin 3 (CUL3) proteins, two components of an E3 ubiquitin ligase complex that regulates these kinases; these proteins collectively regulate NCC activity, with PHA-II-associated mutations activating the transporter. Medication-Associated Hyperkalemia Most medications associated with hyperkalemia inhibit some component of the renin-angiotensin-aldosterone axis. ACE inhibitors, angiotensin receptor blockers, renin inhibitors, and MLR antagonists are predictable and common causes of hyperkalemia, particularly when prescribed in combination. The oral contraceptive agent Yasmin-28 contains the progestin drospirenone, which inhibits the MLR and can cause hyperkalemia in susceptible patients. Cyclosporine, tacrolimus, NSAIDs, and cyclooxygenase 2 (COX2) inhibitors cause hyperkalemia by multiple mechanisms, but share the ability to cause hyporeninemic hypoaldosteronism. 
Notably, most drugs that affect the renin-angiotensin-aldosterone axis also block the local adrenal response to hyperkalemia, thus attenuating the direct stimulation of aldosterone release by increased plasma K+ concentration. Inhibition of apical ENaC activity in the distal nephron by amiloride and other K+-sparing diuretics results in hyperkalemia, often with a voltage-dependent hyperchloremic acidosis and/or hypovolemic hyponatremia. Amiloride is structurally similar to the antibiotics trimethoprim (TMP) and pentamidine, which also block ENaC; risk factors for TMP-associated hyperkalemia include the administered dose, renal insufficiency, and hyporeninemic hypoaldosteronism. Indirect inhibition of ENaC at the plasma membrane is also a cause of drug-associated hyperkalemia; nafamostat, a protease inhibitor used in some countries for the management of pancreatitis, inhibits aldosterone-induced renal proteases that activate ENaC by proteolytic cleavage. Clinical Features Hyperkalemia is a medical emergency due to its effects on the heart. Cardiac arrhythmias associated with hyperkalemia include sinus bradycardia, sinus arrest, slow idioventricular rhythms, ventricular tachycardia, ventricular fibrillation, and asystole. Mild increases in extracellular K+ affect the repolarization phase of the cardiac action potential, resulting in changes in T-wave morphology; further increase in plasma K+ concentration depresses intracardiac conduction, with progressive prolongation of the PR and QRS intervals. Severe hyperkalemia results in loss of the P wave and a progressive widening of the QRS complex; development of a sine-wave sinoventricular rhythm suggests impending ventricular fibrillation or asystole. Hyperkalemia can also cause a type I Brugada pattern in the electrocardiogram (ECG), with a pseudo–right bundle branch block and persistent coved ST segment elevation in at least two precordial leads. 
This hyperkalemic Brugada's sign occurs in critically ill patients with severe hyperkalemia and can be differentiated from genetic Brugada's syndrome by an absence of P waves, marked QRS widening, and an abnormal QRS axis. Classically, the electrocardiographic manifestations of hyperkalemia progress from tall peaked T waves (5.5–6.5 mM), to a loss of P waves (6.5–7.5 mM), to a widened QRS complex (7.0–8.0 mM), and, ultimately, to a sine-wave pattern (>8.0 mM). However, these changes are notoriously insensitive, particularly in patients with chronic kidney disease or end-stage renal disease. Hyperkalemia from a variety of causes can also present with ascending paralysis, denoted secondary hyperkalemic paralysis to differentiate it from familial hyperkalemic periodic paralysis (HYPP). The presentation may include diaphragmatic paralysis and respiratory failure. Patients with familial HYPP develop myopathic weakness during hyperkalemia induced by increased K+ intake or rest after heavy exercise. Depolarization of skeletal muscle by hyperkalemia unmasks an inactivation defect in the skeletal muscle Na+ channel; autosomal dominant mutations in the SCN4A gene encoding this channel are the predominant cause. Within the kidney, hyperkalemia has negative effects on the ability to excrete an acid load, such that hyperkalemia per se can contribute to metabolic acidosis. This defect appears to be due in part to competition between K+ and NH4+ for reabsorption by the TALH and subsequent countercurrent multiplication, ultimately reducing the medullary gradient for NH3/NH4+ excretion by the distal nephron. Regardless of the underlying mechanism, restoration of normokalemia can, in many instances, correct hyperkalemic metabolic acidosis. Diagnostic Approach The first priority in the management of hyperkalemia is to assess the need for emergency treatment, followed by a comprehensive workup to determine the cause (Fig. 63-8). 
History and physical examination should focus on medications, diet and dietary supplements, risk factors for kidney failure, reduction in urine output, blood pressure, and volume status. Initial laboratory tests should include electrolytes, BUN, creatinine, serum osmolality, Mg2+ and Ca2+, a complete blood count, and urinary pH, osmolality, creatinine, and electrolytes. A urine Na+ concentration of <20 mM indicates that distal Na+ delivery is a limiting factor in K+ excretion; volume repletion with 0.9% saline or treatment with furosemide may be effective in reducing plasma K+ concentration. Serum and urine osmolality are required for calculation of the transtubular K+ gradient (TTKG) (Fig. 63-8). The expected values of the TTKG are largely based on historical data, and are <3 in the presence of hypokalemia and >7–8 in the presence of hyperkalemia.
FIGURE 63-8 The diagnostic approach to hyperkalemia. See text for details. ACE-I, angiotensin-converting enzyme inhibitor; ARB, angiotensin II receptor blocker; CCD, cortical collecting duct; ECG, electrocardiogram; ECV, effective circulatory volume; GFR, glomerular filtration rate; GN, glomerulonephritis; HIV, human immunodeficiency virus; LMW heparin, low-molecular-weight heparin; NSAIDs, nonsteroidal anti-inflammatory drugs; PHA, pseudohypoaldosteronism; SLE, systemic lupus erythematosus; TTKG, transtubular potassium gradient. (Used with permission from DB Mount, K Zandi-Nejad: Disorders of potassium balance, in Brenner and Rector’s The Kidney, 8th ed, BM Brenner [ed]. Philadelphia, W.B. Saunders & Company, 2008, pp 547-587.)
Electrocardiographic manifestations of hyperkalemia should be considered a medical emergency and treated urgently. However, patients with significant hyperkalemia (plasma K+ concentration ≥6.5 mM) in the absence of ECG changes should also be aggressively managed, given the limitations of ECG changes as a predictor of cardiac toxicity.
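The TTKG mentioned above is computed from a paired spot urine and plasma sample. A minimal sketch follows; the sample values are hypothetical, not taken from the text:

```python
def ttkg(urine_k, plasma_k, plasma_osm, urine_osm):
    """Transtubular K+ gradient: (U_K / P_K) x (P_osm / U_osm).
    Interpretable only when urine osmolality exceeds plasma osmolality."""
    if urine_osm <= plasma_osm:
        raise ValueError("TTKG uninterpretable when U_osm <= P_osm")
    return (urine_k / plasma_k) * (plasma_osm / urine_osm)

# Hypothetical hyperkalemic patient: a TTKG well below 7-8 suggests
# inadequate renal K+ secretion (aldosterone deficiency or resistance).
print(round(ttkg(urine_k=40, plasma_k=6.0, plasma_osm=290, urine_osm=500), 1))
```

With these hypothetical values the TTKG is well under the >7–8 expected in hyperkalemia, the pattern described for reduced tubular K+ secretion.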
Urgent management of hyperkalemia includes admission to the hospital, continuous cardiac monitoring, and immediate treatment. The treatment of hyperkalemia is divided into three stages: 1. Immediate antagonism of the cardiac effects of hyperkalemia. Intravenous calcium serves to protect the heart, whereas other measures are taken to correct hyperkalemia. Calcium raises the action potential threshold and reduces excitability, without changing the resting membrane potential. By restoring the difference between resting and threshold potentials, calcium reverses the depolarization blockade due to hyperkalemia. The recommended dose is 10 mL of 10% calcium gluconate (3–4 mL of calcium chloride), infused intravenously over 2–3 min with cardiac monitoring. The effect of the infusion starts in 1–3 min and lasts 30–60 min; the dose should be repeated if there is no change in ECG findings or if they recur after initial improvement. Hypercalcemia potentiates the cardiac toxicity of digoxin; hence, intravenous calcium should be used with extreme caution in patients taking this medication; if judged necessary, 10 mL of 10% calcium gluconate can be added to 100 mL of 5% dextrose in water and infused over 20–30 min to avoid acute hypercalcemia. 2. Rapid reduction in plasma K+ concentration by redistribution into cells. Insulin lowers plasma K+ concentration by shifting K+ into cells. The recommended dose is 10 units of intravenous regular insulin followed immediately by 50 mL of 50% dextrose (D50W, 25 g of glucose total); the effect begins in 10–20 min, peaks at 30–60 min, and lasts for 4–6 h. Bolus D50W without insulin is never appropriate, given the risk of acutely worsening hyperkalemia due to the osmotic effect of hypertonic glucose. Hypoglycemia is common with insulin plus glucose; hence, this should be followed by an infusion of 10% dextrose at 50–75 mL/h, with close monitoring of plasma glucose concentration. 
In hyperkalemic patients with glucose concentrations of ≥200–250 mg/dL, insulin should be administered without glucose, again with close monitoring of glucose concentrations. β2-agonists, most commonly albuterol, are effective but underused agents for the acute management of hyperkalemia. Albuterol and insulin with glucose have an additive effect on plasma K+ concentration; however, ~20% of patients with end-stage renal disease (ESRD) are resistant to the effect of β2-agonists; hence, these drugs should not be used without insulin. The recommended dose is 10–20 mg of nebulized albuterol in 4 mL of normal saline, inhaled over 10 min; the effect starts at about 30 min, reaches its peak at about 90 min, and lasts for 2–6 h. Hyperglycemia is a side effect, along with tachycardia. β2-Agonists should be used with caution in hyperkalemic patients with known cardiac disease. Intravenous bicarbonate has no role in the acute treatment of hyperkalemia but may slowly attenuate hyperkalemia with sustained administration over several hours. It should not be given repeatedly as a hypertonic intravenous bolus of undiluted ampules, given the risk of associated hypernatremia, but should instead be infused in an isotonic or hypotonic fluid (e.g., 150 meq in 1 L of D5W). In patients with metabolic acidosis, a delayed drop in plasma K+ concentration can be seen after 4–6 h of isotonic bicarbonate infusion. 3. Removal of potassium. This is typically accomplished using cation exchange resins, diuretics, and/or dialysis. The cation exchange resin sodium polystyrene sulfonate (SPS) exchanges Na+ for K+ in the gastrointestinal tract and increases the fecal excretion of K+; alternative calcium-based resins, when available, may be more appropriate in patients with an increased ECFV. The recommended dose of SPS is 15–30 g of powder, almost always given in a premade suspension with 33% sorbitol.
The effect of SPS on plasma K+ concentration is slow; the full effect may take up to 24 h and usually requires repeated doses every 4–6 h. Intestinal necrosis, typically of the colon or ileum, is a rare but usually fatal complication of SPS. Intestinal necrosis is more common in patients administered SPS via enema and/or in patients with reduced intestinal motility (e.g., in the postoperative state or after treatment with opioids). The coadministration of SPS with sorbitol appears to increase the risk of intestinal necrosis; however, this complication can also occur with SPS alone. If SPS without sorbitol is not available, clinicians must consider whether treatment with SPS in sorbitol is absolutely necessary. The low but real risk of intestinal necrosis with SPS, which can sometimes be the only available or appropriate therapy for the removal of potassium, must be weighed against the delayed onset of efficacy. Whenever possible, alternative therapies for the acute management of hyperkalemia (i.e., aggressive redistributive therapy, isotonic bicarbonate infusion, diuretics, and/or hemodialysis) should be used instead of SPS. Therapy with intravenous saline may be beneficial in hypovolemic patients with oliguria and decreased distal delivery of Na+, with the associated reductions in renal K+ excretion. Loop and thiazide diuretics can be used to reduce plasma K+ concentration in volume-replete or hypervolemic patients with sufficient renal function for a diuretic response; this may need to be combined with intravenous saline or isotonic bicarbonate to achieve or maintain euvolemia. Hemodialysis is the most effective and reliable method to reduce plasma K+ concentration; peritoneal dialysis is considerably less effective. Patients with acute kidney injury require temporary, urgent venous access for hemodialysis, with the attendant risks; in contrast, patients with ESRD or advanced chronic kidney disease may have a preexisting venous access. 
The amount of K+ removed during hemodialysis depends on the relative distribution of K+ between ICF and ECF (potentially affected by prior therapy for hyperkalemia), the type and surface area of the dialyzer used, blood and dialysate flow rates, dialysis duration, and the plasma-to-dialysate K+ gradient.
Chapter 64e Fluid and Electrolyte Imbalances and Acid-Base Disturbances: Case Examples
David B. Mount, Thomas D. DuBose, Jr.
CASE 1
A 23-year-old woman was admitted with a 3-day history of fever, cough productive of blood-tinged sputum, confusion, and orthostasis. Past medical history included type 1 diabetes mellitus. A physical examination in the emergency department indicated postural hypotension, tachycardia, and Kussmaul respiration. The breath was noted to smell of “acetone.” Examination of the thorax suggested consolidation in the right lower lobe.
Sodium 130 meq/L; potassium 5.0 meq/L; chloride 96 meq/L; CO2 14 meq/L; blood urea nitrogen (BUN) 20 mg/dL; creatinine 1.3 mg/dL; glucose 450 mg/dL. Pneumonic infiltrate, right lower lobe.
The diagnosis of the acid-base disorder should proceed in a stepwise fashion: 1. The normal anion gap (AG) is 8–10 meq/L, but in this case, the AG is elevated (20 meq/L). Therefore, the change in AG (ΔAG) = ~10 meq/L. 2. Compare the ΔAG and the Δ[HCO3−]. In this case, the ΔAG, as noted above, is 10, and the Δ[HCO3−] (25 − 14) is 11. Therefore, the increment in the AG is approximately equal to the decrement in bicarbonate. 3. Estimate the respiratory compensatory response. In this case, the predicted Paco2 for an [HCO3−] of 14 should be approximately 29 mmHg. This value is obtained by adding 15 to the measured [HCO3−] (15 + 14 = 29) or by calculating the predicted Paco2 from Winter’s equation: Paco2 = 1.5 × [HCO3−] + 8 (±2). In either case, the predicted value for Paco2 of 29 is significantly higher than the measured value of 24.
Therefore, the prevailing Paco2 is lower than can be accounted for by compensation alone, indicating a superimposed respiratory alkalosis. 4. Therefore, this patient has a mixed acid-base disturbance with two components: (a) high AG acidosis secondary to ketoacidosis and (b) respiratory alkalosis (which was secondary to community-acquired pneumonia in this case). The latter resulted in an additional component of hyperventilation that exceeded the compensatory response driven by metabolic acidosis, explaining the normal pH. The finding of respiratory alkalosis in the setting of a high AG acidosis suggests another cause of the respiratory component. Respiratory alkalosis frequently accompanies community-acquired pneumonia. The clinical features in this case include hyperglycemia, hypovolemia, ketoacidosis, central nervous system (CNS) signs of confusion, and superimposed pneumonia. This clinical scenario is consistent with diabetic ketoacidosis (DKA) developing in a patient with known type 1 diabetes mellitus. Infections in DKA are common and may be a precipitating feature in the development of ketoacidosis. The diagnosis of DKA is usually not challenging but should be considered in all patients with an elevated AG and metabolic acidosis. Hyperglycemia and ketonemia (positive acetoacetate at a dilution of 1:8 or greater) are sufficient criteria for diagnosis in patients with type 1 diabetes mellitus. The Δ[HCO3−] should approximate the increase in the plasma AG (ΔAG), but this equality can be modified by several factors. For example, the ΔAG will often decrease with IV hydration, as glomerular filtration increases and ketones are excreted into the urine. The decrement in plasma sodium is the result of hyperglycemia, which induces the movement of water into the extracellular compartment from the intracellular compartment of cells that require insulin for the transport of glucose.
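The stepwise arithmetic for this case can be sketched directly from the reported labs. Note that the corrected-sodium adjustment at the end (~1.6 meq/L per 100 mg/dL of glucose above 100) is a standard estimate, not a formula quoted in the text:

```python
na, k, cl, hco3, glucose = 130, 5.0, 96, 14, 450  # Case 1 labs

ag = na - (cl + hco3)              # anion gap: 130 - (96 + 14)
delta_ag = ag - 10                 # increment above a normal AG of ~10
delta_hco3 = 25 - hco3             # decrement in bicarbonate
predicted_paco2 = 1.5 * hco3 + 8   # Winter's equation (tolerance +/- 2)

# Hyperglycemia-corrected Na+ (standard ~1.6 meq/L per 100 mg/dL of
# glucose above 100; an estimate, not stated in the text):
corrected_na = na + 1.6 * (glucose - 100) / 100

print(ag, delta_ag, delta_hco3, predicted_paco2, round(corrected_na, 1))
```

The computed AG of 20, ΔAG of 10, and predicted Paco2 of 29 mmHg match the worked values in the case discussion; the corrected Na+ of ~136 meq/L illustrates why the measured hyponatremia is attributed to hyperglycemia.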
Additionally, a natriuresis occurs in response to an osmotic diuresis associated with hyperglycemia. Moreover, in patients with DKA, thirst is very common and water ingestion often continues. The plasma potassium concentration is usually mildly elevated, but in the face of acidosis, and as a result of the ongoing osmotic diuresis, a significant total-body deficit of potassium is almost always present. Recognition of the total-body deficit of potassium is critically important. The inclusion of potassium replacement in the therapeutic regimen at the appropriate time and with the appropriate indications (see below) is essential. Volume depletion is a very common finding in DKA and is a pivotal component in the pathogenesis of the disorder. Patients with DKA often have a sustained and significant deficit of sodium, potassium, water, bicarbonate, and phosphate. The general approach to treatment requires attention to all of these abnormalities. Successful treatment of DKA involves a stepwise approach, as follows: 1. Replace extracellular fluid (ECF) volume deficits. Because most patients present with actual or relative hypotension and, at times, impending shock, the initial fluid administered should be 0.9% NaCl infused rapidly until the systolic blood pressure is >100 mmHg or until 2–3 L cumulatively have been administered. During the initial 2–3 h of infusion of saline, the decline in blood glucose can be accounted for by dilution and increased renal excretion. Glucose should be added to the infusion as D5 normal saline (NS) or D5 0.45% NS once the plasma glucose declines to 230 mg/dL or below. 2. Abate the production of ketoacids. Regular insulin is required during DKA as an initial bolus of 0.1 U/kg body weight (BW) IV, followed immediately by a continuous infusion of 0.1 U/kg BW per hour in NS. The effectiveness of IV insulin (not subcutaneous) can be tracked by observing the decline in plasma ketones. 
Because the increment in the AG above the normal value of 10 meq/L represents accumulated ketoacids in DKA, the disappearance of ketoacid anions is reflected by the narrowing and eventual correction of the AG. Typically, the plasma AG returns to normal within 8–12 h. 3. Replace potassium deficits. Although patients with DKA often have hyperkalemia due to insulin deficiency, they are usually severely K+ depleted. KCl (20 meq/L) should be added to each liter of IV fluids when urine output is established and insulin has been administered. 4. Correct the metabolic acidosis. The plasma bicarbonate concentration will usually not increase for several hours because of dilution from administered IV NaCl. The plasma [HCO3−] approaches 18 meq/L once ketoacidosis disappears. Sodium bicarbonate therapy is often not recommended or necessary and is contraindicated for children. Bicarbonate is administered to adults with DKA for extreme acidemia (pH <7.1); for elderly patients (>70 years old), a threshold pH of 7.20 is recommended. Sodium bicarbonate, if administered, should only be given in small amounts. Because ketoacids are metabolized in response to insulin therapy, bicarbonate will be added to the ECF as ketoacids are converted. Overshoot alkalosis may occur from the combination of exogenously administered sodium bicarbonate plus metabolic production of bicarbonate. 5. Phosphate. In the first 6–8 h of therapy, it may be necessary to infuse potassium with phosphate because of the unmasking of phosphate depletion during combined insulin and glucose therapy. The latter drives phosphate into the cell. Therefore, in patients with DKA, the plasma phosphate level should be followed closely, but phosphate should never be replaced empirically. Phosphate should be administered to patients with a declining plasma phosphate once the phosphate level declines into the low-normal level.
Therapy is advisable in the form of potassium phosphate at a rate of 6 mmol/h. 6. Always seek underlying factors, such as infection, myocardial infarction, pancreatitis, cessation of insulin therapy, or other events, responsible for initiating DKA. The case presented here is illustrative of this common scenario. 7. Volume overexpansion with IV fluid administration is not uncommon and contributes to the development of hyperchloremic acidosis during the later stages of treatment of DKA. Volume overexpansion should be avoided.
CASE 2
A 25-year-old man with a 6-year history of HIV-AIDS complicated recently by Pneumocystis jiroveci pneumonia (PCP) was treated with intravenous trimethoprim-sulfamethoxazole (20 mg trimethoprim/kg per day). On day 4 of treatment, the following laboratory data were obtained: 135 60 6.5 15 110 43 15 0 7.30 5.5 14 — 0.9 — 268 270
What caused the hyperkalemia and metabolic acidosis in this patient? What other medications may be associated with a similar presentation? How does one use the urine electrolyte data to determine if the hyperkalemia is of renal origin or due to a shift from the cell to the extracellular compartment?
FIGURE 64e-1 Water, sodium, potassium, ammonia, and proton transport in principal cells (PC) and adjacent type A intercalated cells (A-IC). Water is absorbed down the osmotic gradient by principal cells, through the apical aquaporin-2 (AQP-2) and basolateral aquaporin-3 (AQP-3) and aquaporin-4 (AQP-4) channels. The absorption of Na+ via the amiloride-sensitive epithelial sodium channel (ENaC) generates a lumen-negative potential difference, which drives K+ excretion through the apical secretory K+ channel, ROMK (renal outer medullary K+ channel), and/or the flow-dependent maxi-K channel. Transepithelial ammonia (NH3) transport and proton transport occur in adjacent type A intercalated cells, via apical and basolateral ammonia channels and apical H+-ATPase pumps, respectively; NH4+ is ultimately excreted in the urine, in the defense of systemic pH. Electrogenic proton secretion by type A intercalated cells is also affected by the lumen-negative potential difference generated by the adjacent principal cells, such that reduction of this lumen-negative electrical gradient can reduce H+ excretion. Type A intercalated cells also reabsorb filtered K+ in potassium-deficient states, via apical H+/K+-ATPase.
Hyperkalemia occurs in 15–20% of hospitalized patients with HIV/AIDS. The usual causes are either adrenal insufficiency, the syndrome of hyporeninemic hypoaldosteronism, or one of several drugs, including trimethoprim, pentamidine, nonsteroidal anti-inflammatory drugs, angiotensin-converting enzyme (ACE) inhibitors, angiotensin II receptor blockers, spironolactone, and eplerenone. Trimethoprim is usually given in combination with sulfamethoxazole or dapsone for PCP and, on average, increases the plasma K+ concentration by about 1 meq/L; however, the hyperkalemia may be severe. Trimethoprim is structurally and chemically related to amiloride and triamterene and, in this way, may function as a potassium-sparing diuretic, inhibiting the epithelial sodium channel (ENaC) in the principal cell of the collecting duct. By blocking the Na+ channel, K+ secretion is also inhibited; K+ secretion is dependent on the lumen-negative potential difference generated by Na+ entry through the ENaC (Fig. 64e-1). Trimethoprim is associated with a non-AG acidosis that parallels the development of hyperkalemia, such that the co-occurrence of hyperkalemia and metabolic acidosis is not uncommon in this setting. H+ secretion via apical H+-ATPase pumps in adjacent type A intercalated cells (Fig. 64e-1) is also electrogenic, such that the reduction in the lumen-negative potential difference due to trimethoprim inhibits distal H+ secretion; this is often referred to as a “voltage defect” form of dRTA. Systemic hyperkalemia also suppresses renal ammoniagenesis, ammonium excretion, and, thus, acid excretion; i.e., hyperkalemia per se has multiple effects on urinary acidification. The inhibitory effect of trimethoprim on K+ and H+ secretion in the cortical collecting tubule follows a dose-response relationship; therefore, the higher doses of this agent used in HIV/AIDS patients with PCP or in deep tissue infections with methicillin-resistant Staphylococcus aureus (MRSA) result in a higher prevalence of hyperkalemia and acidosis. Conventional doses of trimethoprim can also induce hyperkalemia and/or acidosis in predisposed patients, in particular the elderly, patients with renal insufficiency, and/or those with baseline hyporeninemic hypoaldosteronism.
One means by which to assess the role of the kidney in the development of hyperkalemia is to calculate, from a spot urine and coincident plasma sample, the transtubular potassium gradient (TTKG). The TTKG is calculated as (UK × Posm)/(PK × Uosm), where UK and PK are the urine and plasma K+ concentrations and Uosm and Posm are the urine and plasma osmolalities. The expected values of the TTKG are <3 in the presence of hypokalemia (see also Case 7 and Case 8) and >7–8 in the presence of hyperkalemia. In this case, the value for the TTKG of approximately 2 indicates that renal excretion of potassium is abnormally low for the prevailing hyperkalemia. Therefore, the inappropriately low TTKG indicates that the hyperkalemia is of renal tubular origin. Knowledge of the factors controlling potassium secretion by the cortical collecting tubule principal cell can be helpful in understanding the basis for treatment of the hyperkalemia, especially if discontinuing the offending agent is not a reasonable clinical option. Potassium secretion is encouraged by a higher urine flow rate, increased distal delivery of sodium, distal delivery of a poorly reabsorbed anion (such as bicarbonate), and/or administration of a loop diuretic.
Therefore, the approach to treatment in this patient should include intravenous 0.9% NaCl to expand the ECF and deliver more Na+ and Cl− to the cortical collecting tubule. In addition, because the trimethoprim molecule must be protonated to inhibit ENaC, alkalinization of the renal tubule fluid enhances distal tubular K+ secretion. As an alternative to inducing bicarbonaturia to assist in potassium secretion, a carbonic anhydrase inhibitor may be administered to induce a kaliuresis. However, in the case presented here, for acetazolamide to be effective, the non-AG metabolic acidosis in this patient would first need to be corrected; acetazolamide would, thus, require the coadministration of intravenous sodium bicarbonate for maximal benefit. Finally, systemic hyperkalemia directly suppresses renal ammoniagenesis, ammonium excretion, and, thus, acid excretion. Correcting the hyperkalemia with a potassium-binding resin (Kayexalate) is sometimes appropriate in these patients; the subsequent decline in the plasma K+ concentration will also increase urinary ammonium excretion, helping correct the acidosis.
CASE 3
A 63-year-old man was admitted to the intensive care unit (ICU) with severe aspiration pneumonia. Past medical history included schizophrenia, for which he required institutional care; treatment had included neuroleptics and intermittent lithium, the latter restarted 6 months before admission. The patient was treated with antibiotics and intubated for several days, with the development of polyuria (3–5 L/d), hypernatremia, and acute renal insufficiency; the peak plasma Na+ concentration was 156 meq/L, and peak creatinine was 2.6 mg/dL. Urine osmolality was measured once and reported as 157 mOsm/kg, with a coincident plasma osmolality of 318 mOsm/kg. Lithium was stopped on admission to the ICU. On physical examination, the patient was alert, extubated, and thirsty. Weight was 97.5 kg.
Urine output for the previous 24 h had been 3.4 L, with an IV intake of 2 L/d of D5W. After 3 days of intravenous hydration, a water deprivation test was performed. A single dose of 2 μg IV desmopressin (DDAVP) was given at 9 h (+9): Why did the patient develop hypernatremia, polyuria, and acute renal insufficiency? What does the water deprivation test demonstrate? What is the underlying pathophysiology of this patient’s hypernatremic syndrome? This patient became polyuric after admission to the ICU with severe pneumonia, developing significant hypernatremia and acute renal insufficiency. Polyuria can result from either an osmotic diuresis or a water diuresis. An osmotic diuresis can be caused by excessive excretion of Na+-Cl−, mannitol, glucose, and/or urea, with a daily solute excretion of >750–1000 mOsm/d (>15 mOsm/kg body water per day). In this case, however, the patient was excreting large volumes of very hypotonic urine, with a urine osmolality that was substantially lower than that of plasma; this, by definition, was a water diuresis, resulting in inappropriate excretion of free water and hypernatremia. The appropriate response to hypernatremia and a plasma osmolality >295 mOsm/kg is an increase in circulating vasopressin (AVP) and the excretion of low volumes (<500 mL/d) of maximally concentrated urine, i.e., urine with osmolality >800 mOsm/kg. This patient’s response to hypernatremia was clearly inappropriate, due to either a loss of circulating AVP (central diabetes insipidus [CDI]) or renal resistance to AVP (nephrogenic diabetes insipidus [NDI]). Ongoing loss of free water was sufficiently severe in this patient that absolute hypovolemia ensued, despite the fact that approximately two-thirds of the excreted water was derived from the intracellular fluid compartment rather than the ECF compartment. 
Hypovolemia led to an acute decrease in the glomerular filtration rate (GFR), i.e., acute renal insufficiency, with gradual improvement following hydration (see below). Following the correction of hypernatremia and acute renal insufficiency with appropriate hydration (see below), the patient was subjected to a water deprivation test followed by administration of DDAVP. This test helps determine whether an inappropriate water diuresis is caused by CDI or NDI. The patient was water restricted beginning in the early morning, with careful monitoring of vital signs and urine output; overnight water deprivation of patients with diabetes insipidus is unsafe and clinically inappropriate, given the potential for severe hypernatremia. The plasma Na+ concentration, which is more accurate and more immediately available than plasma osmolality, was monitored hourly during water deprivation. A baseline AVP sample was drawn at the beginning of the test, with a second sample drawn once the plasma Na+ reached 148–150 meq/L. At this point, a single 2-μg dose of the V2 AVP receptor agonist DDAVP was administered. An alternative approach would have been to measure AVP and administer DDAVP when the patient was initially hypernatremic; however, it would have been less safe to administer DDAVP in the setting of renal impairment because clearance of DDAVP is renal dependent. The patient’s water deprivation test was consistent with NDI, with an AVP level within the normal range in the setting of hypernatremia (i.e., no evidence of CDI) and an inappropriately low urine osmolality that failed to increase by >50% or >150 mOsm/kg after both water deprivation and the administration of DDAVP. This defect would be considered compatible with complete NDI; patients with partial NDI can achieve urine osmolalities of 500–600 mOsm/kg after DDAVP treatment but will not maximally concentrate their urine to 800 mOsm/kg or higher. 
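Two pieces of interpretive logic from this case, the osmotic-versus-water-diuresis distinction and the readout of the water deprivation/DDAVP test, can be sketched as follows. This is a didactic simplification using the cutoffs quoted in the text; the function names and the post-DDAVP sample values are illustrative, not measurements from the case:

```python
def classify_polyuria(urine_osm, urine_vol_l):
    """Daily solute excretion = Uosm (mOsm/kg) x urine volume (L/d).
    >750-1000 mOsm/d suggests an osmotic diuresis; a large volume of
    dilute urine below that indicates a water diuresis."""
    solute_per_day = urine_osm * urine_vol_l
    label = "osmotic diuresis" if solute_per_day > 750 else "water diuresis"
    return label, solute_per_day

def interpret_ddavp_response(uosm_before, uosm_after):
    """Urine osmolality response to DDAVP after water deprivation:
    a rise of >50% or >150 mOsm/kg argues against renal AVP resistance."""
    rise = uosm_after - uosm_before
    if rise > 150 or uosm_after > 1.5 * uosm_before:
        return "responsive (argues for central DI if AVP is low)"
    if uosm_after < 300:
        return "minimal response (consistent with complete NDI)"
    return "partial response (consistent with partial NDI)"

# This patient: Uosm 157 mOsm/kg at 3.4 L/d -> ~534 mOsm/d, a water diuresis.
print(classify_polyuria(157, 3.4))
# Hypothetical post-DDAVP values resembling this case: dilute urine, little change.
print(interpret_ddavp_response(uosm_before=160, uosm_after=180))
```

With the patient's measured values, daily solute excretion falls well below the osmotic-diuresis range, and a persistently dilute urine after DDAVP fits the complete NDI described in the discussion.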
NDI has a number of genetic and acquired causes, which all share interference with some aspect of the renal concentrating mechanism. For example, loss-of-function mutations in the V2 AVP receptor cause X-linked NDI. This patient suffered from NDI due to lithium therapy, perhaps the most common cause of NDI in adult medicine. Lithium causes NDI via direct inhibition of renal glycogen synthase kinase-3 (GSK3), a kinase thought to be the pharmacologic target of lithium in psychiatric disease; renal GSK3 is required for the response of principal cells to AVP. Lithium also induces the expression of cyclooxygenase-2 (COX2) in the renal medulla; COX2-derived prostaglandins inhibit AVP-stimulated salt transport by the thick ascending limb and AVP-stimulated water transport by the collecting duct, thereby exacerbating lithium-associated polyuria. The entry of lithium through the amiloride-sensitive Na+ channel ENaC (Fig. 64e-1) is required for the effect of the drug on principal cells, such that combined therapy with lithium and amiloride can mitigate lithium-associated NDI. However, lithium causes chronic tubulointerstitial scarring and chronic kidney disease after prolonged therapy, such that patients may have a persistent NDI long after stopping the drug, with a reduced therapeutic benefit from amiloride. Notably, this particular patient had been treated intermittently for several years with lithium, with the development of chronic kidney disease (baseline creatinine of 1.3–1.4) and NDI that persisted after stopping the drug. How should this patient be treated? What are the major pitfalls of therapy? This patient developed severe hypernatremia due to a water diuresis from lithium-associated NDI. Treatment of hypernatremia must include both replacement of the existing free water deficit and daily replacement of ongoing free water loss.
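These two quantities, the existing free water deficit and the ongoing free water loss, can be sketched in code. Total-body water is taken as 60% of body weight in men and 50% in women, and the sample values are this patient's measured data:

```python
def free_water_deficit(weight_kg, plasma_na, male=True):
    """Free water deficit = TBW x ([Na+] - 140)/140, with total-body
    water (TBW) estimated as 60% of weight in men, 50% in women."""
    tbw = weight_kg * (0.6 if male else 0.5)
    return tbw * (plasma_na - 140) / 140

def electrolyte_free_water_clearance(urine_vol_l, urine_na, urine_k, plasma_na):
    """CeH2O = V x (1 - (U_Na + U_K)/P_Na): daily ongoing free water loss."""
    return urine_vol_l * (1 - (urine_na + urine_k) / plasma_na)

deficit = free_water_deficit(weight_kg=97.5, plasma_na=150)    # ~4.2 L
ongoing = electrolyte_free_water_clearance(3.4, 34, 5.2, 150)  # ~2.5 L/d
print(round(deficit, 1), round(ongoing, 1))
```

Both results reproduce the worked numbers in the case: a deficit of ~4.2 L to be replaced over 48–72 h, plus ~2.5 L/d of ongoing electrolyte-free water loss that must also be covered.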
The first step is to estimate total-body water (TBW), typically estimated as 50% of the body weight in women and 60% in men. The free water deficit is then calculated as ([Na+ − 140]/140) × TBW. In this patient, the free water deficit was 4.2 L at a weight of 97.5 kg and plasma Na+ concentration of 150 meq/L. This free water deficit should be replaced slowly over 48–72 h to avoid increasing the plasma Na+ concentration by >10 meq/L per 24 h. A common mistake is to replace this deficit while neglecting to replace ongoing losses of free water, such that plasma Na+ concentration either fails to correct or, in fact, increases. Ongoing losses of free water can be estimated using the equation for electrolyte-free water clearance: CeH2O = V × (1 − [UNa + UK]/PNa), where V is urinary volume, UNa is urinary [Na+], UK is urinary [K+], and PNa is plasma [Na+]. For this patient, the CeH2O was 2.5 L/d when initially evaluated, i.e., with urine Na+ and K+ concentrations of 34 and 5.2 meq/L, plasma Na+ concentration of 150 meq/L, and a urinary volume of 3.4 L. Therefore, the patient was given 2.5 L of D5W over the first 24 h to replace ongoing free water losses, along with 2.1 L of D5W to replace half his free water deficit. Daily random urine electrolytes and urinary volume measurement can be used to monitor CeH2O and adjust daily fluid administration in this manner, while following plasma Na+ concentration. Physicians often calculate the free water deficit to guide therapy of hypernatremia, providing half the deficit in the first 24 h. This approach can be adequate in patients who do not have significant ongoing losses of free water, e.g., with hypernatremia due to decreased free water intake. This case illustrates how free water requirements can be grossly underestimated in hypernatremic patients if ongoing, daily free water losses are not taken into account.
CASE 4
A 78-year-old man was admitted with pneumonia and hyponatremia.
Plasma Na+ concentration was initially 129 meq/L, decreasing within 3 days to 118–120 meq/L despite fluid restriction to 1 L/d. A chest computed tomography (CT) revealed a right 2.8 × 1.6 cm infrahilar mass and postobstructive pneumonia. The patient was an active smoker. Past medical history was notable for laryngeal carcinoma treated 15 years prior with radiation therapy, renal cell carcinoma, peripheral vascular disease, and hypothyroidism. On review of systems, he denied headache, nausea, and vomiting. He had chronic hip pain, managed with acetaminophen with codeine. Other medications included cilostazol, amoxicillin/clavulanate, digoxin, diltiazem, and thyroxine. He was euvolemic on examination, with no lymphadenopathy and a normal chest examination.

Laboratory data: Na+ 120, K+ 4.3, Cl− 89, and HCO3− 23 meq/L; BUN 8 mg/dL; creatinine 1.0 mg/dL; glucose 93 mg/dL; albumin 3.1 g/dL; calcium 8.9 mg/dL; phosphate 2.8 mg/dL; magnesium 2.0 mg/dL; plasma osmolality 248 mOsm/kg; cortisol 25 μg/dL; TSH 2.6; uric acid 2.7 mg/dL. Urine: Na+ 97, K+ 22, and Cl− 86 meq/L; osmolality 597 mOsm/kg.

The patient was treated with furosemide, 20 mg PO bid, and salt tablets. The plasma Na+ concentration increased to 129 meq/L with this therapy; however, the patient developed orthostatic hypotension and dizziness. He was started on demeclocycline, 600 mg PO in the morning and 300 mg in the evening, just before discharge from hospital. Plasma Na+ concentration increased to 140 meq/L with a BUN of 23 mg/dL and creatinine of 1.4 mg/dL, at which point demeclocycline was reduced to 300 mg PO bid. Bronchoscopic biopsy eventually showed small-cell lung cancer; the patient declined chemotherapy and was admitted to hospice.

What factors contributed to this patient's hyponatremia? What are the therapeutic options? This patient developed hyponatremia in the context of a central lung mass and postobstructive pneumonia. He was clinically euvolemic, with a generous urine Na+ concentration and low plasma uric acid concentration. He was euthyroid, with no evidence of pituitary dysfunction or secondary adrenal insufficiency.
The clinical presentation is consistent with the syndrome of inappropriate antidiuresis (SIAD). Although pneumonia was a potential contributor to the SIAD, it was notable that the plasma Na+ concentration decreased despite a clinical response to antibiotics. It was suspected that this patient had SIAD due to small-cell lung cancer, with a central lung mass on chest CT and a significant smoking history. There was a history of laryngeal cancer and renal cancer but with no evidence of recurrent disease; these malignancies were not considered contributory to his SIAD. Biopsy of the lung mass ultimately confirmed the diagnosis of small-cell lung cancer, which is responsible for ~75% of malignancy-associated SIAD; ~10% of patients with this neuroendocrine tumor will have a plasma Na+ concentration of <130 meq/L at presentation. The patient had no other “nonosmotic” stimuli for an increase in AVP, with no medications associated with SIAD and minimal pain or nausea. The patient had no symptoms attributable to hyponatremia but was judged at risk for worsening hyponatremia from severe SIAD. Persistent, chronic hyponatremia (duration >48 h) results in an efflux of organic osmolytes (creatine, betaine, glutamate, myoinositol, and taurine) from brain cells; this response reduces intracellular osmolality and the osmotic gradient favoring water entry. This cellular response does not fully protect patients from symptoms, which can include vomiting, nausea, confusion, and seizures, usually at plasma Na+ concentration <125 meq/L. Even patients who are judged “asymptomatic” can manifest subtle gait and cognitive defects that reverse with correction of hyponatremia. Chronic hyponatremia also increases the risk of bony fractures due to an increased risk of falls and to a hyponatremia-associated reduction in bone density. Therefore, every attempt should be made to correct plasma Na+ concentration safely in patients with chronic hyponatremia. 
This is particularly true in malignancy-associated SIAD, where it can take weeks to months for a tissue diagnosis and the subsequent reduction in AVP following initiation of chemotherapy, radiotherapy, and/or surgery.

What are the therapeutic options in SIAD? Water deprivation, a cornerstone of therapy for SIAD, had little effect on the plasma Na+ concentration in this patient. The urine:plasma electrolyte ratio ((urinary [Na+] + urinary [K+])/plasma [Na+]) can be used to estimate electrolyte-free water excretion and the required degree of water restriction; patients with a ratio of >1 should be more aggressively restricted (<500 mL/d), those with a ratio of ~1 should be restricted to 500–700 mL/d, and those with a ratio <1 should be restricted to <1 L/d. This patient had a urine:plasma electrolyte ratio of ~1 and predictably did not respond to a moderate water restriction of ~1 L/d. A more aggressive water restriction would theoretically have been successful; however, this can be very difficult for patients with SIAD to tolerate, given that their thirst is also inappropriately stimulated. Combined therapy with furosemide and salt tablets can often increase the plasma Na+ concentration in SIAD; furosemide reduces maximal urinary concentrating ability by inhibiting the countercurrent mechanism, whereas the salt tablets mitigate diuretic-associated NaCl loss and amplify the ability to excrete free water by increasing urinary solute excretion. This regimen is not always successful and requires careful titration of salt tablets to avoid volume depletion; indeed, in this particular patient, the plasma Na+ concentration remained <130 meq/L and the patient became orthostatic. The principal cell toxin demeclocycline is an alternative oral agent in SIAD. Treatment with demeclocycline was very successful in this patient, with an increase in plasma Na+ concentration to 140 meq/L. However, demeclocycline can be natriuretic, leading to a prerenal decrease in GFR.
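The tiered water restriction based on the urine:plasma electrolyte ratio, described earlier in this case, can be sketched in a few lines; the exact numeric cutoffs used here to decide when a ratio counts as "~1" are illustrative assumptions, not values from the text:

```python
def water_restriction(urine_na, urine_k, plasma_na):
    """Urine:plasma electrolyte ratio (UNa + UK)/PNa and the daily
    water restriction tier suggested in the text. Bands of 0.95-1.05
    for a ratio of "~1" are an illustrative assumption."""
    ratio = (urine_na + urine_k) / plasma_na
    if ratio > 1.05:
        advice = "restrict to <500 mL/d"
    elif ratio >= 0.95:
        advice = "restrict to 500-700 mL/d"
    else:
        advice = "restrict to <1 L/d"
    return ratio, advice

# This patient: urine Na+ 97, K+ 22, plasma Na+ 120 meq/L -> ratio ~1,
# so a moderate ~1 L/d restriction was predictably inadequate.
ratio, advice = water_restriction(97, 22, 120)
```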
Demeclocycline has also been implicated in nephrotoxic injury, particularly in patients with cirrhosis and chronic liver disease, in whom the drug accumulates. Notably, this particular patient developed a significant but stable decrease in GFR while on demeclocycline, necessitating a reduction in the administered dose. A major advance in the management of hyponatremia was the clinical development of AVP antagonists (vaptans). These agents inhibit the effect of AVP on renal V2 receptors, resulting in the excretion of electrolyte-free water and correction of hyponatremia. The specific indications for these agents are not as yet clear, despite U.S. Food and Drug Administration (FDA) approval for the management of both euvolemic and hypervolemic hyponatremia. It is, however, anticipated that the vaptans will have an increasing role in the management of SIAD and other causes of hyponatremia. Indeed, had this particular patient continued with active therapy for his cancer, substitution of demeclocycline with oral tolvaptan (a V2-specific oral vaptan) would have been the next appropriate step, given the development of renal insufficiency with demeclocycline. As with other measures to correct hyponatremia (e.g., hypertonic saline, demeclocycline), the vaptans have the potential to "overcorrect" the plasma Na+ concentration (a rise of >8–10 meq/L per 24 h or >18 meq/L per 48 h), thus increasing the risk for osmotic demyelination (see Case 5). Therefore, the plasma Na+ concentration should be monitored closely during the initiation of therapy with these agents. In addition, long-term use of tolvaptan has been associated with abnormalities in liver function tests; hence, use of this agent should be restricted to only 1–2 months.

A 76-year-old woman presented with a several-month history of diarrhea, with marked worsening over the 2–3 weeks before admission (up to 12 stools a day). Review of systems was negative for fever, orthostatic dizziness, nausea and vomiting, or headache.
Past medical history included hypertension, kidney stones, and hypercholesterolemia; medications included atenolol, spironolactone, and lovastatin. She also reliably consumed >2 L of liquid per day in management of the nephrolithiasis. The patient received 1 L of saline over the first 5 h of her hospital admission. On examination at hour 6, the heart rate was 72 sitting and 90 standing, and blood pressure was 105/50 mmHg lying and standing. Her jugular venous pressure (JVP) was indistinct, with no peripheral edema. On abdominal examination, the patient had a slight increase in bowel sounds but a nontender abdomen and no organomegaly.

The plasma Na+ concentration on admission was 113 meq/L, with a creatinine of 2.35 mg/dL (Table 64e-1). At hospital hour 7, the plasma Na+ concentration was 120 meq/L, potassium 5.4 meq/L, chloride 90 meq/L, bicarbonate 22 meq/L, BUN 32 mg/dL, creatinine 2.02 mg/dL, glucose 89 mg/dL, total protein 5.0 g/dL, and albumin 1.9 g/dL. The hematocrit was 33.9, white count 7.6, and platelets 405. A morning cortisol was 19.5, with thyroid-stimulating hormone (TSH) of 1.7. The patient was treated with 1 μg of intravenous DDAVP, along with 75 mL/h of intravenous half-normal saline. After the plasma Na+ concentration dropped to 116 meq/L, intravenous fluid was switched to normal saline at the same infusion rate. The subsequent results are shown in Table 64e-1.

This patient presented with hypovolemic hyponatremia and a "prerenal" reduction in GFR, with an increase in serum creatinine. She had experienced diarrhea for some time and manifested an orthostatic tachycardia after a liter of normal saline. As expected for hypovolemic hyponatremia, the urine Na+ concentration was <20 meq/L in the absence of congestive heart failure or other causes of hypervolemic hyponatremia, and she responded to saline hydration with an increase in plasma Na+ concentration and a decrease in creatinine.
The initial hypovolemia increased the sensitivity of this patient’s AVP response to osmolality, both decreasing the osmotic threshold for AVP release and increasing the slope of the osmolality response curve. AVP has a half-life of only 10–20 min; therefore, the acute increase in intravascular volume after a liter of intravenous saline led to a rapid reduction in circulating AVP. The ensuing water diuresis is the primary explanation for the rapid increase in plasma Na+ concentration in the first 7 h of her hospitalization. The key concern in this case was the evident chronicity of the patient’s hyponatremia, with several weeks of diarrhea followed by 2–3 days of acute exacerbation. This patient was judged to have chronic hyponatremia, i.e., with a suspected duration of >48 h; as such, she would be predisposed to osmotic demyelination were she to undergo too rapid a correction in her plasma Na+ concentration, i.e., by >8–10 meq/L in 24 h or 18 meq/L in 48 h. At presentation, she had no symptoms that one would typically attribute to acute hyponatremia, and the plasma Na+ concentration had already increased by a sufficient amount to protect from cerebral edema; however, she had corrected by 1 meq/L per hour within the first 7 h of admission, consistent with impending overcorrection. To reduce or halt the increase in plasma Na+ concentration, the patient received 1 μg of intravenous DDAVP along with intravenous free water. Given the hypovolemia and resolving acute renal insufficiency, a decision was made to administer half-normal saline as a source of free water, rather than D5W; this was switched to normal saline when plasma Na+ concentration acutely dropped to 117 meq/L (Table 64e-1). Overcorrection of chronic hyponatremia is a major risk factor for the development of osmotic demyelination syndrome (ODS). 
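The correction-rate limits discussed in this case (>8–10 meq/L per 24 h or >18 meq/L per 48 h for chronic hyponatremia) lend themselves to a simple bedside check. A minimal sketch, assuming a linear projection of the observed rate of rise:

```python
def overcorrection_risk(na_start, na_now, hours):
    """Flag a correction rate of chronic hyponatremia that, projected
    linearly, would exceed the limits discussed in the text:
    >8 meq/L per 24 h or >18 meq/L per 48 h."""
    delta = na_now - na_start
    projected_24h = delta * 24.0 / hours
    projected_48h = delta * 48.0 / hours
    return projected_24h > 8 or projected_48h > 18

# This patient corrected from 113 to 120 meq/L over the first 7 h,
# i.e., ~1 meq/L per hour -- on pace for ~24 meq/L per 24 h.
at_risk = overcorrection_risk(113, 120, 7)   # True: impending overcorrection
```

In this patient the flag would have fired within hours of admission, matching the clinical judgment that prompted DDAVP and free water administration.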
Animal studies show a neurologic and survival benefit in ODS of "re-lowering" plasma Na+ concentration with DDAVP and free water administration; this approach is demonstrably safe in patients with hyponatremia, with no evident risk of seizure or other sequelae. This combination can be used to prevent an overcorrection or to re-lower plasma Na+ concentration in patients who have already overcorrected. DDAVP is required because in most of these patients endogenous AVP levels have plummeted, resulting in a free water diuresis; the administration of free water alone has minimal effect in this setting, given the relative absence of circulating AVP. An alternative approach in patients who present with severe hyponatremia is to treat them from the outset with DDAVP, clamping urinary free water excretion and urine osmolality, while coadministering hypertonic saline to increase the plasma Na+ concentration slowly and in a more controlled fashion. In this patient, the plasma Na+ concentration recovered slowly over the days after DDAVP administration (Table 64e-1; serial creatinine values were 1.2, 2.35, 2.10, 2.02, 1.97, 1.79, 1.53, 1.20, and 1.13 mg/dL). It is conceivable that residual hypovolemic hyponatremia attenuated the recovery of the plasma Na+ concentration. Alternatively, attenuated recovery was due to persistent effects of the single dose of DDAVP. Of note, although the plasma half-life of DDAVP is only 1–2 h, pharmacodynamic studies indicate a much more prolonged effect on urine output and/or urine osmolality. One final consideration is the effect of the patient's initial renal dysfunction on the pharmacokinetics and pharmacodynamics of the administered DDAVP, which is renally excreted; DDAVP should be administered with caution for the reinduction of hyponatremia in patients with chronic kidney disease or acute renal dysfunction.

A 44-year-old woman was referred from a local hospital after presenting with flaccid paralysis. Severe hypokalemia was documented (2.0 meq/L), and an infusion containing KCl was initiated.
Laboratory data: sodium 140 meq/L, potassium 2.6 meq/L, chloride 115 meq/L, bicarbonate 15 meq/L, anion gap 10 meq/L, BUN 22 mg/dL, and creatinine 1.4 mg/dL. Blood gas: pH 7.32, PaCO2 30 mmHg, HCO3− 15 meq/L. Rheumatoid factor positive, anti-Ro/SS-A positive, and anti-La/SS-B positive. Urinalysis: pH 6.0; normal sediment without white or red blood cell casts and no bacteria. The urine protein-to-creatinine ratio was 0.150 g/g. Urinary electrolyte values were: Na+ 35, K+ 40, and Cl− 18 meq/L. Therefore, the urine anion gap was positive, indicating low urine NH4+ excretion.

The diagnosis in this case is classic hypokalemic dRTA from Sjögren's syndrome. This patient presented with a non-AG metabolic acidosis. The urine AG was positive, indicating an abnormally low excretion of ammonium in the face of systemic acidosis. The urine pH was inappropriately alkaline, yet there was no evidence of hypercalciuria, nephrocalcinosis, or bone disease. The patient was subsequently shown to exhibit hyperglobulinemia. These findings, taken together, indicate that the cause of this patient's hypokalemia and non-AG metabolic acidosis was a renal tubular abnormality. The hypokalemia and abnormally low excretion of ammonium, as estimated by the urine AG, in the absence of glycosuria, phosphaturia, or aminoaciduria (Fanconi's syndrome), define the entity of classic distal renal tubular acidosis (dRTA), also known as type 1 RTA. Because of the hyperglobulinemia, additional serology was obtained, providing evidence for the diagnosis of primary Sjögren's syndrome. Furthermore, additional history indicated a 5-year history of xerostomia and keratoconjunctivitis sicca but without synovitis, arthritis, or rash. Classic dRTA occurs frequently in patients with Sjögren's syndrome and is a result of an immunologic attack on the collecting tubule, causing failure of the H+-ATPase to be inserted into the apical membrane of type A intercalated cells.
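The urine anion gap used above to infer low ammonium excretion is a one-line calculation; a minimal sketch with this patient's urinary electrolytes:

```python
def urine_anion_gap(u_na, u_k, u_cl):
    """Urine AG (meq/L) = UNa + UK - UCl. In a non-AG metabolic
    acidosis, a positive value implies inappropriately low urinary
    NH4+ excretion (NH4+ is excreted with Cl-), pointing toward dRTA;
    a negative value suggests an intact renal response, e.g., with
    gastrointestinal HCO3- loss."""
    return u_na + u_k - u_cl

# This patient: urine Na+ 35, K+ 40, Cl- 18 meq/L
uag = urine_anion_gap(35, 40, 18)   # +57 meq/L -> low NH4+ excretion
```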
Sjögren’s syndrome is one of the best-documented acquired causes of classic dRTA. The loss of H+-ATPase function also occurs with certain inherited forms of classic dRTA. There was no family history in the present case, and no other family members were affected. A number of autoantibodies have been associated with Sjögren’s syndrome; it is likely that these autoantibodies prevent trafficking or function of the H+-ATPase in the type A intercalated cell of the collecting tubule. Although proximal RTA has also been reported in patients with Sjögren’s syndrome, it is much less frequent, and there were no features of proximal tubule dysfunction (Fanconi’s syndrome) in this patient. The hypokalemia is due to secondary hyperaldosteronism from volume depletion. The long-term renal prognosis for patients with classic dRTA due to Sjögren’s syndrome has not been established. Nevertheless, the metabolic acidosis and the hypokalemia respond to alkali replacement with either sodium citrate solution (Shohl’s solution) or sodium bicarbonate tablets. Obviously, potassium deficits must be replaced initially, but potassium replacement is usually not required in dRTA patients long term because sodium bicarbonate (or citrate) therapy expands volume and corrects the secondary hyperaldosteronism. A consequence of the interstitial infiltrate seen in patients with Sjögren’s syndrome and classic dRTA is progression of chronic kidney disease. Cytotoxic therapy plus glucocorticoids has been the mainstay of therapy in Sjögren’s syndrome for many years, although B lymphocyte infiltration in salivary gland tissue subsides and urinary acidification improves after treatment with rituximab.

A 32-year-old man was admitted to the hospital with weakness and hypokalemia. The patient had been very healthy until 2 months previously when he developed intermittent leg weakness. His review of systems was otherwise negative. He denied drug or laxative abuse and was on no medications.
Past medical history was unremarkable, with no history of neuromuscular disease. Family history was notable for a sister with thyroid disease. Physical examination was notable only for reduced deep tendon reflexes.

Laboratory data (admission, then after treatment): sodium 139, 143 meq/L; potassium 2.0, 3.8 meq/L; chloride 105, 107 meq/L; bicarbonate 26, 29 meq/L; BUN 11, 16 mg/dL; creatinine 0.6, 1.0 mg/dL; calcium 8.8, 8.8 mg/dL; phosphate 1.2 mg/dL; albumin 3.8 g/dL; TSH 0.08 μIU/L (normal 0.2–5.39); free T4 41 pmol/L (normal 10–27).

This patient developed hypokalemia due to a redistribution of potassium between the intracellular and extracellular compartments; this pathophysiology was readily apparent following calculation of the TTKG. The TTKG is calculated as (UK × Posm)/(PK × Uosm), where UK and PK are the urine and plasma K+ concentrations and Uosm and Posm are the urine and plasma osmolalities. The expected values for the TTKG are <3 in the presence of hypokalemia and >7–8 in the presence of hyperkalemia (see also Case 2 and Case 8). Alternatively, a urinary K+-to-creatinine ratio of >13 mmol/g creatinine (>1.5 mmol/mmol creatinine) is compatible with excessive renal K+ excretion. In this case, the calculated TTKG was 2.5, consistent with appropriate renal conservation of K+ and a nonrenal cause for hypokalemia. In the absence of significant gastrointestinal loss of K+, the patient was diagnosed with a "redistributive" subtype of hypokalemia. More than 98% of total-body potassium is intracellular; regulated buffering of extracellular K+ by this large intracellular pool plays a crucial role in the maintenance of a stable plasma K+ concentration. Clinically, changes in the exchange and distribution of intra- and extracellular K+ can cause significant hypo- or hyperkalemia. Insulin, β2-adrenergic activity, thyroid hormone, and alkalosis promote cellular uptake of K+ by multiple interrelated mechanisms, leading to hypokalemia.
In particular, alterations in the activity of the endogenous sympathetic nervous system can cause hypokalemia in several settings, including alcohol withdrawal, hyperthyroidism, acute myocardial infarction, and severe head injury. Weakness is common in severe hypokalemia; hypokalemia causes hyperpolarization of muscle, thereby impairing the capacity to depolarize and contract. In this particular patient, Graves’ disease caused hyperthyroidism and hypokalemic paralysis (thyrotoxic periodic paralysis [TPP]). TPP develops more frequently in patients of Asian or Hispanic origin. This predisposition has been linked to genetic variation in Kir2.6, a muscle-specific, thyroid hormone–induced K+ channel; however, the pathophysiologic mechanisms that link dysfunction of this ion channel to TPP have yet to be elucidated. The hypokalemia in TPP is attributed to both direct and indirect activation of the Na+/ K+-ATPase by thyroid hormone, resulting in increased uptake of K+ by muscle and other tissues. Thyroid hormone induces expression of multiple subunits of the Na+/K+-ATPase in skeletal muscle, increasing the capacity for uptake of K+; hyperthyroid increases in β-adrenergic activity are also thought to play an important role in TPP. Clinically, patients with TPP present with weakness of the extremities and limb girdle, with paralytic episodes that occur most frequently between 1 and 6 am. Precipitants of weakness include high carbohydrate loads and strenuous exercise. Signs and symptoms of hyperthyroidism are not always present, often leading to delays in diagnosis. Hypokalemia is often profound and usually accompanied by redistributive hypophosphatemia, as in this case. A TTKG of <2–3 separates patients with TPP from those with hypokalemia due to renal potassium wasting, who will have TTKG values that are >4. 
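The TTKG threshold discussed above, separating redistributive hypokalemia such as TPP (TTKG <2–3) from renal potassium wasting (TTKG >4), can be sketched directly; the formula below is the standard TTKG expression (the in-text version of the formula lost its subscripts in transcription):

```python
def ttkg(u_k, p_k, u_osm, p_osm):
    """Transtubular K+ gradient = (UK x Posm)/(PK x Uosm).
    Expected values: <3 with hypokalemia (appropriate renal K+
    conservation) and >7-8 with hyperkalemia."""
    return (u_k * p_osm) / (p_k * u_osm)

def renal_k_wasting_in_hypokalemia(ttkg_value):
    """Per the text: in a hypokalemic patient, TTKG <2-3 suggests
    redistribution (e.g., TPP); TTKG >4 suggests renal K+ wasting."""
    return ttkg_value > 4

# This TPP patient's calculated TTKG was 2.5 -> no renal wasting;
# the hyperkalemia thresholds are used in the next case as well.
wasting = renal_k_wasting_in_hypokalemia(2.5)   # False
```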
This distinction is of considerable importance for therapy; patients with large potassium deficits require aggressive repletion with K+-Cl−, which has a significant risk of rebound hyperkalemia in TPP and related disorders. Ultimately, definitive therapy for TPP requires treatment of the associated hyperthyroidism. In the short term, however, potassium replacement is necessary to hasten muscle recovery and prevent cardiac arrhythmias. The average recovery time of an acute attack is reduced by ~50% in patients treated with intravenous K+-Cl− at a rate of 10 meq/h; however, this incurs a significant risk of rebound hyperkalemia, with up to 70% developing a plasma K+ concentration of >5.0 meq/L. This potential for rebound hyperkalemia is a general problem in the management of all causes of redistributive hypokalemia, resulting in the need to distinguish these patients accurately and rapidly from those with a large K+ deficit due to renal or extrarenal loss of K+. An attractive alternative to K+-Cl− replacement in TPP is treatment with high-dose propranolol (3 mg/kg), which rapidly reverses the associated hypokalemia, hypophosphatemia, and paralysis. Notably, rebound hyperkalemia is not associated with this treatment.

A 66-year-old man was admitted to hospital with a plasma K+ concentration of 1.7 meq/L and profound weakness. The patient had noted progressive weakness over several days, to the point that he was unable to rise from bed. Past medical history was notable for small-cell lung cancer with metastases to brain, liver, and adrenals. The patient had been treated with one cycle of cisplatin/etoposide 1 year before this admission, which was complicated by acute kidney injury (peak creatinine of 5, with residual chronic kidney disease), and three subsequent cycles of cyclophosphamide/doxorubicin/vincristine, in addition to 15 treatments with whole-brain radiation. On physical examination, the patient was jaundiced.
Blood pressure was 130/70 mmHg, increasing to 160/98 mmHg after 1 L of saline, with a JVP at 8 cm. There was generalized muscle weakness.

Laboratory data (PTA, admission, HD2): potassium 3.7, 1.7, 3.5 meq/L; pH 7.47; creatinine 2.8, 2.9, 2.3 mg/dL; magnesium 1.3, 1.6, 2.4 mg/dL; albumin 3.4, 2.8, 2.3 g/dL; total bilirubin 0.65, 5.19, 5.5 mg/dL. Abbreviations: ACTH, adrenocorticotropic hormone; HD2, hospital day 2; PTA, prior to admission.

The patient's hospital course was complicated by acute respiratory failure attributed to pulmonary embolism; he died 2 weeks after admission.

Why was this patient hypokalemic? Why was he weak? Why did he have an alkalosis? This patient suffered from metastatic small-cell lung cancer, which was persistent despite several rounds of chemotherapy and radiotherapy. He presented with profound hypokalemia, alkalosis, hypertension, severe weakness, jaundice, and worsening liver function tests. With respect to the hypokalemia, there was no evident cause of nonrenal potassium loss, e.g., diarrhea. The urinary TTKG was 11.7 at a plasma K+ concentration of 1.7 meq/L; this TTKG value is consistent with inappropriate renal K+ secretion despite severe hypokalemia. The TTKG is calculated as (UK × Posm)/(PK × Uosm). The expected values for the TTKG are <3 in the presence of hypokalemia and >7–8 in the presence of hyperkalemia (see also Case 2 and Case 6). The patient had several explanations for excessive renal loss of potassium. First, he had a history of cisplatin-associated acute kidney injury, with residual chronic kidney disease. Cisplatin can cause persistent renal tubular defects, with prominent hypokalemia and hypomagnesemia; however, this patient had not previously required potassium or magnesium repletion, suggesting that cisplatin-associated renal tubular defects did not play a major role in this presentation with severe hypokalemia. Second, he was hypomagnesemic on presentation, suggesting total-body magnesium depletion.
Magnesium depletion has inhibitory effects on muscle Na+/K+-ATPase activity, reducing K+ influx into muscle cells and causing a secondary increase in K+ excretion. Magnesium depletion also increases K+ secretion by the distal nephron; this is attributed to a reduction in the magnesium-dependent, intracellular block of K+ efflux through the secretory K+ channel of principal cells (ROMK; Fig. 64e-1). Clinically, hypomagnesemic patients are refractory to K+ replacement in the absence of Mg2+ repletion. Again, however, this patient had not previously developed significant hypokalemia, despite periodic hypomagnesemia, such that other factors must have caused the severe hypokalemia. The associated hypertension in this case suggested an increase in mineralocorticoid activity, causing increased activity of ENaC channels in principal cells, NaCl retention, hypertension, and hypokalemia. The increase in ENaC-mediated Na+ transport in principal cells would have led to an increase in the lumen-negative potential difference in the connecting tubule and cortical collecting duct, driving an increase in K+ secretion through apical K+ channels (Fig. 64e-1). This explanation is compatible with the very high TTKG, i.e., an increase in K+ excretion that is inappropriate for the plasma K+ concentration.

What caused an increase in mineralocorticoid activity in this patient? The patient had bilateral adrenal metastases, indicating that primary hyperaldosteronism was unlikely. The clinical presentation (hypokalemia, hypertension, and alkalosis) and the history of small-cell lung cancer suggested Cushing's syndrome, with a massive increase in circulating glucocorticoids in response to ectopic adrenocorticotropic hormone (ACTH) secretion by his small-cell lung cancer.
Confirmation of this diagnosis was provided by a very high plasma cortisol level, high ACTH level, and increased urinary cortisol (see the laboratory data above). Why would an increase in circulating cortisol cause an apparent increase in mineralocorticoid activity? Cortisol and aldosterone have equal affinity for the mineralocorticoid receptor (MLR); thus, cortisol has mineralocorticoid-like activity. However, cells in the aldosterone-sensitive distal nephron (the distal convoluted tubule [DCT], connecting tubule [CNT], and collecting duct) are protected from circulating cortisol by the enzyme 11β-hydroxysteroid dehydrogenase-2 (11βHSD-2), which converts cortisol to cortisone (Fig. 64e-2); cortisone has minimal affinity for the MLR. Activation of the MLR causes activation of the basolateral Na+/K+-ATPase, activation of the thiazide-sensitive Na+-Cl− cotransporter in the DCT, and activation of apical ENaC channels in principal cells of the CNT and collecting duct (Fig. 64e-2).

FIGURE 64e-2 11β-Hydroxysteroid dehydrogenase-2 (11βHSD-2) and syndromes of apparent mineralocorticoid excess. The enzyme 11βHSD-2 protects cells in the aldosterone-sensitive distal nephron (the distal convoluted tubule [DCT], connecting tubule [CNT], and collecting duct) from the illicit activation of mineralocorticoid receptors (MLR) by cortisol. Binding of aldosterone to the MLR leads to activation of the thiazide-sensitive Na+-Cl− cotransporter in DCT cells and the amiloride-sensitive epithelial sodium channel (ENaC) in principal cells (CNT and collecting duct). Aldosterone also activates the basolateral Na+/K+-ATPase and, to a lesser extent, the apical secretory K+ channel ROMK (renal outer medullary K+ channel). Cortisol has affinity for the MLR equivalent to that of aldosterone; metabolism of cortisol to cortisone, which has no affinity for the MLR, prevents these cells from activation by circulating cortisol. Genetic deficiency of 11βHSD-2 or inhibition of its activity causes the syndromes of apparent mineralocorticoid excess (see Case 8).

Recessive loss-of-function mutations in the 11βHSD-2 gene lead to cortisol-dependent activation of the MLR and the syndrome of apparent mineralocorticoid excess (SAME), comprising hypertension, hypokalemia, hypercalciuria, and metabolic alkalosis, with suppressed plasma renin activity (PRA) and suppressed aldosterone. A similar syndrome is caused by biochemical inhibition of 11βHSD-2 by glycyrrhetinic/glycyrrhizinic acid (found in licorice, for example) and/or carbenoxolone. In Cushing's syndrome caused by increases in pituitary ACTH, the incidence of hypokalemia is only 10%, whereas it is ~70% in patients with ectopic secretion of ACTH, despite a similar incidence of hypertension. The activity of renal 11βHSD-2 is reduced in patients with ectopic ACTH compared with Cushing's syndrome, resulting in SAME; the prevailing theory is that the much greater cortisol production in ectopic ACTH syndromes overwhelms the renal 11βHSD-2 enzyme, resulting in activation of renal MLRs by unmetabolized cortisol (Fig. 64e-2).

Why was the patient so weak? The patient was profoundly weak due to the combined effect of hypokalemia and increased cortisol. Hypokalemia causes hyperpolarization of muscle, thereby impairing the capacity to depolarize and contract. Weakness and even ascending paralysis can frequently complicate severe hypokalemia. Hypokalemia also causes a myopathy and predisposes to rhabdomyolysis; notably, however, the patient had a normal creatine phosphokinase (CPK) level. Cushing's syndrome is often accompanied by a proximal myopathy, due to the protein-wasting effects of cortisol excess. The patient presented with a mixed acid-base disorder, with a significant metabolic alkalosis and a bicarbonate concentration of 44 meq/L.
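The acid-base analysis of this mixed disorder can be reproduced numerically using the rules applied in this case: respiratory compensation for metabolic alkalosis (Pco2 rises ~0.75 mmHg per 1-meq/L rise in bicarbonate) and the albumin-adjusted anion gap (AG corrected upward by 2.5 meq/L per 1 g/dL of albumin below 4 g/dL). A minimal sketch, assuming normal baselines of HCO3− 24 meq/L and Pco2 40 mmHg:

```python
def expected_pco2_metabolic_alkalosis(hco3, baseline_hco3=24.0, baseline_pco2=40.0):
    """Respiratory compensation: Pco2 rises ~0.75 mmHg per 1-meq/L
    rise in HCO3-. The baselines (24 meq/L, 40 mmHg) are assumed
    normal values."""
    return baseline_pco2 + 0.75 * (hco3 - baseline_hco3)

def albumin_adjusted_anion_gap(measured_ag, albumin_g_dl):
    """Adjust the AG upward by 2.5 meq/L per 1 g/dL of albumin
    below 4 g/dL."""
    return measured_ag + 2.5 * (4.0 - albumin_g_dl)

# This patient: HCO3- 44 meq/L, measured Pco2 62 mmHg, AG 21, albumin 2.8 g/dL
expected = expected_pco2_metabolic_alkalosis(44)   # 55 mmHg; measured 62 -> superimposed respiratory acidosis
adj_ag = albumin_adjusted_anion_gap(21, 2.8)       # 24 -> a third disorder, an AG acidosis
```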
A venous blood gas was drawn soon after his presentation; venous and arterial blood gases demonstrate a high level of agreement in hemodynamically stable patients, allowing for the interpretation of acid-base disorders with venous blood gas results. In response to his metabolic alkalosis, the Pco2 should have increased by 0.75 mmHg for each 1-meq/L increase in bicarbonate; the expected Pco2 should thus have been ~55 mmHg. Given the measured Pco2 of 62 mmHg, he had an additional respiratory acidosis, likely caused by respiratory muscle weakness from his acute hypokalemia and subacute hypercortisolism. The patient's albumin-adjusted AG was 21 + ([4 − 2.8] × 2.5) = 24 meq/L; this suggests a third acid-base disorder, an AG acidosis. Notably, the measured AG can increase in alkalosis, due both to increases in plasma protein concentrations (in hypovolemic alkalosis) and to the alkalemia-associated increase in net negative charge of plasma proteins, both causing an increase in unmeasured anions; however, this patient was neither volume-depleted nor particularly alkalemic, suggesting that these effects played a minimal role in his increased AG. Alkalosis also stimulates an increase in lactic acid production, due to activation of phosphofructokinase and accelerated glycolysis; unfortunately, however, a lactic acid level was not measured in this patient. It should be noted in this regard that alkalosis typically increases lactic acid levels by a mere 1.5–3 meq/L and that the patient was not significantly alkalemic. Regardless of the underlying pathophysiology, the increased AG was likely related to the metabolic alkalosis, given that the AG had decreased to 18 by hospital day 2, coincident with a reduction in plasma bicarbonate.

Why did the patient have a metabolic alkalosis? The activation of MLRs in the distal nephron increases distal nephron acidification and net acid secretion.
In consequence, mineralocorticoid excess causes a saline-resistant metabolic alkalosis, which is exacerbated significantly by the development of hypokalemia. Hypokalemia plays a key role in the generation of most forms of metabolic alkalosis, stimulating proximal tubular ammonium production, proximal tubular bicarbonate reabsorption, and distal tubular H+/K+-ATPase activity. The first priority in the management of this patient was to increase his plasma K+ and magnesium concentrations rapidly; hypomagnesemic patients are refractory to K+ replacement alone, resulting in the need to correct hypomagnesemia immediately. This was accomplished via the administration of both oral and intravenous K+-Cl−, giving a total of 240 meq over the first 18 h; 5 g of intravenous magnesium sulfate was also administered. Multiple 100-mL “minibags” of saline containing 20 meq each were infused, with cardiac monitoring and frequent measurement of plasma electrolytes. Of note, intravenous K+-Cl− should always be given in saline solutions because dextrose-containing solutions can increase insulin levels and exacerbate hypokalemia. This case illustrates the difficulty in predicting the whole-body deficit of K+ in hypokalemic patients. In the absence of abnormal K+ redistribution, the total deficit correlates with plasma K+ concentration, which drops by approximately 0.27 mM for every 100-mmol reduction in total-body stores; this would suggest a deficit of ~650 meq of K+ in this patient, at the admission plasma K+ concentration of 1.7 meq/L. Notably, however, alkalemia induces a modest intracellular shift of circulating K+ such that this patient’s initial plasma K+ concentration was not an ideal indicator of the total potassium deficit. Regardless of the underlying pathophysiology in this case, close monitoring of plasma K+ concentration is always essential during the correction of severe hypokalemia in order to gauge the adequacy of repletion and to avoid overcorrection. 
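The bedside arithmetic used in this case (expected Pco2 for the metabolic alkalosis, the albumin-adjusted AG, and the estimated total-body K+ deficit) can be sketched in a few lines; the baseline values (normal Pco2 of 40 mmHg, normal HCO3− of 24 meq/L, normal albumin of 4.0 g/dL, baseline K+ of 3.5 meq/L) are assumptions chosen to reproduce the numbers quoted above, not fixed constants:

```python
def expected_pco2_metabolic_alkalosis(hco3, normal_hco3=24.0, normal_pco2=40.0):
    """Pco2 rises ~0.75 mmHg per 1-meq/L rise in plasma bicarbonate."""
    return normal_pco2 + 0.75 * (hco3 - normal_hco3)

def albumin_adjusted_ag(measured_ag, albumin, normal_albumin=4.0):
    """Add 2.5 meq/L to the AG for each 1-g/dL fall in serum albumin."""
    return measured_ag + 2.5 * (normal_albumin - albumin)

def estimated_k_deficit_meq(plasma_k, baseline_k=3.5):
    """Plasma K+ falls ~0.27 meq/L per 100-mmol loss of total-body stores;
    ignores redistribution (e.g., the intracellular shift of alkalemia)."""
    return (baseline_k - plasma_k) / 0.27 * 100

print(expected_pco2_metabolic_alkalosis(44))  # 55.0 mmHg; measured 62 -> added respiratory acidosis
print(albumin_adjusted_ag(21, 2.8))           # ~24, as computed in the text
print(round(estimated_k_deficit_meq(1.7)))    # ~667 meq, i.e., roughly the ~650 quoted
```

As the last line shows, such estimates are approximate; frequent remeasurement of plasma K+, as described above, remains essential.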
Subsequent management of this patient’s Cushing’s syndrome and ectopic ACTH secretion was complicated by the respiratory issues. The prognosis in patients with ectopic ACTH secretion depends on the tumor histology and the presence or absence of distant metastases. This patient had an exceptionally poor prognosis, with widely metastatic small-cell lung cancer that had failed treatment; other patients with ectopic ACTH secretion caused by more benign, isolated tumors, most commonly bronchial carcinoid tumors, have a much better prognosis. In the absence of successful surgical resection of the causative tumor, management of this syndrome can include surgical adrenalectomy or medical therapy to block adrenal steroid production.

A stuporous 22-year-old man was admitted with a history of behaving strangely. His friends indicated he experienced recent emotional problems stemming from a failed relationship and had threatened suicide. There was a history of alcohol abuse, but his friends were unaware of recent alcohol consumption. The patient was obtunded on admission, with no evident focal neurologic deficits. The remainder of the physical examination was unremarkable.

Na+ 140 meq/L
K+ 5 meq/L
Cl− 95 meq/L
HCO3− 10 meq/L
Glucose 125 mg/dL
BUN 15 mg/dL
Creatinine 0.9 mg/dL
Ionized calcium 4.0 mg/dL
Plasma osmolality 325 mOsm/kg H2O

Urinalysis revealed crystalluria, with a mixture of envelope-shaped and needle-shaped crystals. This patient presented with CNS manifestations and a history of suspicious behavior, suggesting ingestion of a toxin. The AG was strikingly elevated at 35 meq/L. The ΔAG of 25 significantly exceeded the ΔHCO3− of 15. The fact that the Δ values were significantly disparate indicates that the most likely acid-base diagnosis in this patient is a mixed high-AG metabolic acidosis and a metabolic alkalosis. The metabolic alkalosis in this case may have been the result of vomiting. Nevertheless, the most useful finding is that the osmolar gap is elevated.
The osmolar gap of 33 (the difference between measured and calculated osmolality, or 325 – 292) in the face of a high-AG metabolic acidosis is diagnostic of an osmotically active metabolite in plasma; a difference of >10 mOsm/kg indicates a significant concentration of an unmeasured osmolyte. Examples of toxic osmolytes include ethylene glycol, diethylene glycol, methanol, and propylene glycol. Several caveats apply to the interpretation of the osmolar gap and AG in the differential diagnosis of toxic alcohol ingestions. First, unmeasured, neutral osmolytes can also accumulate in lactic acidosis and alcoholic ketoacidosis; i.e., an elevated osmolar gap is not specific to AG acidoses associated with toxic alcohol ingestions. Second, patients can present having extensively metabolized the ingested toxin, with an insignificant osmolar gap but a large AG; i.e., the absence of an elevated osmolar gap does not rule out toxic alcohol ingestion. Third, the converse can also be seen in patients who present earlier after ingestion of the toxin, i.e., a large osmolar gap with minimal elevation of the AG. Finally, clinicians should be aware of the effect of co-ingested ethanol, which can itself elevate the osmolar gap and can reduce metabolism of the toxic alcohols via competitive inhibition of alcohol dehydrogenase (see below), thus attenuating the expected increase in the AG. Ethylene glycol is commonly available in antifreeze and solvents and may be ingested accidentally or in a suicide attempt. The metabolism of ethylene glycol by alcohol dehydrogenase generates acids such as glycoaldehyde, glycolic acid, and oxalic acid. The initial effects of intoxication are on the CNS and, in the earliest stages, mimic inebriation, but may quickly progress to full-blown coma. Delay in treatment is one of the most common causes of mortality with toxic alcohol poisoning.
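The laboratory reasoning in this case can be reproduced directly. The calculated-osmolality formula below (2 × Na + glucose/18 + BUN/2.8) is the standard one consistent with the 292 mOsm/kg figure above; the normal AG (10 meq/L) and normal HCO3− (25 meq/L) follow the values used elsewhere in this chapter:

```python
def anion_gap(na, cl, hco3):
    """AG = Na+ - (Cl- + HCO3-), all in meq/L."""
    return na - (cl + hco3)

def calculated_osmolality(na, glucose_mg_dl, bun_mg_dl):
    """2 x Na + glucose/18 + BUN/2.8, in mOsm/kg."""
    return 2 * na + glucose_mg_dl / 18 + bun_mg_dl / 2.8

ag = anion_gap(140, 95, 10)                  # 35 meq/L, strikingly elevated
delta_ag = ag - 10                           # 25
delta_hco3 = 25 - 10                         # 15; delta_ag >> delta_hco3
osm_gap = 325 - calculated_osmolality(140, 125, 15)   # ~33; >10 suggests an unmeasured osmolyte
print(ag, delta_ag, delta_hco3, round(osm_gap, 1))
```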
The kidney shows evidence of acute tubular injury with widespread deposition of calcium oxalate crystals within tubular epithelial cells. Cerebral edema is common, as is crystal deposition in the brain; the latter is irreversible. The co-occurrent crystalluria is typical of ethylene glycol intoxication; both needle-shaped monohydrate and envelope-shaped dihydrate calcium oxalate crystals can be seen in the urine as the process evolves. Circulating oxalate can also complex with plasma calcium, reducing the ionized calcium as in this case. Although ethylene glycol intoxication should be verified by measuring ethylene glycol levels, therapy must be initiated immediately in this life-threatening situation. Although therapy can be initiated with confidence in cases with known or witnessed ingestions, such histories are rarely available. Therapy should thus be initiated in patients with severe metabolic acidosis and elevated anion and osmolar gaps. Other diagnostic features, such as hypocalcemia or acute renal failure with crystalluria, can provide important confirmation for urgent, empiric therapy. Because all four osmotically active toxic alcohols—ethylene glycol, diethylene glycol, methanol, and propylene glycol—are metabolized by alcohol dehydrogenase to generate toxic products, competitive inhibition of this key enzyme is common to the treatment of all four intoxications. The most potent inhibitor of alcohol dehydrogenase, and the drug of choice in this circumstance, is fomepizole (4-methyl pyrazole). Fomepizole should be administered intravenously as a loading dose (15 mg/kg) followed by doses of 10 mg/kg every 12 h for four doses, and then 15 mg/kg every 12 h thereafter until ethylene glycol levels have been reduced to <20 mg/dL and the patient is asymptomatic with a normal pH. Additional important components of the treatment of toxic alcohol ingestion include fluid resuscitation, thiamine, pyridoxine, folate, sodium bicarbonate, and hemodialysis. 
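For illustration only (this is a sketch of the regimen as stated above, not dosing guidance), the fomepizole schedule can be laid out programmatically; the 70-kg body weight in the example call is hypothetical:

```python
def fomepizole_doses_mg(weight_kg, n_doses):
    """15 mg/kg load, then 10 mg/kg every 12 h for four doses, then
    15 mg/kg every 12 h (continued until ethylene glycol is <20 mg/dL,
    the patient is asymptomatic, and the pH is normal)."""
    doses = []
    for i in range(n_doses):
        if i == 0:
            mg_per_kg = 15     # loading dose
        elif i <= 4:
            mg_per_kg = 10     # doses 2-5
        else:
            mg_per_kg = 15     # subsequent maintenance doses
        doses.append(mg_per_kg * weight_kg)
    return doses

print(fomepizole_doses_mg(70, 6))   # [1050, 700, 700, 700, 700, 1050]
```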
Hemodialysis is used to remove both the parent compound and toxic metabolites, but it also removes administered fomepizole, necessitating adjustment of dosage frequency. Gastric aspiration, induced emesis, or the use of activated charcoal is only effective if initiated within 30–60 min after ingestion of the toxin. When fomepizole is not available, ethanol, which has a more than 10-fold higher affinity for alcohol dehydrogenase than the toxic alcohols, may be substituted and is quite effective. Ethanol must be administered IV to achieve a blood level of 22 mmol/L (100 mg/dL). A disadvantage of ethanol is the obtundation that follows its administration, which is additive to the CNS effects of ethylene glycol. Furthermore, if hemodialysis is used, the infusion rate of ethanol must be increased because it is rapidly dialyzed. In general, hemodialysis is indicated for all patients with ethylene glycol intoxication when the arterial pH is <7.3 or the osmolar gap exceeds 20 mOsm/kg H2O.

CHAPTER 64e Fluid and Electrolyte Imbalances and Acid-Base Disturbances: Case Examples

Hypercalcemia and Hypocalcemia
Sundeep Khosla

TABLE 65-1 Causes of Hypercalcemia
Primary hyperparathyroidism (adenoma, hyperplasia, rarely carcinoma)

The calcium ion plays a critical role in normal cellular function and signaling, regulating diverse physiologic processes such as neuromuscular signaling, cardiac contractility, hormone secretion, and blood coagulation. Thus, extracellular calcium concentrations are maintained within an exquisitely narrow range through a series of feedback mechanisms that involve parathyroid hormone (PTH) and the active vitamin D metabolite 1,25-dihydroxyvitamin D [1,25(OH)2D]. These feedback mechanisms are orchestrated by integrating signals between the parathyroid glands, kidney, intestine, and bone (Fig. 65-1; Chap. 423). Disorders of serum calcium concentration are relatively common and often serve as a harbinger of underlying disease.
This chapter provides a brief summary of the approach to patients with altered serum calcium levels. See Chap. 424 for a detailed discussion of this topic.

FIGURE 65-1 Feedback mechanisms maintaining extracellular calcium concentrations within a narrow, physiologic range (8.9–10.1 mg/dL [2.2–2.5 mM]). A decrease in extracellular fluid (ECF) calcium (Ca2+) triggers an increase in parathyroid hormone (PTH) secretion (1) via the calcium sensor receptor on parathyroid cells. PTH, in turn, results in increased tubular reabsorption of calcium by the kidney (2) and resorption of calcium from bone (2) and also stimulates renal 1,25(OH)2D production (3). 1,25(OH)2D, in turn, acts principally on the intestine to increase calcium absorption (4). Collectively, these homeostatic mechanisms serve to restore serum calcium levels to normal.

The causes of hypercalcemia can be understood and classified based on derangements in the normal feedback mechanisms that regulate serum calcium (Table 65-1). Excess PTH production, which is not appropriately suppressed by increased serum calcium concentrations, occurs in primary neoplastic disorders of the parathyroid glands (parathyroid adenomas; hyperplasia; or, rarely, carcinoma) that are associated with
increased parathyroid cell mass and impaired feedback inhibition by calcium. Inappropriate PTH secretion for the ambient level of serum calcium also occurs with heterozygous inactivating calcium sensor receptor (CaSR) or G protein mutations, which impair extracellular calcium sensing by the parathyroid glands and the kidneys, resulting in familial hypocalciuric hypercalcemia (FHH). Although PTH secretion by tumors is extremely rare, many solid tumors produce PTH-related peptide (PTHrP), which shares homology with PTH in the first 13 amino acids and binds the PTH receptor, thus mimicking effects of PTH on bone and the kidney. In PTHrP-mediated hypercalcemia of malignancy, PTH levels are suppressed by the high serum calcium levels. Hypercalcemia associated with granulomatous disease (e.g., sarcoidosis) or lymphomas is caused by enhanced conversion of 25(OH)D to the potent 1,25(OH)2D. In these disorders, 1,25(OH)2D enhances intestinal calcium absorption, resulting in hypercalcemia and suppressed PTH.

TABLE 65-1 Causes of Hypercalcemia (continued)
Tertiary hyperparathyroidism (long-term stimulation of PTH secretion in renal insufficiency)
Ectopic PTH secretion (very rare)
Inactivating mutations in the CaSR or in G proteins (FHH)
Alterations in CaSR function (lithium therapy)
Hypercalcemia of malignancy
  Overproduction of PTHrP (many solid tumors)
  Lytic skeletal metastases (breast, myeloma)
Excessive 1,25(OH)2D production
  Granulomatous diseases (sarcoidosis, tuberculosis, silicosis)
  Lymphomas
  Vitamin D intoxication
Primary increase in bone resorption
  Hyperthyroidism
  Immobilization
Excessive calcium intake
  Milk-alkali syndrome
  Total parenteral nutrition
Other causes
  Endocrine disorders (adrenal insufficiency, pheochromocytoma, VIPoma)
  Medications (thiazides, vitamin A, antiestrogens)
Abbreviations: CaSR, calcium sensor receptor; FHH, familial hypocalciuric hypercalcemia; PTH, parathyroid hormone; PTHrP, PTH-related peptide.
Disorders that directly increase calcium mobilization from bone, such as hyperthyroidism or osteolytic metastases, also lead to hypercalcemia with suppressed PTH secretion, as does exogenous calcium overload, as in milk-alkali syndrome or total parenteral nutrition with excessive calcium supplementation. Mild hypercalcemia (up to 11–11.5 mg/dL) is usually asymptomatic and recognized only on routine calcium measurements. Some patients may complain of vague neuropsychiatric symptoms, including trouble concentrating, personality changes, or depression. Other presenting symptoms may include peptic ulcer disease or nephrolithiasis, and fracture risk may be increased. More severe hypercalcemia (>12–13 mg/dL), particularly if it develops acutely, may result in lethargy, stupor, or coma, as well as gastrointestinal symptoms (nausea, anorexia, constipation, or pancreatitis). Hypercalcemia decreases renal concentrating ability, which may cause polyuria and polydipsia. With longstanding hyperparathyroidism, patients may present with bone pain or pathologic fractures. Finally, hypercalcemia can result in significant electrocardiographic changes, including bradycardia, AV block, and a short QT interval; changes in serum calcium can be monitored by following the QT interval. The first step in the diagnostic evaluation of hyper- or hypocalcemia is to ensure that the alteration in serum calcium levels is not due to abnormal albumin concentrations. About 50% of total calcium is ionized, and the rest is bound principally to albumin. Although direct measurements of ionized calcium are possible, they are easily influenced by collection methods and other artifacts; thus, it is generally preferable to measure total calcium and albumin to “correct” the serum calcium.
When serum albumin concentrations are reduced, a corrected calcium concentration is calculated by adding 0.2 mM (0.8 mg/dL) to the total calcium level for every decrement in serum albumin of 1.0 g/dL below the reference value of 4.1 g/dL, and, conversely, by subtracting for elevations in serum albumin. A detailed history may provide important clues regarding the etiology of the hypercalcemia (Table 65-1). Chronic hypercalcemia is most commonly caused by primary hyperparathyroidism; the second most common etiology is an underlying malignancy. The history should include medication use, previous neck surgery, and systemic symptoms suggestive of sarcoidosis or lymphoma. Once true hypercalcemia is established, the second most important laboratory test in the diagnostic evaluation is a PTH level using a two-site assay for the intact hormone. Increases in PTH are often accompanied by hypophosphatemia. In addition, serum creatinine should be measured to assess renal function; hypercalcemia may impair renal function, and renal clearance of PTH may be altered depending on the fragments detected by the assay. If the PTH level is increased (or “inappropriately normal”) in the setting of elevated calcium and low phosphorus, the diagnosis is almost always primary hyperparathyroidism. Because individuals with FHH may also present with mildly elevated PTH levels and hypercalcemia, this diagnosis should be considered and excluded because parathyroid surgery is ineffective in this condition. A calcium/creatinine clearance ratio (calculated as urine calcium/serum calcium divided by urine creatinine/serum creatinine) of <0.01 is suggestive of FHH, particularly when there is a family history of mild, asymptomatic hypercalcemia. In addition, sequence analysis of the CaSR gene is now commonly performed for the definitive diagnosis of FHH, although in some families, FHH may be caused by mutations in G proteins that mediate signaling by the CaSR.
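Both formulas above translate directly to code; the numeric values in the example calls are hypothetical, chosen only to illustrate the arithmetic:

```python
def corrected_calcium(total_ca, albumin, ref_albumin=4.1):
    """Add 0.8 mg/dL to total calcium per 1-g/dL fall in serum albumin
    below the 4.1 g/dL reference (and subtract for elevations)."""
    return total_ca + 0.8 * (ref_albumin - albumin)

def ca_cr_clearance_ratio(urine_ca, serum_ca, urine_cr, serum_cr):
    """(Urine Ca / serum Ca) / (urine Cr / serum Cr); <0.01 suggests FHH."""
    return (urine_ca / serum_ca) / (urine_cr / serum_cr)

print(round(corrected_calcium(10.5, 2.1), 1))   # 12.1 mg/dL (hypothetical values)
print(ca_cr_clearance_ratio(2, 10, 100, 1))     # 0.002 -> in the FHH range
```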
Ectopic PTH secretion is extremely rare. A suppressed PTH level in the face of hypercalcemia is consistent with non-parathyroid-mediated hypercalcemia, most often due to underlying malignancy. Although a tumor that causes hypercalcemia is generally overt, a PTHrP level may be needed to establish the diagnosis of hypercalcemia of malignancy. Serum 1,25(OH)2D levels are increased in granulomatous disorders, and clinical evaluation in combination with laboratory testing will generally provide a diagnosis for the various disorders listed in Table 65-1.

PART 2 Cardinal Manifestations and Presentation of Diseases

Mild, asymptomatic hypercalcemia does not require immediate therapy, and management should be dictated by the underlying diagnosis. By contrast, significant, symptomatic hypercalcemia usually requires therapeutic intervention independent of the etiology of hypercalcemia. Initial therapy of significant hypercalcemia begins with volume expansion because hypercalcemia invariably leads to dehydration; 4–6 L of intravenous saline may be required over the first 24 h, keeping in mind that underlying comorbidities (e.g., congestive heart failure) may require the use of loop diuretics to enhance sodium and calcium excretion. However, loop diuretics should not be initiated until the volume status has been restored to normal. If there is increased calcium mobilization from bone (as in malignancy or severe hyperparathyroidism), drugs that inhibit bone resorption should be considered. Zoledronic acid (e.g., 4 mg intravenously over ∼30 min), pamidronate (e.g., 60–90 mg intravenously over 2–4 h), and ibandronate (2 mg intravenously over 2 h) are bisphosphonates that are commonly used for the treatment of hypercalcemia of malignancy in adults. Onset of action is within 1–3 days, with normalization of serum calcium levels occurring in 60–90% of patients. Bisphosphonate infusions may need to be repeated if hypercalcemia relapses.
An alternative to the bisphosphonates is gallium nitrate (200 mg/m2 intravenously daily for 5 days), which is also effective, but has potential nephrotoxicity. In rare instances, dialysis may be necessary. Finally, although intravenous phosphate chelates calcium and decreases serum calcium levels, this therapy can be toxic because calcium-phosphate complexes may deposit in tissues and cause extensive organ damage. In patients with 1,25(OH)2D-mediated hypercalcemia, glucocorticoids are the preferred therapy, as they decrease 1,25(OH)2D production. Intravenous hydrocortisone (100–300 mg daily) or oral prednisone (40–60 mg daily) for 3–7 days is used most often. Other drugs, such as ketoconazole, chloroquine, and hydroxychloroquine, may also decrease 1,25(OH)2D production and are used occasionally. The causes of hypocalcemia can be differentiated according to whether serum PTH levels are low (hypoparathyroidism) or high (secondary hyperparathyroidism). Although there are many potential causes of hypocalcemia, impaired PTH production and impaired vitamin D production are the most common etiologies (Table 65-2) (Chap. 424). Because PTH is the main defense against hypocalcemia, disorders associated with deficient PTH production or secretion may be associated with profound, life-threatening hypocalcemia. In adults, hypoparathyroidism most commonly results from inadvertent damage to all four glands during thyroid or parathyroid gland surgery. Hypoparathyroidism is a cardinal feature of autoimmune endocrinopathies (Chap. 
408); rarely, it may be associated with infiltrative diseases such as sarcoidosis.

TABLE 65-2 Causes of Hypocalcemia (continued)
Vitamin D deficiency or impaired 1,25(OH)2D production/action
  Nutritional vitamin D deficiency (poor intake or absorption)
  Renal insufficiency with impaired 1,25(OH)2D production
  Vitamin D resistance, including receptor defects
Drugs
  Calcium chelators
  Inhibitors of bone resorption (bisphosphonates, plicamycin)
  Altered vitamin D metabolism (phenytoin, ketoconazole)
Miscellaneous causes
  Acute pancreatitis
  Acute rhabdomyolysis
  Hungry bone syndrome after parathyroidectomy
  Osteoblastic metastases with marked stimulation of bone formation (prostate cancer)
Abbreviations: CaSR, calcium sensor receptor; PTH, parathyroid hormone.

Impaired PTH secretion may be secondary to magnesium deficiency or to activating mutations in the CaSR or in the G proteins that mediate CaSR signaling, which suppress PTH, leading to effects that are opposite to those that occur in FHH. Vitamin D deficiency, impaired 1,25(OH)2D production (primarily secondary to renal insufficiency), or vitamin D resistance also cause hypocalcemia. However, the degree of hypocalcemia in these disorders is generally not as severe as that seen with hypoparathyroidism because the parathyroids are capable of mounting a compensatory increase in PTH secretion. Hypocalcemia may also occur in conditions associated with severe tissue injury such as burns, rhabdomyolysis, tumor lysis, or pancreatitis. The cause of hypocalcemia in these settings may include a combination of low albumin, hyperphosphatemia, tissue deposition of calcium, and impaired PTH secretion. Patients with hypocalcemia may be asymptomatic if the decreases in serum calcium are relatively mild and chronic, or they may present with life-threatening complications. Moderate to severe hypocalcemia is associated with paresthesias, usually of the fingers, toes, and circumoral regions, and is caused by increased neuromuscular irritability.
On physical examination, a Chvostek’s sign (twitching of the circumoral muscles in response to gentle tapping of the facial nerve just anterior to the ear) may be elicited, although it is also present in ∼10% of normal individuals. Carpal spasm may be induced by inflation of a blood pressure cuff to 20 mmHg above the patient’s systolic blood pressure for 3 min (Trousseau’s sign). Severe hypocalcemia can induce seizures, carpopedal spasm, bronchospasm, laryngospasm, and prolongation of the QT interval. In addition to measuring serum calcium, it is useful to determine albumin, phosphorus, and magnesium levels. As for the evaluation of hypercalcemia, determining the PTH level is central to the evaluation of hypocalcemia. A suppressed (or “inappropriately low”) PTH level in the setting of hypocalcemia establishes absent or reduced PTH secretion (hypoparathyroidism) as the cause of the hypocalcemia. Further history will often elicit the underlying cause (i.e., parathyroid agenesis vs. destruction). By contrast, an elevated PTH level (secondary hyperparathyroidism) should direct attention to the vitamin D axis as the cause of the hypocalcemia. Nutritional vitamin D deficiency is best assessed by obtaining serum 25-hydroxyvitamin D levels, which reflect vitamin D stores. In the setting of renal insufficiency or suspected vitamin D resistance, serum 1,25(OH)2D levels are informative. The approach to treatment depends on the severity of the hypocalcemia, the rapidity with which it develops, and the accompanying complications (e.g., seizures, laryngospasm). Acute, symptomatic hypocalcemia is initially managed with calcium gluconate, 10 mL of a 10% wt/vol solution (90 mg or 2.2 mmol of elemental calcium), diluted in 50 mL of 5% dextrose or 0.9% sodium chloride and given intravenously over 5 min. Continuing hypocalcemia often requires a constant intravenous infusion (typically 10 ampules of calcium gluconate, or 900 mg of calcium, in 1 L of 5% dextrose or 0.9% sodium chloride administered over 24 h).
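The infusion arithmetic above is simple enough to check in code; the 90-mg elemental-calcium content per 10-mL ampule is taken from the text:

```python
MG_ELEMENTAL_CA_PER_AMPULE = 90   # 10 mL of 10% wt/vol calcium gluconate

def infusion_rate_mg_per_h(ampules, hours=24):
    """Elemental calcium delivered per hour by a continuous infusion."""
    return ampules * MG_ELEMENTAL_CA_PER_AMPULE / hours

print(10 * MG_ELEMENTAL_CA_PER_AMPULE)   # 900 mg in the 1-L infusion
print(infusion_rate_mg_per_h(10))        # 37.5 mg/h over 24 h
```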
Accompanying hypomagnesemia, if present, should be treated with appropriate magnesium supplementation. Chronic hypocalcemia due to hypoparathyroidism is treated with calcium supplements (1000–1500 mg/d elemental calcium in divided doses) and either vitamin D2 or D3 (25,000–100,000 U daily) or calcitriol [1,25(OH)2D, 0.25–2 μg/d]. Other vitamin D metabolites (dihydrotachysterol, alfacalcidol) are now used less frequently. Vitamin D deficiency, however, is best treated using vitamin D supplementation, with the dose depending on the severity of the deficit and the underlying cause. Thus, nutritional vitamin D deficiency generally responds to relatively low doses of vitamin D (50,000 U, 2–3 times per week for several months), whereas vitamin D deficiency due to malabsorption may require much higher doses (100,000 U/d or more). The treatment goal is to bring serum calcium into the low normal range and to avoid hypercalciuria, which may lead to nephrolithiasis. In countries with more limited access to health care or screening laboratory testing of serum calcium levels, primary hyperparathyroidism often presents in its severe form with skeletal complications (osteitis fibrosa cystica) in contrast to the asymptomatic form that is common in developed countries. In addition, vitamin D deficiency is paradoxically common in some countries despite extensive sunlight (e.g., India) due to avoidance of sun exposure and poor dietary vitamin D intake.

Acidosis and Alkalosis
Thomas D. DuBose, Jr.

Systemic arterial pH is maintained between 7.35 and 7.45 by extracellular and intracellular chemical buffering together with respiratory and renal regulatory mechanisms. The control of arterial CO2 tension (Paco2) by the central nervous system (CNS) and respiratory system and the control of plasma bicarbonate by the kidneys stabilize the arterial pH by excretion or retention of acid or alkali.
The metabolic and respiratory components that regulate systemic pH are described by the Henderson-Hasselbalch equation:

pH = 6.1 + log ([HCO3−] ÷ (0.0301 × Paco2))

Under most circumstances, CO2 production and excretion are matched, and the usual steady-state Paco2 is maintained at 40 mmHg. Underexcretion of CO2 produces hypercapnia, and overexcretion causes hypocapnia. Nevertheless, production and excretion are again matched at a new steady-state Paco2. Therefore, the Paco2 is regulated primarily by neural respiratory factors and is not subject to regulation by the rate of CO2 production. Hypercapnia is usually the result of hypoventilation rather than of increased CO2 production. Increases or decreases in Paco2 represent derangements of neural respiratory control or are due to compensatory changes in response to a primary alteration in the plasma [HCO3−]. The most common clinical disturbances are simple acid-base disorders; i.e., metabolic acidosis or alkalosis or respiratory acidosis or alkalosis. Primary respiratory disturbances (primary changes in Paco2) invoke compensatory metabolic responses (secondary changes in [HCO3−]), and primary metabolic disturbances elicit predictable compensatory respiratory responses (secondary changes in Paco2). Physiologic compensation can be predicted from the relationships displayed in Table 66-1. In general, with one exception, compensatory responses return the pH toward, but not to, the normal value. Chronic respiratory alkalosis, when prolonged, is an exception to this rule and often returns the pH to a normal value. Metabolic acidosis due to an increase in endogenous acids (e.g., ketoacidosis) lowers extracellular fluid [HCO3−] and decreases extracellular pH. This stimulates the medullary chemoreceptors to increase ventilation and to return the ratio of [HCO3−] to Paco2, and thus pH, toward, but not to, normal.
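In code, the Henderson-Hasselbalch relationship reproduces the familiar normal values:

```python
from math import log10

def ph_from_hco3_paco2(hco3_mmol_l, paco2_mmhg):
    """pH = 6.1 + log10([HCO3-] / (0.0301 x Paco2))."""
    return 6.1 + log10(hco3_mmol_l / (0.0301 * paco2_mmhg))

print(round(ph_from_hco3_paco2(24, 40), 2))   # ~7.4 at normal HCO3- and Paco2
```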
The degree of respiratory compensation expected in a simple form of metabolic acidosis can be predicted from the relationship: Paco2 = (1.5 × [HCO3−]) + 8 ± 2. Thus, a patient with metabolic acidosis and [HCO3−] of 12 mmol/L would be expected to have a Paco2 between 24 and 28 mmHg. Values for Paco2 <24 or >28 mmHg define a mixed disturbance (metabolic acidosis and respiratory alkalosis or metabolic acidosis and respiratory acidosis, respectively). Compensatory responses for primary metabolic disorders move the Paco2 in the same direction as the change in [HCO3−], whereas, conversely, compensation for primary respiratory disorders moves the [HCO3−] in the same direction as the primary change in Paco2 (Table 66-1). Therefore, changes in Paco2 and [HCO3−] in opposite directions (i.e., Paco2 or [HCO3−] is increased, whereas the other value is decreased) indicate a mixed disturbance. Another way to judge the appropriateness of the response in [HCO3−] or Paco2 is to use an acid-base nomogram (Fig. 66-1). While the shaded areas of the nomogram show the 95% confidence limits for normal compensation in simple disturbances, finding acid-base values within the shaded area does not necessarily rule out a mixed disturbance. Imposition of one disorder over another may result in values lying within the area of a third. Thus, the nomogram, while convenient, is not a substitute for the equations in Table 66-1. Mixed acid-base disorders—defined as independently coexisting disorders, not merely compensatory responses—are often seen in patients in critical care units and can lead to dangerous extremes of pH (Table 66-2). A patient with diabetic ketoacidosis (metabolic acidosis) may develop an independent respiratory problem (e.g.,

FIGURE 66-1 …limits (range of values) of the normal respiratory and metabolic compensations for primary acid-base disturbances. (From TD DuBose Jr: Acid-base disorders, in Brenner and Rector’s The Kidney, 8th ed, BM Brenner [ed].
Philadelphia, Saunders, 2008, pp 505–546, with permission.)

pneumonia) leading to respiratory acidosis or alkalosis. Patients with underlying pulmonary disease (e.g., chronic obstructive pulmonary disease) may not respond to metabolic acidosis with an appropriate ventilatory response because of insufficient respiratory reserve. Such imposition of respiratory acidosis on metabolic acidosis can lead to severe acidemia. When metabolic acidosis and metabolic alkalosis coexist in the same patient, the pH may be normal or near normal.

TABLE 66-2 Examples of Mixed Acid-Base Disorders
Metabolic acidosis—respiratory alkalosis
  Key: High- or normal-AG metabolic acidosis; prevailing Paco2 below predicted value (Table 66-1)
  Example: Na+, 140; K+, 4.0; Cl−, 106; HCO3−, 14; AG, 20; Paco2, 24; pH, 7.39 (lactic acidosis, sepsis in ICU)
Metabolic acidosis—respiratory acidosis
  Key: High- or normal-AG metabolic acidosis; prevailing Paco2 above predicted value (Table 66-1)
  Example: Na+, 140; K+, 4.0; Cl−, 102; HCO3−, 18; AG, 20; Paco2, 38; pH, 7.30 (severe pneumonia, pulmonary edema)
Metabolic alkalosis—respiratory alkalosis
  Key: Paco2 does not increase as predicted; pH higher than expected
  Example: Na+, 140; K+, 4.0; Cl−, 91; HCO3−, 33; AG, 16; Paco2, 38; pH, 7.55 (liver disease)
Metabolic alkalosis—respiratory acidosis
  Key: Paco2 higher than predicted; pH normal
  Example: Na+, 140; K+, 3.5; Cl−, 88; HCO3−, 42; AG, 10; Paco2, 67; pH, 7.42
Metabolic acidosis—metabolic alkalosis
  Key: Only detectable with high-AG acidosis; ∆AG >> ∆HCO3−
  Example: Na+, 140; K+, 3.0; Cl−, 95; HCO3−, 25; AG, 20; Paco2, 40; pH, 7.42 (uremia with vomiting)
Mixed high-AG—normal-AG acidosis
  Key: ∆HCO3− accounted for by combined change in ∆AG and ∆Cl−
  Example: Na+, 135; K+, 3.0; Cl−, 110; HCO3−, 10; AG, 15; Paco2, 25; pH, 7.20 (diarrhea and lactic acidosis, toluene toxicity, treatment of diabetic ketoacidosis)
Abbreviations: AG, anion gap; COPD, chronic obstructive pulmonary disease; ICU, intensive care unit.
When the pH is normal, an elevated anion gap (AG; see below) reliably denotes the presence of an AG metabolic acidosis at a normal serum albumin of 4.5 g/dL. Assuming a normal AG of 10 mmol/L, a discrepancy between the ∆AG (prevailing minus normal AG) and the ∆HCO3− (normal value of 25 mmol/L minus the patient’s abnormal HCO3−) indicates the presence of a mixed high-AG acidosis—metabolic alkalosis (see example below). A diabetic patient with ketoacidosis may have renal dysfunction resulting in a simultaneous metabolic acidosis. Patients who have ingested an overdose of drug combinations such as sedatives and salicylates may have mixed disturbances as a result of the acid-base response to the individual drugs (metabolic acidosis mixed with respiratory acidosis or respiratory alkalosis, respectively). Triple acid-base disturbances are more complex. For example, patients with metabolic acidosis due to alcoholic ketoacidosis may develop metabolic alkalosis due to vomiting and superimposed respiratory alkalosis due to the hyperventilation of hepatic dysfunction or alcohol withdrawal.

APPROACH TO THE PATIENT:

A stepwise approach to the diagnosis of acid-base disorders follows (Table 66-3). Care should be taken when measuring blood gases to obtain the arterial blood sample without using excessive heparin. Blood for electrolytes and arterial blood gases should be drawn simultaneously prior to therapy, because an increase in [HCO3−] occurs with metabolic alkalosis and respiratory acidosis. Conversely, a decrease in [HCO3−] occurs in metabolic acidosis and respiratory alkalosis. In the determination of arterial blood gases by the clinical laboratory, both pH and Paco2 are measured, and the [HCO3−] is calculated from the Henderson-Hasselbalch equation. This calculated value should be compared with the measured [HCO3−] (total CO2) on the electrolyte panel. These two values should agree within 2 mmol/L.
If they do not, the values may not have been drawn simultaneously, a laboratory error may be present, or an error could have been made in calculating the [HCO3−]. After the blood acid-base values are verified, the precise acid-base disorder can be identified. All evaluations of acid-base disorders should include a simple calculation of the AG, which represents the unmeasured anions in plasma (normally 8–10 mmol/L) and is calculated as follows: AG = Na+ − (Cl− + HCO3−). The unmeasured anions include anionic proteins (e.g., albumin), phosphate, sulfate, and organic anions. When acid anions, such as acetoacetate and lactate, accumulate in extracellular fluid, the AG increases, causing a high-AG acidosis.

Steps in the diagnosis of acid-base disorders (Table 66-3):
1. Obtain arterial blood gas (ABG) and electrolytes simultaneously.
2. Compare the [HCO3−] on the ABG and the electrolyte panel to verify accuracy.
3. Calculate the anion gap (AG).
4. Know the four causes of high-AG acidosis (ketoacidosis, lactic acidosis, renal failure, and toxins).
5. Know the two causes of hyperchloremic or non-gap acidosis (bicarbonate loss from the gastrointestinal tract, renal tubular acidosis).
6. Estimate the compensatory response (Table 66-1).
7. Compare the ∆AG and the ∆HCO3−.
8. Compare the change in [Cl−] with the change in [Na+].

An increase in the AG is most often due to an increase in unmeasured anions and, less commonly, to a decrease in unmeasured cations (calcium, magnesium, potassium). In addition, the AG may increase with an increase in anionic albumin, because of either an increased albumin concentration or alkalosis, which alters the albumin charge.
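The anion gap arithmetic can be sketched as follows, assuming the commonly used albumin correction of about 2.5 meq/L of gap per 1 g/dL fall in serum albumin from 4.5 g/dL (a factor the text also cites); the function names are illustrative:

```python
def anion_gap(na, cl, hco3):
    """AG = Na+ - (Cl- + HCO3-); normally about 8-10 mmol/L."""
    return na - (cl + hco3)

def albumin_corrected_ag(ag, albumin_g_dl, normal_albumin=4.5):
    """Add back ~2.5 meq/L of gap for each 1 g/dL that serum
    albumin falls below the normal value of 4.5 g/dL."""
    return ag + 2.5 * (normal_albumin - albumin_g_dl)

# The lactic acidosis example from the table above:
print(anion_gap(140, 106, 14))        # 20 -> high-AG acidosis
# A "normal" measured AG of 12 at an albumin of 2.5 g/dL:
print(albumin_corrected_ag(12, 2.5))  # 17.0 -> the gap is actually elevated
```

The second call illustrates why hypoalbuminemia can hide a significant gap: the uncorrected value looks near normal while the corrected value does not.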
A decrease in the AG can be due to (1) an increase in unmeasured cations; (2) the addition to the blood of abnormal cations, such as lithium (lithium intoxication) or cationic immunoglobulins (plasma cell dyscrasias); (3) a reduction in the concentration of the major plasma anion, albumin (nephrotic syndrome); (4) a decrease in the effective anionic charge on albumin by acidosis; or (5) hyperviscosity and severe hyperlipidemia, which can lead to an underestimation of the sodium and chloride concentrations. A fall in serum albumin of 1 g/dL from the normal value (4.5 g/dL) decreases the AG by 2.5 meq/L. Know the common causes of a high-AG acidosis (Table 66-3). In the face of a normal serum albumin, a high AG is usually due to non–chloride-containing acids that contain inorganic (phosphate, sulfate), organic (ketoacids, lactate, uremic organic anions), exogenous (salicylate or ingested toxins with organic acid production), or unidentified anions. The high AG is significant even if an additional acid-base disorder is superimposed to modify the [HCO3−] independently. Simultaneous metabolic acidosis of the high-AG variety plus either chronic respiratory acidosis or metabolic alkalosis represents such a situation, in which the [HCO3−] may be normal or even high (Table 66-3). Compare the change in [HCO3−] (∆HCO3−) with the change in the AG (∆AG). Similarly, normal values for [HCO3−], Paco2, and pH do not ensure the absence of an acid-base disturbance. For instance, an alcoholic who has been vomiting may develop a metabolic alkalosis with a pH of 7.55, Paco2 of 47 mmHg, [HCO3−] of 40 mmol/L, [Na+] of 135, [Cl−] of 80, and [K+] of 2.8. If such a patient were then to develop a superimposed alcoholic ketoacidosis with a β-hydroxybutyrate concentration of 15 mM, the arterial pH would fall to 7.40, the [HCO3−] to 25 mmol/L, and the Paco2 to 40 mmHg. Although these blood gas values are normal, the AG is elevated at 30 mmol/L, indicating a mixed metabolic alkalosis and metabolic acidosis.
A mixture of high-gap acidosis and metabolic alkalosis is recognized easily by comparing the differences (∆ values) between the normal and the prevailing patient values. In this example, the ∆HCO3− is 0 (25 − 25 mmol/L), but the ∆AG is 20 (30 − 10 mmol/L). Therefore, 20 mmol/L is unaccounted for in the ∆/∆ comparison (∆AG to ∆HCO3−).

METABOLIC ACIDOSIS Metabolic acidosis can occur because of an increase in endogenous acid production (such as lactate and ketoacids), loss of bicarbonate (as in diarrhea), or accumulation of endogenous acids (as in renal failure). Metabolic acidosis has profound effects on the respiratory, cardiac, and nervous systems. The fall in blood pH is accompanied by a characteristic increase in ventilation, especially the tidal volume (Kussmaul respiration). Intrinsic cardiac contractility may be depressed, but inotropic function can be normal because of catecholamine release. Both peripheral arterial vasodilation and central venoconstriction can be present; the decrease in central and pulmonary vascular compliance predisposes to pulmonary edema with even minimal volume overload. CNS function is depressed, with headache, lethargy, stupor, and, in some cases, even coma. Glucose intolerance may also occur. There are two major categories of clinical metabolic acidosis: high-AG and non-AG, or hyperchloremic, acidosis (Tables 66-3 and 66-4). Treatment of metabolic acidosis with alkali should be reserved for severe acidemia except when the patient has no "potential HCO3−" in plasma. Potential [HCO3−] can be estimated from the increment (∆) in the AG (∆AG = patient's AG − 10). It must be determined whether the acid anion in plasma is metabolizable (i.e., β-hydroxybutyrate, acetoacetate, and lactate) or nonmetabolizable (anions that accumulate in chronic renal failure and after toxin ingestion). The latter requires return of renal function to replenish the [HCO3−] deficit, a slow and often unpredictable process.
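The ∆/∆ bookkeeping in the vomiting-then-ketoacidosis example above can be checked directly; a minimal sketch using the normal values the text assumes (AG of 10 mmol/L, [HCO3−] of 25 mmol/L):

```python
def delta_delta(ag, hco3, normal_ag=10, normal_hco3=25):
    """Return (delta_AG, delta_HCO3). A delta_AG well in excess of
    delta_HCO3 suggests a superimposed metabolic alkalosis; a
    delta_HCO3 well in excess of delta_AG suggests a superimposed
    non-gap acidosis."""
    return ag - normal_ag, normal_hco3 - hco3

# The example above: AG 30 mmol/L, HCO3- 25 mmol/L
d_ag, d_hco3 = delta_delta(30, 25)
print(d_ag, d_hco3)   # 20 0 -> 20 mmol/L unaccounted for:
                      # mixed metabolic alkalosis and high-AG acidosis
```

Despite normal pH, Paco2, and [HCO3−], the 20-mmol/L excess of ∆AG over ∆HCO3− exposes the hidden alkalosis.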
Consequently, patients with a normal-AG acidosis (hyperchloremic acidosis), a slightly elevated AG (mixed hyperchloremic and AG acidosis), or an AG attributable to a nonmetabolizable anion in the face of renal failure should receive alkali therapy, either PO (NaHCO3 or Shohl's solution) or IV (NaHCO3), in an amount necessary to slowly increase the plasma [HCO3−] into the 20–22 mmol/L range. Overcorrection must be avoided. Controversy exists, however, regarding the use of alkali in patients with a pure AG acidosis owing to accumulation of a metabolizable organic acid anion (ketoacidosis or lactic acidosis). In general, severe acidosis (pH <7.10) warrants the IV administration of 50–100 meq of NaHCO3, over 30–45 min, during the initial 1–2 h of therapy. Provision of such modest quantities of alkali in this situation seems to provide an added measure of safety, but it is essential to monitor plasma electrolytes during the course of therapy, because the [K+] may decline as pH rises. The goal is to increase the [HCO3−] to 10 meq/L and the pH to approximately 7.20, not to increase these values to normal. PART 2 Cardinal Manifestations and Presentation of Diseases APPROACH TO THE PATIENT: High-Anion-Gap Acidoses There are four principal causes of a high-AG acidosis: (1) lactic acidosis, (2) ketoacidosis, (3) ingested toxins, and (4) acute and chronic renal failure (Table 66-4).
Initial screening to differentiate the high-AG acidoses should include (1) a probe of the history for evidence of drug and toxin ingestion and measurement of arterial blood gases to detect coexistent respiratory alkalosis (salicylates); (2) determination of whether diabetes mellitus is present (diabetic ketoacidosis); (3) a search for evidence of alcoholism or increased levels of β-hydroxybutyrate (alcoholic ketoacidosis); (4) observation for clinical signs of uremia and determination of the blood urea nitrogen (BUN) and creatinine (uremic acidosis); (5) inspection of the urine for oxalate crystals (ethylene glycol); and (6) recognition of the numerous clinical settings in which lactate levels may be increased (hypotension, shock, cardiac failure, leukemia, cancer, and drug or toxin ingestion). Lactic Acidosis An increase in plasma L-lactate may be secondary to poor tissue perfusion (type A): circulatory insufficiency (shock, cardiac failure), severe anemia, mitochondrial enzyme defects, and inhibitors (carbon monoxide, cyanide); or to aerobic disorders (type B): malignancies, nucleoside analogue reverse transcriptase inhibitors in HIV, diabetes mellitus, renal or hepatic failure, thiamine deficiency, severe infections (cholera, malaria), seizures, or drugs/toxins (biguanides, ethanol, methanol, propylene glycol, isoniazid, and fructose). Unrecognized bowel ischemia or infarction in a patient with severe atherosclerosis or cardiac decompensation receiving vasopressors is a common cause of lactic acidosis. Pyroglutamic acidemia, which is associated with depletion of glutathione, has been reported in critically ill patients receiving acetaminophen. D-Lactic acidosis, which may be associated with jejunoileal bypass, short bowel syndrome, or intestinal obstruction, is due to formation of D-lactate by gut bacteria.
TREATMENT: Lactic Acidosis The underlying condition that disrupts lactate metabolism must first be corrected; tissue perfusion must be restored when inadequate. Vasoconstrictors should be avoided, if possible, because they may worsen tissue perfusion. Alkali therapy is generally advocated for acute, severe acidemia (pH <7.15) to improve cardiac function and lactate use. However, NaHCO3 therapy may paradoxically depress cardiac performance and exacerbate acidosis by enhancing lactate production (HCO3− stimulates phosphofructokinase). While the use of alkali in moderate lactic acidosis is controversial, it is generally agreed that attempts to return the pH or [HCO3−] to normal by administration of exogenous NaHCO3 are deleterious. A reasonable approach is to infuse sufficient NaHCO3 to raise the arterial pH to no more than 7.2 over 30–40 min. NaHCO3 therapy can cause fluid overload and hypertension because the amount required can be massive when accumulation of lactic acid is relentless. Fluid administration is poorly tolerated because of central venoconstriction, especially in the oliguric patient. When the underlying cause of the lactic acidosis can be remedied, blood lactate will be converted to HCO3− and may result in an overshoot alkalosis. Ketoacidosis • DIABETIC KETOACIDOSIS (DKA) This condition is caused by increased fatty acid metabolism and the accumulation of ketoacids (acetoacetate and β-hydroxybutyrate). DKA usually occurs in insulin-dependent diabetes mellitus in association with cessation of insulin or an intercurrent illness, such as an infection, gastroenteritis, pancreatitis, or myocardial infarction, that acutely and temporarily increases insulin requirements. The accumulation of ketoacids accounts for the increment in the AG and is accompanied most often by hyperglycemia (glucose >17 mmol/L [300 mg/dL]). The relationship between the ∆AG and the ∆HCO3− is typically ∼1:1 in DKA.
It should be noted that, because insulin prevents production of ketones, bicarbonate therapy is rarely needed except with extreme acidemia (pH <7.1), and then only in limited amounts. Patients with DKA are typically volume depleted and require fluid resuscitation with isotonic saline. Volume overexpansion with IV fluid administration is not uncommon, however, and contributes to the development of a hyperchloremic acidosis during treatment of DKA. The mainstay of treatment of this condition is IV regular insulin, as described in more detail in Chap. 417. ALCOHOLIC KETOACIDOSIS (AKA) Chronic alcoholics can develop ketoacidosis when alcohol consumption is abruptly curtailed and nutrition is poor. AKA is usually associated with binge drinking, vomiting, abdominal pain, starvation, and volume depletion. The glucose concentration is variable, and acidosis may be severe because of elevated ketones, predominantly β-hydroxybutyrate. Hypoperfusion may enhance lactic acid production, chronic respiratory alkalosis may accompany liver disease, and metabolic alkalosis can result from vomiting (refer to the relationship between the ∆AG and the ∆HCO3−). Thus, mixed acid-base disorders are common in AKA. As the circulation is restored by administration of isotonic saline, the preferential accumulation of β-hydroxybutyrate shifts to acetoacetate. This explains the common clinical observation of an increasingly positive nitroprusside reaction as the patient improves. The nitroprusside ketone reaction (Acetest) detects acetoacetic acid but not β-hydroxybutyrate, so the degree of ketosis and ketonuria can not only change with therapy but can also be underestimated initially. Patients with AKA usually present with relatively normal renal function, as opposed to DKA, in which renal function is often compromised because of volume depletion (osmotic diuresis) or diabetic nephropathy.
The AKA patient with normal renal function may excrete relatively large quantities of ketoacids in the urine and, therefore, may have a relatively normal AG and a discrepancy in the ∆AG/∆HCO3− relationship. Extracellular fluid deficits almost always accompany AKA and should be repleted by IV administration of saline and glucose (5% dextrose in 0.9% NaCl). Hypophosphatemia, hypokalemia, and hypomagnesemia may coexist and should be corrected. Hypophosphatemia usually emerges 12–24 h after admission, may be exacerbated by glucose infusion, and, if severe, may induce rhabdomyolysis or even respiratory arrest. Upper gastrointestinal hemorrhage, pancreatitis, and pneumonia may accompany this disorder. Drug- and Toxin-Induced Acidosis • SALICYLATES (See also Chap. 472e) Salicylate intoxication in adults usually causes respiratory alkalosis or a mixture of high-AG metabolic acidosis and respiratory alkalosis. Only a portion of the AG is due to salicylates; lactic acid production is also often increased. Vigorous gastric lavage with isotonic saline (not NaHCO3) should be initiated immediately, followed by administration of activated charcoal per nasogastric tube. In the acidotic patient, to facilitate removal of salicylate, IV NaHCO3 is administered in amounts adequate to alkalinize the urine and to maintain urine output (urine pH >7.5). While this form of therapy is straightforward in acidotic patients, a coexisting respiratory alkalosis may make this approach hazardous. Alkalemic patients should not receive NaHCO3. Acetazolamide may be administered in the face of alkalemia, when an alkaline diuresis cannot be achieved, or to ameliorate volume overload associated with NaHCO3 administration, but this drug can cause systemic metabolic acidosis if HCO3− is not replaced. Hypokalemia should be anticipated with an alkaline diuresis and should be treated promptly and aggressively. Glucose-containing fluids should be administered because of the danger of hypoglycemia.
Excessive insensible fluid losses may cause severe volume depletion and hypernatremia. If renal failure prevents rapid clearance of salicylate, hemodialysis can be performed against a bicarbonate dialysate. ALCOHOLS Under most physiologic conditions, sodium, urea, and glucose generate the osmotic pressure of blood. Plasma osmolality is calculated according to the following expression: Posm = 2[Na+] + glucose + BUN (all in mmol/L), or, using conventional laboratory values in which glucose and BUN are expressed in milligrams per deciliter: Posm = 2[Na+] + glucose/18 + BUN/2.8. The calculated and measured osmolality should agree within 10–15 mmol/kg H2O. When the measured osmolality exceeds the calculated osmolality by >10–15 mmol/kg H2O, one of two circumstances prevails. Either the serum sodium is spuriously low, as with hyperlipidemia or hyperproteinemia (pseudohyponatremia), or osmolytes other than sodium salts, glucose, or urea have accumulated in plasma. Examples of such osmolytes include mannitol, radiocontrast media, ethanol, isopropyl alcohol, ethylene glycol, propylene glycol, methanol, and acetone. In this situation, the difference between the calculated and the measured osmolality (the osmolar gap) is proportional to the concentration of the unmeasured solute. With an appropriate clinical history and index of suspicion, identification of an osmolar gap is helpful in identifying the presence of poison-associated AG acidosis. Three alcohols may cause fatal intoxications: ethylene glycol, methanol, and isopropyl alcohol. All cause an elevated osmolar gap, but only the first two cause a high-AG acidosis. ETHYLENE GLYCOL (See also Chap. 472e) Ingestion of ethylene glycol (commonly used in antifreeze) leads to a metabolic acidosis and severe damage to the CNS, heart, lungs, and kidneys. The increased AG and osmolar gap are attributable to ethylene glycol and its metabolites, oxalic acid, glycolic acid, and other organic acids.
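The osmolality arithmetic above, in conventional units, can be sketched as follows (the example values are illustrative, not from the text):

```python
def calculated_osmolality(na_mmol_l, glucose_mg_dl, bun_mg_dl):
    """Posm = 2*[Na+] + glucose/18 + BUN/2.8, with Na+ in mmol/L
    and glucose and BUN in mg/dL; result in mOsm/kg."""
    return 2 * na_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

def osmolar_gap(measured_osm, na, glucose, bun):
    """A gap >10-15 mOsm/kg suggests pseudohyponatremia or an
    unmeasured osmolyte (e.g., ethanol, methanol, ethylene glycol)."""
    return measured_osm - calculated_osmolality(na, glucose, bun)

# Illustrative values: Na+ 140 mmol/L, glucose 90 mg/dL, BUN 14 mg/dL
print(calculated_osmolality(140, 90, 14))  # ~290 mOsm/kg
print(osmolar_gap(320, 140, 90, 14))       # ~30 -> a suspicious gap
```

A measured osmolality of 320 mOsm/kg against the calculated ~290 yields a gap of about 30 mOsm/kg, well above the 10–15 threshold the text gives.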
Lactic acid production increases secondary to inhibition of the tricarboxylic acid cycle and an altered intracellular redox state. Diagnosis is facilitated by recognizing oxalate crystals in the urine, the presence of an osmolar gap in serum, and a high-AG acidosis. Although use of a Wood's lamp to visualize the fluorescent additive to commercial antifreeze in the urine has been suggested in ethylene glycol ingestion, this finding is rarely reproducible. The combination of a high AG and a high osmolar gap in a patient suspected of ethylene glycol ingestion should be taken as evidence of ethylene glycol toxicity, and treatment should not be delayed while awaiting measurement of ethylene glycol levels. Treatment includes the prompt institution of a saline or osmotic diuresis, thiamine and pyridoxine supplements, fomepizole, and, usually, hemodialysis. The IV administration of the alcohol dehydrogenase inhibitor fomepizole (4-methylpyrazole; 15 mg/kg as a loading dose) is the agent of choice and offers the advantage of a predictable decline in ethylene glycol levels without the excessive obtundation seen during ethyl alcohol infusion. If used, IV ethanol should be infused to achieve a blood level of 22 mmol/L (100 mg/dL). Both fomepizole and ethanol reduce toxicity because they compete with ethylene glycol for metabolism by alcohol dehydrogenase. Hemodialysis is indicated when the arterial pH is <7.3 or the osmolar gap exceeds 20 mOsm/kg. METHANOL (See also Chap. 472e) The ingestion of methanol (wood alcohol) causes metabolic acidosis, and its metabolites formaldehyde and formic acid cause severe optic nerve and CNS damage. Lactic acid, ketoacids, and other unidentified organic acids may contribute to the acidosis. Because of methanol's low molecular mass (32 Da), an osmolar gap is usually present. Treatment is similar to that for ethylene glycol intoxication, including general supportive measures, fomepizole, and hemodialysis (as above).
PROPYLENE GLYCOL Propylene glycol is the vehicle used in the IV administration of diazepam, lorazepam, phenobarbital, nitroglycerin, etomidate, enoximone, and phenytoin. Propylene glycol is generally safe for limited use in these IV preparations, but toxicity has been reported, most often in the intensive care unit in patients receiving frequent or continuous therapy. This form of high-gap acidosis should be considered in patients with unexplained high-gap acidosis, hyperosmolality, and clinical deterioration. Propylene glycol, like ethylene glycol and methanol, is metabolized by alcohol dehydrogenase. With propylene glycol intoxication, the first response is to stop the offending infusion; fomepizole should also be administered in acidotic patients. ISOPROPYL ALCOHOL Ingested isopropanol is absorbed rapidly and may be fatal when as little as 150 mL of rubbing alcohol, solvent, or deicer is consumed. A plasma level >400 mg/dL is life-threatening. Isopropyl alcohol is metabolized by alcohol dehydrogenase to acetone. The characteristic features differ from those of ethylene glycol and methanol in that the parent compound, not the metabolites, causes toxicity, and an AG acidosis is not present because acetone is rapidly excreted. Both isopropyl alcohol and acetone increase the osmolar gap, and hypoglycemia is common. Isopropyl alcohol toxicity is treated by watchful waiting and supportive therapy: IV fluids, pressors, ventilatory support if needed, and occasionally hemodialysis for prolonged coma, hemodynamic instability, or levels >400 mg/dL. Alternative diagnoses should be considered if the patient does not improve significantly within a few hours.
PYROGLUTAMIC ACID Acetaminophen-induced high-AG metabolic acidosis is uncommon but is being recognized more often, both in patients with acetaminophen overdose and in malnourished or critically ill patients receiving acetaminophen in typical doses. 5-Oxoproline accumulation should be suspected in the setting of an unexplained high-AG acidosis without elevation of the osmolar gap in patients receiving acetaminophen. The first step in treatment is to discontinue the drug immediately; sodium bicarbonate should also be given IV. Although N-acetylcysteine has been suggested, it is not known whether it hastens the metabolism of 5-oxoproline by increasing intracellular glutathione concentrations in this setting. Renal Failure (See also Chap. 335) The hyperchloremic acidosis of moderate renal insufficiency is eventually converted to the high-AG acidosis of advanced renal failure. Poor filtration and reabsorption of organic anions contribute to the pathogenesis. As renal disease progresses, the number of functioning nephrons eventually becomes insufficient to keep pace with net acid production. Uremic acidosis is characterized, therefore, by a reduced rate of NH4+ production and excretion. The acid retained in chronic renal disease is buffered by alkaline salts from bone. Despite significant retention of acid (up to 20 mmol/d), the serum [HCO3−] does not decrease further, indicating participation of buffers outside the extracellular compartment. Chronic metabolic acidosis results in significant loss of bone mass due to reduction in bone calcium carbonate. Chronic acidosis also increases urinary calcium excretion in proportion to cumulative acid retention. Because of the association of renal failure acidosis with muscle catabolism and bone disease, both uremic acidosis and the hyperchloremic acidosis of renal failure require oral alkali replacement to maintain the [HCO3−] >22 mmol/L.
This can be accomplished with relatively modest amounts of alkali (1.0–1.5 mmol/kg body weight per day). Sodium citrate (Shohl's solution) and NaHCO3 tablets (650-mg tablets contain 7.8 meq) are equally effective alkalinizing salts. Citrate enhances the absorption of aluminum from the gastrointestinal tract and should never be given together with aluminum-containing antacids because of the risk of aluminum intoxication.

Alkali can be lost from the gastrointestinal tract in diarrhea or from the kidneys (renal tubular acidosis, RTA). In these disorders (Table 66-5), reciprocal changes in [Cl−] and [HCO3−] result in a normal AG. In pure non-AG acidosis, therefore, the increase in [Cl−] above the normal value approximates the decrease in [HCO3−]; the absence of such a relationship suggests a mixed disturbance. In diarrhea, stools contain higher concentrations of HCO3− and decomposed HCO3− than plasma, so that metabolic acidosis develops along with volume depletion. Instead of an acid urine pH (as anticipated with systemic acidosis), the urine pH is usually >6 because metabolic acidosis and hypokalemia increase renal synthesis and excretion of NH4+, thus providing a urinary buffer that increases the urine pH. Metabolic acidosis due to gastrointestinal losses with a high urine pH can be differentiated from RTA because urinary NH4+ excretion is typically low in RTA and high with diarrhea. Urinary NH4+ levels can be estimated by calculating the urine anion gap (UAG): UAG = [Na+ + K+]u − [Cl−]u. When [Cl−]u > [Na+ + K+]u, the UAG is negative by definition; this indicates that the urine ammonium level is appropriately increased, suggesting an extrarenal cause of the acidosis. Conversely, when the UAG is positive, the urine ammonium level is low, suggesting a renal cause of the acidosis.

Causes of non–anion gap acidosis (Table 66-5) include:
I. Gastrointestinal bicarbonate loss: diarrhea; external pancreatic or small-bowel drainage; ureterosigmoidostomy, jejunal loop, ileal loop; drugs
II. Renal acidosis: proximal RTA (type 2; drug-induced: acetazolamide, topiramate); classic distal RTA (type 1; drug-induced: amphotericin B, ifosfamide); generalized distal nephron dysfunction with hyperkalemia (type 4 RTA), including mineralocorticoid resistance (PHA I, autosomal dominant) and voltage defects (PHA I, autosomal recessive, and PHA II); chronic progressive kidney disease
III. Drug-induced hyperkalemia: potassium-sparing diuretics (amiloride, triamterene, spironolactone, eplerenone)
IV. Other: acid loads (ammonium chloride, hyperalimentation); loss of potential bicarbonate (ketosis with ketone excretion)
Abbreviations: ACE-I, angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; PHA, pseudohypoaldosteronism; RTA, renal tubular acidosis.

Proximal RTA (type 2 RTA) (Chap. 339) is most often due to generalized proximal tubular dysfunction manifested by glycosuria, generalized aminoaciduria, and phosphaturia (Fanconi syndrome). With a low plasma [HCO3−], the urine pH is acid (pH <5.5). The fractional excretion of HCO3− may exceed 10–15% when the serum [HCO3−] is >20 mmol/L. Because HCO3− is not reabsorbed normally in the proximal tubule, therapy with NaHCO3 will enhance renal potassium wasting and hypokalemia. The typical findings in acquired or inherited forms of classic distal RTA (type 1 RTA) include hypokalemia, non-AG metabolic acidosis, low urinary NH4+ excretion (positive UAG, low urine [NH4+]), and an inappropriately high urine pH (pH >5.5). Most patients have hypocitraturia and hypercalciuria, so nephrolithiasis, nephrocalcinosis, and bone disease are common. In generalized distal RTA (type 4 RTA), hyperkalemia is disproportionate to the reduction in glomerular filtration rate (GFR) because of coexisting dysfunction of potassium and acid secretion.
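The urine anion gap introduced above to separate renal from extrarenal causes of a non-gap acidosis can be sketched as follows (the urine electrolyte values are illustrative, not from the text):

```python
def urine_anion_gap(u_na, u_k, u_cl):
    """UAG = (Na+ + K+)u - (Cl-)u, all in mmol/L. A negative UAG
    means urinary NH4+ excretion is appropriately high (extrarenal
    cause, e.g., diarrhea); a positive UAG means NH4+ excretion is
    low (renal cause, e.g., RTA)."""
    return (u_na + u_k) - u_cl

print(urine_anion_gap(40, 30, 90))   # -20 -> suggests diarrhea
print(urine_anion_gap(40, 30, 50))   # 20  -> suggests RTA
```

The sign convention follows directly from the text: unmeasured urinary NH4+ is excreted with Cl−, so abundant NH4+ drives [Cl−]u above [Na+ + K+]u and makes the gap negative.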
Urinary ammonium excretion is invariably depressed, and renal function may be compromised, for example, by diabetic nephropathy, obstructive uropathy, or chronic tubulointerstitial disease. Hyporeninemic hypoaldosteronism typically causes non-AG metabolic acidosis, most commonly in older adults with diabetes mellitus or tubulointerstitial disease and renal insufficiency. Patients usually have mild to moderate CKD (GFR, 20–50 mL/min) and acidosis, with elevation in serum [K+] (5.2–6.0 mmol/L), concurrent hypertension, and congestive heart failure. Both the metabolic acidosis and the hyperkalemia are out of proportion to the impairment in GFR. Nonsteroidal anti-inflammatory drugs, trimethoprim, pentamidine, and angiotensin-converting enzyme (ACE) inhibitors can also cause non-AG metabolic acidosis in patients with renal insufficiency (Table 66-5).

METABOLIC ALKALOSIS Metabolic alkalosis is manifested by an elevated arterial pH, an increase in the serum [HCO3−], and an increase in Paco2 as a result of compensatory alveolar hypoventilation (Table 66-1). It is often accompanied by hypochloremia and hypokalemia. The arterial pH establishes the diagnosis, because it is increased in metabolic alkalosis and decreased or normal in respiratory acidosis. Metabolic alkalosis frequently occurs in association with other disorders such as respiratory acidosis or alkalosis or metabolic acidosis. Metabolic alkalosis occurs as a result of a net gain of HCO3− or a loss of nonvolatile acid (usually HCl, by vomiting) from the extracellular fluid. For HCO3− to be added to the extracellular fluid, it must be administered exogenously or synthesized endogenously, in part or entirely by the kidneys. Because it is unusual for alkali to be added to the body, the disorder involves a generative stage, in which the loss of acid usually causes alkalosis, and a maintenance stage, in which the kidneys fail to compensate by excreting HCO3−.
Maintenance of metabolic alkalosis represents a failure of the kidneys to eliminate HCO3− in the usual manner. The kidneys will retain, rather than excrete, the excess alkali and maintain the alkalosis if (1) volume deficiency, chloride deficiency, and K+ deficiency exist in combination with a reduced GFR; or (2) hypokalemia exists because of autonomous hyperaldosteronism. In the first example, alkalosis is corrected by administration of NaCl and KCl, whereas, in the latter, it may be necessary to repair the alkalosis by pharmacologic or surgical intervention rather than with saline administration. To establish the cause of metabolic alkalosis (Table 66-6), it is necessary to assess the status of the extracellular fluid volume (ECFV), the recumbent and upright blood pressure, the serum [K+], and the renin-aldosterone system. For example, the presence of chronic hypertension and chronic hypokalemia in an alkalotic patient suggests either mineralocorticoid excess or that the hypertensive patient is receiving diuretics. Low plasma renin activity and normal urine [Na+] and [Cl−] in a patient who is not taking diuretics indicate a primary mineralocorticoid excess syndrome. The combination of hypokalemia and alkalosis in a normotensive, nonedematous patient can be due to Bartter's or Gitelman's syndrome, magnesium deficiency, vomiting, exogenous alkali, or diuretic ingestion. Determination of urine electrolytes (especially the urine [Cl−]) and screening of the urine for diuretics may be helpful. If the urine is alkaline, with an elevated [Na+] and [K+] but low [Cl−], the diagnosis is usually either vomiting (overt or surreptitious) or alkali ingestion. If the urine is relatively acid and has low concentrations of Na+, K+, and Cl−, the most likely possibilities are prior vomiting, the posthypercapnic state, or prior diuretic ingestion.
If, on the other hand, neither the urine sodium, potassium, nor chloride concentrations are depressed, magnesium deficiency, Bartter's or Gitelman's syndrome, or current diuretic ingestion should be considered. Bartter's syndrome is distinguished from Gitelman's syndrome by the hypocalciuria and hypomagnesemia of the latter disorder. Alkali Administration Chronic administration of alkali to individuals with normal renal function rarely causes alkalosis. However, in patients with coexistent hemodynamic disturbances, alkalosis can develop because the normal capacity to excrete HCO3− may be exceeded or there may be enhanced reabsorption of HCO3−. Such patients include those who receive HCO3− (PO or IV), acetate loads (parenteral hyperalimentation solutions), citrate loads (transfusions), or antacids plus cation-exchange resins (aluminum hydroxide and sodium polystyrene sulfonate).

Causes of metabolic alkalosis (Table 66-6) include:
I. Exogenous HCO3− loads: acute alkali administration; milk-alkali syndrome
II. Effective ECFV contraction, normotension, K+ deficiency, and secondary hyperreninemic hyperaldosteronism, of gastrointestinal or renal origin: nonreabsorbable anions, including penicillin and carbenicillin; Mg2+ deficiency; Bartter's syndrome (loss-of-function mutations of transporters and ion channels in the TALH); Gitelman's syndrome (loss-of-function mutation in the Na+-Cl− cotransporter in the DCT)
III. ECFV expansion, hypertension, K+ deficiency, and mineralocorticoid excess, including low-renin states: primary aldosteronism; adrenal enzyme defects (11β-hydroxylase deficiency, 17α-hydroxylase deficiency)
IV. Gain-of-function mutation of the renal sodium channel with ECFV expansion, hypertension, K+ deficiency, and hyporeninemic hypoaldosteronism
Abbreviations: DCT, distal convoluted tubule; ECFV, extracellular fluid volume; TALH, thick ascending limb of Henle's loop.
Nursing home patients receiving tube feedings have a higher incidence of metabolic alkalosis than nursing home patients receiving oral feedings. METABOLIC ALKALOSIS ASSOCIATED WITH ECFV CONTRACTION, K+ DEPLETION, AND SECONDARY HYPERRENINEMIC HYPERALDOSTERONISM Gastrointestinal Origin Gastrointestinal loss of H+ from vomiting or gastric aspiration results in retention of HCO3−. During active vomiting, the filtered load of bicarbonate is acutely increased so that it exceeds the reabsorptive capacity of the proximal tubule for HCO3−, and the urine becomes alkaline and high in potassium. When vomiting ceases, the persistence of volume, potassium, and chloride depletion maintains the alkalosis because of an enhanced capacity of the nephron to reabsorb HCO3−. Correction of the contracted ECFV with NaCl and repair of K+ deficits corrects the acid-base disorder by restoring the ability of the kidney to excrete the excess bicarbonate. Renal Origin • DIURETICS (See also Chap. 279) Drugs that induce chloruresis, such as thiazides and loop diuretics (furosemide, bumetanide, torsemide, and ethacrynic acid), acutely diminish the ECFV without altering the total-body bicarbonate content. The serum [HCO3−] increases because the reduced ECFV "contracts" the [HCO3−] in the plasma (contraction alkalosis). Chronic administration of diuretics tends to generate an alkalosis by increasing distal salt delivery, so that K+ and H+ secretion are stimulated. The alkalosis is maintained by persistence of the contraction of the ECFV, secondary hyperaldosteronism, K+ deficiency, and the direct effect of the diuretic (as long as diuretic administration continues). Repair of the alkalosis is achieved by providing isotonic saline to correct the ECFV deficit. SOLUTE-LOSING DISORDERS: BARTTER'S SYNDROME AND GITELMAN'S SYNDROME See Chap. 339.
NONREABSORBABLE ANIONS AND MAGNESIUM DEFICIENCY Administration of large quantities of nonreabsorbable anions, such as penicillin or carbenicillin, can enhance distal acidification and K+ secretion by increasing the transepithelial potential difference. Mg2+ deficiency results in hypokalemic alkalosis by enhancing distal acidification through stimulation of renin and hence aldosterone secretion.

POTASSIUM DEPLETION Chronic K+ depletion may cause metabolic alkalosis by increasing urinary acid excretion. Both NH4+ production and absorption are enhanced, and HCO3- reabsorption is stimulated. Chronic K+ deficiency upregulates the renal H+,K+-ATPase to increase K+ absorption at the expense of enhanced H+ secretion. Alkalosis associated with severe K+ depletion is resistant to salt administration, but repair of the K+ deficiency corrects the alkalosis.

AFTER TREATMENT OF LACTIC ACIDOSIS OR KETOACIDOSIS When an underlying stimulus for the generation of lactic acid or ketoacid is removed rapidly, as with repair of circulatory insufficiency or with insulin therapy, the lactate or ketones are metabolized to yield an equivalent amount of HCO3-. Other sources of new HCO3- are additive with the original amount generated by organic anion metabolism to create a surfeit of HCO3-. Such sources include (1) new HCO3- added to the blood by the kidneys as a result of enhanced acid excretion during the preexisting period of acidosis, and (2) alkali therapy during the treatment phase of the acidosis. Acidosis-induced contraction of the ECFV and K+ deficiency act to sustain the alkalosis.

POSTHYPERCAPNIA Prolonged CO2 retention with chronic respiratory acidosis enhances renal HCO3- absorption and the generation of new HCO3- (increased net acid excretion). Metabolic alkalosis results from the effect of the persistently elevated [HCO3-] when the elevated Paco2 is abruptly returned toward normal.
METABOLIC ALKALOSIS ASSOCIATED WITH ECFV EXPANSION, HYPERTENSION, AND HYPERALDOSTERONISM Increased aldosterone levels may be the result of autonomous primary adrenal overproduction or of secondary aldosterone release due to renal overproduction of renin. Mineralocorticoid excess increases net acid excretion and may result in metabolic alkalosis, which may be worsened by associated K+ deficiency. ECFV expansion from salt retention causes hypertension. The kaliuresis persists because of mineralocorticoid excess and distal Na+ absorption causing enhanced K+ excretion, continued K+ depletion with polydipsia, inability to concentrate the urine, and polyuria. Liddle's syndrome (Chap. 339) results from increased activity of the collecting duct Na+ channel (ENaC) and is a rare monogenic form of hypertension due to volume expansion manifested as hypokalemic alkalosis and normal aldosterone levels.

PART 2 Cardinal Manifestations and Presentation of Diseases

Symptoms With metabolic alkalosis, changes in CNS and peripheral nervous system function are similar to those of hypocalcemia (Chap. 423); symptoms include mental confusion; obtundation; and a predisposition to seizures, paresthesia, muscular cramping, tetany, aggravation of arrhythmias, and hypoxemia in chronic obstructive pulmonary disease. Related electrolyte abnormalities include hypokalemia and hypophosphatemia.

Treatment of metabolic alkalosis is primarily directed at correcting the underlying stimulus for HCO3- generation. If primary aldosteronism, renal artery stenosis, or Cushing's syndrome is present, correction of the underlying cause will reverse the alkalosis. H+ loss by the stomach or kidneys can be mitigated by the use of proton pump inhibitors or the discontinuation of diuretics. The second aspect of treatment is to remove the factors that sustain the inappropriate increase in HCO3- reabsorption, such as ECFV contraction or K+ deficiency. K+ deficits should always be repaired.
Isotonic saline is usually sufficient to reverse the alkalosis if ECFV contraction is present. If associated conditions preclude infusion of saline, renal HCO3- loss can be accelerated by administration of acetazolamide, a carbonic anhydrase inhibitor, which is usually effective in patients with adequate renal function but can worsen K+ losses. Dilute hydrochloric acid (0.1 N HCl) is also effective but can cause hemolysis and must be delivered slowly into a central vein.

RESPIRATORY ACIDOSIS Respiratory acidosis can be due to severe pulmonary disease, respiratory muscle fatigue, or abnormalities in ventilatory control and is recognized by an increase in Paco2 and a decrease in pH (Table 66-7). In acute respiratory acidosis, there is an immediate compensatory elevation in HCO3- (due to cellular buffering mechanisms), which increases 1 mmol/L for every 10-mmHg increase in Paco2. In chronic respiratory acidosis (>24 h), renal adaptation increases the [HCO3-] by 4 mmol/L for every 10-mmHg increase in Paco2. The serum [HCO3-] usually does not increase above 38 mmol/L.

The clinical features vary according to the severity and duration of the respiratory acidosis, the underlying disease, and whether there is accompanying hypoxemia. A rapid increase in Paco2 may cause anxiety, dyspnea, confusion, psychosis, and hallucinations and may progress to coma. Lesser degrees of dysfunction in chronic hypercapnia include sleep disturbances; loss of memory; daytime somnolence; personality changes; impairment of coordination; and motor disturbances such as tremor, myoclonic jerks, and asterixis. Headaches and other signs that mimic raised intracranial pressure, such as papilledema, abnormal reflexes, and focal muscle weakness, are due to vasoconstriction secondary to loss of the vasodilator effects of CO2.

Depression of the respiratory center by a variety of drugs, injury, or disease can produce respiratory acidosis.
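The compensation rules for respiratory acidosis (a 1-mmol/L rise in [HCO3-] per 10-mmHg rise in Paco2 acutely, 4 mmol/L chronically, with a usual ceiling near 38 mmol/L) amount to a quick bedside calculation. A minimal sketch, assuming normal baseline values of 24 mmol/L for [HCO3-] and 40 mmHg for Paco2 (the function name and baselines are illustrative, not from this text):

```python
def expected_hco3_resp_acidosis(paco2, chronic=False, baseline_hco3=24.0,
                                baseline_paco2=40.0, ceiling=38.0):
    """Expected serum [HCO3-] (mmol/L) for a given Paco2 in respiratory acidosis.

    Acute: +1 mmol/L per 10-mmHg rise in Paco2 (cellular buffering).
    Chronic (>24 h): +4 mmol/L per 10-mmHg rise (renal adaptation),
    usually not exceeding ~38 mmol/L.
    Baseline normals of 24 mmol/L and 40 mmHg are assumed.
    """
    slope = 4.0 if chronic else 1.0
    hco3 = baseline_hco3 + slope * (paco2 - baseline_paco2) / 10.0
    # Apply the ~38 mmol/L ceiling only to the chronic (renal) response
    return min(hco3, ceiling) if chronic else hco3
```

For example, in chronic hypercapnia with a Paco2 of 60 mmHg, the expected [HCO3-] is 24 + 4 × 2 = 32 mmol/L; a measured value far from this suggests a superimposed metabolic disturbance.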
This may occur acutely with general anesthetics, sedatives, and head trauma or chronically with sedatives, alcohol, intracranial tumors, and the syndromes of sleep-disordered breathing, including the primary alveolar and obesity-hypoventilation syndromes (Chaps. 318 and 319). Abnormalities or disease in the motor neurons, neuromuscular junction, and skeletal muscle can cause hypoventilation via respiratory muscle fatigue. Mechanical ventilation, when not properly adjusted and supervised, may result in respiratory acidosis, particularly if CO2 production suddenly rises (because of fever, agitation, sepsis, or overfeeding) or alveolar ventilation falls because of worsening pulmonary function. High levels of positive end-expiratory pressure in the presence of reduced cardiac output may cause hypercapnia as a result of large increases in alveolar dead space (Chap. 306e). Permissive hypercapnia is being used with increasing frequency because of studies suggesting lower mortality rates than with conventional mechanical ventilation, especially with severe CNS or heart disease. The respiratory acidosis associated with permissive hypercapnia may require administration of NaHCO3 to increase the arterial pH to 7.25, but overcorrection of the acidemia may be deleterious. Acute hypercapnia follows sudden occlusion of the upper airway or generalized bronchospasm, as in severe asthma, anaphylaxis, inhalational burn, or toxin injury.

[Table 66-7: Causes of respiratory alkalosis and respiratory acidosis. Most entries were lost in extraction; only the recoverable items are shown. I. Alkalosis: A. Central nervous system stimulation (e.g., anxiety, psychosis; meningitis, encephalitis); B. Hypoxemia or tissue hypoxia (e.g., pneumonia, pulmonary edema); C. Drugs or hormones (e.g., pregnancy, progesterone); D. Stimulation of chest receptors; E. Miscellaneous. II. Acidosis: A. Central (e.g., drugs such as anesthetics, morphine, sedatives); B. Airway; C. Parenchyma; D. Neuromuscular; E. Miscellaneous.]
Chronic hypercapnia and respiratory acidosis occur in end-stage obstructive lung disease. Restrictive disorders involving both the chest wall and the lungs can cause respiratory acidosis because the high metabolic cost of respiration causes ventilatory muscle fatigue. Advanced stages of intrapulmonary and extrapulmonary restrictive defects present as chronic respiratory acidosis.

The diagnosis of respiratory acidosis requires the measurement of Paco2 and arterial pH. A detailed history and physical examination often indicate the cause. Pulmonary function studies (Chap. 306e), including spirometry, diffusion capacity for carbon monoxide, lung volumes, and arterial Paco2 and O2 saturation, usually make it possible to determine if respiratory acidosis is secondary to lung disease. The workup for nonpulmonary causes should include a detailed drug history, measurement of hematocrit, and assessment of the upper airway, chest wall, pleura, and neuromuscular function.

The management of respiratory acidosis depends on its severity and rate of onset. Acute respiratory acidosis can be life-threatening, and measures to reverse the underlying cause should be undertaken simultaneously with restoration of adequate alveolar ventilation. This may necessitate tracheal intubation and assisted mechanical ventilation. Oxygen administration should be titrated carefully in patients with severe obstructive pulmonary disease and chronic CO2 retention who are breathing spontaneously (Chap. 314). When oxygen is used injudiciously, these patients may experience progression of the respiratory acidosis. Aggressive and rapid correction of hypercapnia should be avoided, because the falling Paco2 may provoke the same complications noted with acute respiratory alkalosis (i.e., cardiac arrhythmias, reduced cerebral perfusion, and seizures).
The Paco2 should be lowered gradually in chronic respiratory acidosis, aiming to restore the Paco2 to baseline and to provide sufficient Cl- and K+ to enhance the renal excretion of HCO3-. Chronic respiratory acidosis is frequently difficult to correct, but measures aimed at improving lung function (Chap. 314) can help some patients and forestall further deterioration in most.

RESPIRATORY ALKALOSIS Alveolar hyperventilation decreases Paco2 and increases the HCO3-/Paco2 ratio, thus increasing pH (Table 66-7). Nonbicarbonate cellular buffers respond by consuming HCO3-. Hypocapnia develops when a sufficiently strong ventilatory stimulus causes CO2 output in the lungs to exceed its metabolic production by tissues. Plasma pH and [HCO3-] appear to vary proportionately with Paco2 over a range from 40 to 15 mmHg. The relationship between arterial [H+] and Paco2 is ∼0.7 nmol/L per mmHg (or 0.01 pH unit/mmHg), and that for plasma [HCO3-] is 0.2 mmol/L per mmHg. Hypocapnia sustained for >2–6 h is further compensated by a decrease in renal ammonium and titratable acid excretion and a reduction in filtered HCO3- reabsorption. Full renal adaptation to respiratory alkalosis may take several days and requires normal volume status and renal function. The kidneys appear to respond directly to the lowered Paco2 rather than to alkalosis per se. In chronic respiratory alkalosis, a 1-mmHg decrease in Paco2 causes a 0.4- to 0.5-mmol/L drop in [HCO3-] and a 0.3-nmol/L decrease in [H+] (or a 0.003 increase in pH).

The effects of respiratory alkalosis vary according to duration and severity but are primarily those of the underlying disease. Reduced cerebral blood flow as a consequence of a rapid decline in Paco2 may cause dizziness, mental confusion, and seizures, even in the absence of hypoxemia.
The cardiovascular effects of acute hypocapnia in the conscious human are generally minimal, but in the anesthetized or mechanically ventilated patient, cardiac output and blood pressure may fall because of the depressant effects of anesthesia and positive-pressure ventilation on heart rate, systemic resistance, and venous return. Cardiac arrhythmias may occur in patients with heart disease as a result of changes in oxygen unloading by blood from a left shift in the hemoglobin-oxygen dissociation curve (Bohr effect). Acute respiratory alkalosis causes intracellular shifts of Na+, K+, and PO42- and reduces free [Ca2+] by increasing the protein-bound fraction. Hypocapnia-induced hypokalemia is usually minor.

Chronic respiratory alkalosis is the most common acid-base disturbance in critically ill patients and, when severe, portends a poor prognosis. Many cardiopulmonary disorders manifest respiratory alkalosis in their early to intermediate stages, and the finding of normocapnia and hypoxemia in a patient with hyperventilation may herald the onset of rapid respiratory failure and should prompt an assessment to determine if the patient is becoming fatigued. Respiratory alkalosis is common during mechanical ventilation.

The hyperventilation syndrome may be disabling. Paresthesia; circumoral numbness; chest wall tightness or pain; dizziness; inability to take an adequate breath; and, rarely, tetany may be sufficiently stressful to perpetuate the disorder. Arterial blood-gas analysis demonstrates an acute or chronic respiratory alkalosis, often with hypocapnia in the range of 15–30 mmHg and no hypoxemia. CNS diseases or injury can produce several patterns of hyperventilation and sustained Paco2 levels of 20–30 mmHg. Hyperthyroidism, high caloric loads, and exercise raise the basal metabolic rate, but ventilation usually rises in proportion so that arterial blood gases are unchanged and respiratory alkalosis does not develop.
Salicylates are the most common cause of drug-induced respiratory alkalosis as a result of direct stimulation of the medullary chemoreceptor (Chap. 472e). The methylxanthines theophylline and aminophylline stimulate ventilation and increase the ventilatory response to CO2. Progesterone increases ventilation and lowers arterial Paco2 by as much as 5–10 mmHg; therefore, chronic respiratory alkalosis is a common feature of pregnancy. Respiratory alkalosis is also prominent in liver failure, and its severity correlates with the degree of hepatic insufficiency. Respiratory alkalosis is often an early finding in gram-negative septicemia, before fever, hypoxemia, or hypotension develops.

The diagnosis of respiratory alkalosis depends on measurement of arterial pH and Paco2. The plasma [K+] is often reduced and the [Cl-] increased. In the acute phase, respiratory alkalosis is not associated with increased renal HCO3- excretion, but within hours net acid excretion is reduced. In general, the [HCO3-] falls by 2.0 mmol/L for each 10-mmHg decrease in Paco2. Chronic hypocapnia reduces the serum [HCO3-] by 4.0 mmol/L for each 10-mmHg decrease in Paco2. It is unusual to observe a plasma [HCO3-] <12 mmol/L as a result of a pure respiratory alkalosis.

When a diagnosis of respiratory alkalosis is made, its cause should be investigated. The diagnosis of hyperventilation syndrome is made by exclusion. In difficult cases, it may be important to rule out other conditions such as pulmonary embolism, coronary artery disease, and hyperthyroidism.

The management of respiratory alkalosis is directed toward alleviation of the underlying disorder. If respiratory alkalosis complicates ventilator management, changes in dead space, tidal volume, and frequency can minimize the hypocapnia. Patients with the hyperventilation syndrome may benefit from reassurance, rebreathing from a paper bag during symptomatic attacks, and attention to underlying psychological stress.
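The compensation rules above (a 2.0-mmol/L fall in [HCO3-] per 10-mmHg fall in Paco2 acutely, 4.0 mmol/L chronically, with values below ~12 mmol/L suggesting something other than pure respiratory alkalosis) can be sketched the same way. The function name and baseline normals of 24 mmol/L and 40 mmHg are assumptions for illustration:

```python
def expected_hco3_resp_alkalosis(paco2, chronic=False, baseline_hco3=24.0,
                                 baseline_paco2=40.0, floor=12.0):
    """Expected serum [HCO3-] (mmol/L) for a given Paco2 in respiratory alkalosis.

    Acute: -2 mmol/L per 10-mmHg fall in Paco2; chronic: -4 mmol/L.
    A plasma [HCO3-] <12 mmol/L is unusual in pure respiratory alkalosis,
    so the prediction is clamped at that floor (an assumed convention).
    Baseline normals of 24 mmol/L and 40 mmHg are assumed.
    """
    slope = 4.0 if chronic else 2.0
    hco3 = baseline_hco3 - slope * (baseline_paco2 - paco2) / 10.0
    return max(hco3, floor)
```

For example, chronic hypocapnia at a Paco2 of 20 mmHg predicts a [HCO3-] of 24 − 4 × 2 = 16 mmol/L; a measured value well below the prediction should raise suspicion of a coexisting metabolic acidosis.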
Antidepressants and sedatives are not recommended. β-Adrenergic blockers may ameliorate peripheral manifestations of the hyperadrenergic state.

SEXUAL DYSFUNCTION
Kevin T. McVary

Male sexual dysfunction affects 10–25% of middle-aged and elderly men, and female sexual dysfunction occurs with similar frequency. Demographic changes, the popularity of newer treatments, and greater awareness of sexual dysfunction by patients and society have led to increased diagnosis and associated health care expenditures for the management of this common disorder. Because many patients are reluctant to initiate discussion of their sex lives, physicians should address this topic directly to elicit a history of sexual dysfunction.

Normal male sexual function requires (1) an intact libido, (2) the ability to achieve and maintain penile erection, (3) ejaculation, and (4) detumescence. Libido refers to sexual desire and is influenced by a variety of visual, olfactory, tactile, auditory, imaginative, and hormonal stimuli. Sex steroids, particularly testosterone, act to increase libido. Libido can be diminished by hormonal or psychiatric disorders and by medications.

Penile tumescence leading to erection depends on an increased flow of blood into the lacunar network accompanied by complete relaxation of the arteries and corporal smooth muscle. The microarchitecture of the corpora is composed of a mass of smooth muscle (trabecula) that contains a network of endothelial-lined vessels (lacunar spaces). Subsequent compression of the trabecular smooth muscle against the fibroelastic tunica albuginea causes a passive closure of the emissary veins and accumulation of blood in the corpora. In the presence of a full erection and a competent valve mechanism, the corpora become noncompressible cylinders from which blood does not escape. The central nervous system (CNS) exerts an important influence by either stimulating or antagonizing spinal pathways that mediate erectile function and ejaculation.
The erectile response is mediated by a combination of central (psychogenic) innervation and peripheral (reflexogenic) innervation. Sensory nerves that originate from receptors in the penile skin and glans converge to form the dorsal nerve of the penis, which travels to the S2-S4 dorsal root ganglia via the pudendal nerve. Parasympathetic nerve fibers to the penis arise from neurons in the intermediolateral columns of the S2-S4 sacral spinal segments. Sympathetic innervation originates from the T11 to the L2 spinal segments and descends through the hypogastric plexus. Neural input to smooth-muscle tone is crucial to the initiation and maintenance of an erection. There is also an intricate interaction between the corporal smooth-muscle cell and its overlying endothelial cell lining (Fig. 67-1). Nitric oxide, which induces vascular relaxation, promotes erection and is opposed by endothelin 1 (ET-1) and Rho kinase, which mediate vascular contraction. Nitric oxide is synthesized from l-arginine by nitric oxide synthase and is released from the nonadrenergic, noncholinergic (NANC) autonomic nerve supply to act postjunctionally on smooth-muscle cells. Nitric oxide increases the production of cyclic 3′,5′-guanosine monophosphate (cyclic GMP), which induces relaxation of smooth muscle (Fig. 67-2). Cyclic GMP is gradually broken down by phosphodiesterase type 5 (PDE-5). Inhibitors of PDE-5, such as the oral medications sildenafil, vardenafil, and tadalafil, maintain erections by reducing the breakdown of cyclic GMP. However, if nitric oxide is not produced at some level, PDE-5 inhibitors are ineffective, as these drugs facilitate, but do not initiate, the initial enzyme cascade. In addition to nitric oxide, vasoactive prostaglandins (PGE1, PGF2α) are synthesized within the cavernosal tissue and increase cyclic AMP levels, also leading to relaxation of cavernosal smooth-muscle cells. 
FIGURE 67-1 Pathways that regulate penile smooth-muscle relaxation and erection. A. Outflow from the parasympathetic nervous system leads to relaxation of the cavernous sinusoids in two ways, both of which increase the concentration of nitric oxide (NO) in smooth-muscle cells. First, NO is the neurotransmitter in nonadrenergic, noncholinergic (NANC) fibers; second, stimulation of endothelial nitric oxide synthase (eNOS) through cholinergic output causes increased production of NO. The NO produced in the endothelium then diffuses into the smooth-muscle cells and decreases their intracellular calcium concentration through a pathway mediated by cyclic guanosine monophosphate (cGMP), leading to relaxation. A separate mechanism that decreases the intracellular calcium level is mediated by cyclic adenosine monophosphate (cAMP). With increased cavernosal blood flow, as well as increased levels of vascular endothelial growth factor (VEGF), the endothelial release of NO is further sustained through the phosphatidylinositol 3 (PI3) kinase pathway. Active treatments (red boxes) include drugs that affect the cGMP pathway (phosphodiesterase [PDE] type 5 inhibitors and guanylyl cyclase agonists), the cAMP pathway (alprostadil), or both pathways (papaverine), along with neural-tone mediators (phentolamine and Rho kinase inhibitors). Agents that are being developed include guanylyl cyclase agonists (to bypass the need for endogenous NO) and Rho kinase inhibitors (to inhibit tonic contraction of smooth-muscle cells mediated through endothelin). α1, α-adrenergic receptor; GPCR, G-protein–coupled receptor; GTP, guanosine triphosphate; PGE, prostaglandin E; PGF, prostaglandin F. B. Biochemical pathways of NO synthesis and action. Sildenafil, vardenafil, and tadalafil enhance erectile function by inhibiting phosphodiesterase type 5 (PDE-5), thereby maintaining high levels of cyclic 3′,5′-guanosine monophosphate (cyclic GMP).
iCa2+, intracellular calcium; NOS, nitric oxide synthase. (Part A from K McVary: N Engl J Med 357:2472, 2007; with permission.)

Ejaculation is stimulated by the sympathetic nervous system; this results in contraction of the epididymis, vas deferens, seminal vesicles, and prostate, causing seminal fluid to enter the urethra. Seminal fluid emission is followed by rhythmic contractions of the bulbocavernosus and ischiocavernosus muscles, leading to ejaculation. Premature ejaculation usually is related to anxiety or a learned behavior and is amenable to behavioral therapy or treatment with medications such as selective serotonin reuptake inhibitors (SSRIs). Retrograde ejaculation results when the internal urethral sphincter does not close; it may occur in men with diabetes or after surgery involving the bladder neck.

Detumescence is mediated by norepinephrine from the sympathetic nerves, endothelin from the vascular surface, and smooth-muscle contraction induced by postsynaptic α-adrenergic receptors and activation of Rho kinase. These events increase venous outflow and restore the flaccid state. Venous leak can cause premature detumescence and is caused by insufficient relaxation of the corporal smooth muscle rather than a specific anatomic defect. Priapism refers to a persistent and painful erection and may be associated with sickle cell anemia, hypercoagulable states, spinal cord injury, or injection of vasodilator agents into the penis.

FIGURE 67-2 Biochemical pathways modified by phosphodiesterase type 5 (PDE-5) inhibitors. Sildenafil, vardenafil, tadalafil, and avanafil enhance erectile function by inhibiting PDE-5, thereby maintaining high levels of cyclic 3′,5′-guanosine monophosphate (cyclic GMP). iCa2+, intracellular calcium; NO, nitric oxide; NOS, nitric oxide synthase.

ERECTILE DYSFUNCTION Epidemiology Erectile dysfunction (ED) is not considered a normal part of the aging process.
Nonetheless, it is associated with certain physiologic and psychological changes related to age. In the Massachusetts Male Aging Study (MMAS), a community-based survey of men age 40–70, 52% of responders reported some degree of ED. Complete ED occurred in 10% of respondents, moderate ED in 25%, and minimal ED in 17%. The incidence of moderate or severe ED more than doubled between the ages of 40 and 70. In the National Health and Social Life Survey (NHSLS), which included a sample of men and women age 18–59, 10% of men reported being unable to maintain an erection (corresponding to the proportion of men in the MMAS reporting severe ED). Incidence was highest among men in the age group 50–59 (21%) and men who were poor (14%), divorced (14%), and less educated (13%). The incidence of ED is also higher among men with certain medical disorders, such as diabetes mellitus, obesity, lower urinary tract symptoms secondary to benign prostatic hyperplasia (BPH), heart disease, hypertension, decreased high-density lipoprotein (HDL) levels, and diseases associated with general systemic inflammation (e.g., rheumatoid arthritis). Cardiovascular disease and ED share etiologies as well as pathophysiology (e.g., endothelial dysfunction), and the degree of ED appears to correlate with the severity of cardiovascular disease. Consequently, ED represents a “sentinel symptom” in patients with occult cardiovascular and peripheral vascular disease. Smoking is also a significant risk factor in the development of ED. Medications used in treating diabetes or cardiovascular disease are additional risk factors (see below). There is a higher incidence of ED among men who have undergone radiation or surgery for prostate cancer and in those with a lower spinal cord injury. Psychological causes of ED include depression, anger, stress from unemployment, and other stress-related causes. 
Pathophysiology ED may result from three basic mechanisms: (1) failure to initiate (psychogenic, endocrinologic, or neurogenic), (2) failure to fill (arteriogenic), and (3) failure to store adequate blood volume within the lacunar network (venoocclusive dysfunction). These categories are not mutually exclusive, and multiple factors contribute to ED in many patients. For example, diminished filling pressure can lead secondarily to venous leak. Psychogenic factors frequently coexist with other etiologic factors and should be considered in all cases. Diabetic, atherosclerotic, and drug-related causes account for >80% of cases of ED in older men.

VASCULOGENIC The most common organic cause of ED is a disturbance of blood flow to and from the penis. Atherosclerotic or traumatic arterial disease can decrease flow to the lacunar spaces, resulting in decreased rigidity and an increased time to full erection. Excessive outflow through the veins despite adequate inflow also may contribute to ED. Structural alterations to the fibroelastic components of the corpora may cause a loss of compliance and inability to compress the tunical veins. This condition may result from aging, increased cross-linking of collagen fibers induced by nonenzymatic glycosylation, hypoxemia, or altered synthesis of collagen associated with hypercholesterolemia.

NEUROGENIC Disorders that affect the sacral spinal cord or the autonomic fibers to the penis preclude nervous system relaxation of penile smooth muscle, thus leading to ED. In patients with spinal cord injury, the degree of ED depends on the completeness and level of the lesion. Patients with incomplete lesions or injuries to the upper part of the spinal cord are more likely to retain erectile capabilities than are those with complete lesions or injuries to the lower part. Although 75% of patients with spinal cord injuries have some erectile capability, only 25% have erections sufficient for penetration.
Other neurologic disorders commonly associated with ED include multiple sclerosis and peripheral neuropathy. The latter is often due to either diabetes or alcoholism. Pelvic surgery may cause ED through disruption of the autonomic nerve supply.

ENDOCRINOLOGIC Androgens increase libido, but their exact role in erectile function is unclear. Individuals with castrate levels of testosterone can achieve erections from visual or sexual stimuli. Nonetheless, normal levels of testosterone appear to be important for erectile function, particularly in older males. Androgen replacement therapy can improve depressed erectile function when it is secondary to hypogonadism; however, it is not useful for ED when endogenous testosterone levels are normal. Increased prolactin may decrease libido by suppressing gonadotropin-releasing hormone (GnRH), and it also leads to decreased testosterone levels. Treatment of hyperprolactinemia with dopamine agonists can restore libido and testosterone.

DIABETIC ED occurs in 35–75% of men with diabetes mellitus. Pathologic mechanisms are related primarily to diabetes-associated vascular and neurologic complications. Diabetic macrovascular complications are related mainly to age, whereas microvascular complications correlate with the duration of diabetes and the degree of glycemic control (Chap. 417). Individuals with diabetes also have reduced amounts of nitric oxide synthase in both endothelial and neural tissues.

PSYCHOGENIC Two mechanisms contribute to the inhibition of erections in psychogenic ED. First, psychogenic stimuli to the sacral cord may inhibit reflexogenic responses, thereby blocking activation of vasodilator outflow to the penis. Second, excess sympathetic stimulation in an anxious man may increase penile smooth-muscle tone.
The most common causes of psychogenic ED are performance anxiety, depression, relationship conflict, loss of attraction, sexual inhibition, conflicts over sexual preference, sexual abuse in childhood, and fear of pregnancy or sexually transmitted disease. Almost all patients with ED, even when it has a clear-cut organic basis, develop a psychogenic component as a reaction to ED.

MEDICATION-RELATED Medication-induced ED (Table 67-1) is estimated to occur in 25% of men seen in general medical outpatient clinics. The adverse effects related to drug therapy are additive, especially in older men. In addition to the drug itself, the disease being treated is likely to contribute to sexual dysfunction. Among the antihypertensive agents, the thiazide diuretics and beta blockers have been implicated most frequently. Calcium channel blockers and angiotensin-converting enzyme inhibitors are cited less frequently. These drugs may act directly at the corporal level (e.g., calcium channel blockers) or indirectly by reducing pelvic blood pressure, which is important in the development of penile rigidity. α-Adrenergic blockers are less likely to cause ED. Estrogens, GnRH agonists, H2 antagonists, and spironolactone cause ED by suppressing gonadotropin production or by blocking androgen action. Antidepressant and antipsychotic agents (particularly neuroleptics, tricyclics, and SSRIs) are associated with erectile, ejaculatory, orgasmic, and sexual desire difficulties. If there is a strong association between the institution of a drug and the onset of ED, alternative medications should be considered. Otherwise, it is often practical to treat the ED without attempting multiple changes in medications, as it may be difficult to establish a causal role for a drug.
APPROACH TO THE PATIENT: Erectile Dysfunction

A good physician-patient relationship helps unravel the possible causes of ED, many of which require discussion of personal and sometimes embarrassing topics. For this reason, a primary care provider is often ideally suited to initiate the evaluation. However, a significant percentage of men experience ED and remain undiagnosed unless specifically questioned about this issue. By far the two most common reasons for underreporting of ED are patient embarrassment and perceptions of physicians' inattention to the disease. Once the topic is initiated by the physician, patients are more willing to discuss their potency issues.

A complete medical and sexual history should be taken in an effort to assess whether the cause of ED is organic, psychogenic, or multifactorial (Fig. 67-3). Both the patient and his sexual partner should be interviewed regarding sexual history. ED should be distinguished from other sexual problems, such as premature ejaculation. Lifestyle factors such as sexual orientation, the patient's distress from ED, performance anxiety, and details of sexual techniques should be addressed. Standardized questionnaires are available to assess ED, including the International Index of Erectile Function (IIEF) and the more easily administered Sexual Health Inventory for Men (SHIM), a validated abridged version of the IIEF.

FIGURE 67-3 Algorithm for the evaluation and management of patients with erectile dysfunction (elements recoverable from the figure: medical, sexual, and psychosocial history; physical examination; serum testosterone and prolactin levels; lifestyle risk management; medication review). PDE, phosphodiesterase.

The initial evaluation of ED begins with a review of the patient's medical, surgical, sexual, and psychosocial histories. The history should note whether the patient has experienced pelvic trauma, surgery, or radiation.
In light of the increasing recognition of the relationship between lower urinary tract symptoms and ED, it is advisable to evaluate for the presence of symptoms of bladder outlet obstruction. Questions should focus on the onset of symptoms, the presence and duration of partial erections, and the progression of ED. A history of nocturnal or early morning erections is useful for distinguishing physiologic ED from psychogenic ED. Nocturnal erections occur during rapid eye movement (REM) sleep and require intact neurologic and circulatory systems. Organic causes of ED generally are characterized by a gradual and persistent change in rigidity or the inability to sustain nocturnal, coital, or self-stimulated erections. The patient should be questioned about the presence of penile curvature or pain with coitus. It is also important to address libido, as decreased sexual drive and ED are sometimes the earliest signs of endocrine abnormalities (e.g., increased prolactin, decreased testosterone levels). It is useful to ask whether the problem is confined to coitus with one partner or also involves other partners; ED not uncommonly arises in association with new or extramarital sexual relationships. Situational ED, as opposed to consistent ED, suggests psychogenic causes. Ejaculation is much less commonly affected than erection, but questions should be asked about whether ejaculation is normal, premature, delayed, or absent. Relevant risk factors should be identified, such as diabetes mellitus, coronary artery disease (CAD), and neurologic disorders. The patient’s surgical history should be explored with an emphasis on bowel, bladder, prostate, and vascular procedures. A complete drug history is also important. Social changes that may precipitate ED are also crucial to the evaluation, including health worries, spousal death, divorce, relationship difficulties, and financial concerns. 
Because ED commonly involves a host of endothelial cell risk factors, men with ED report higher rates of overt and silent myocardial infarction. Therefore, ED in an otherwise asymptomatic male warrants consideration of other vascular disorders, including CAD. The physical examination is an essential element in the assessment of ED. Signs of hypertension as well as evidence of thyroid, hepatic, hematologic, cardiovascular, or renal diseases should be sought. An assessment should be made of the endocrine and vascular systems, the external genitalia, and the prostate gland. The penis should be palpated carefully along the corpora to detect fibrotic plaques. Reduced testicular size and loss of secondary sexual characteristics are suggestive of hypogonadism. Neurologic examination should include assessment of anal sphincter tone, investigation of the bulbocavernosus reflex, and testing for peripheral neuropathy. Although hyperprolactinemia is uncommon, a serum prolactin level should be measured, as decreased libido and/or ED may be the presenting symptoms of a prolactinoma or another mass lesion of the sella (Chap. 403). The serum testosterone level should be measured, and if it is low, gonadotropins should be measured to determine whether hypogonadism is primary (testicular) or secondary (hypothalamic-pituitary) in origin (Chap. 411). If not performed recently, serum chemistries, complete blood count (CBC), and lipid profiles may be of value, as they can yield evidence of anemia, diabetes, hyperlipidemia, or other systemic diseases associated with ED. Determination of serum prostate-specific antigen (PSA) should be conducted according to recommended clinical guidelines (Chap. 115). Additional diagnostic testing is rarely necessary in the evaluation of ED. However, in selected patients, specialized testing may provide insight into pathologic mechanisms of ED and aid in the selection of treatment options. 
Optional specialized testing includes (1) studies of nocturnal penile tumescence and rigidity; (2) vascular testing (in-office injection of vasoactive substances, penile Doppler ultrasound, penile angiography, dynamic infusion cavernosography/cavernosometry); (3) neurologic testing (biothesiometry-graded vibratory perception, somatosensory evoked potentials); and (4) psychological diagnostic tests. The information potentially gained from these procedures must be balanced against their invasiveness and cost. 
PART 2 Cardinal Manifestations and Presentation of Diseases 
Patient and partner education is essential in the treatment of ED. In goal-directed therapy, education facilitates understanding of the disease, the results of the tests, and the selection of treatment. Discussion of treatment options helps clarify how treatment is best offered and stratify first- and second-line therapies. Patients with high-risk lifestyle issues such as obesity, smoking, alcohol abuse, and recreational drug use should be counseled on the role those factors play in the development of ED. Therapies currently employed for the treatment of ED include oral PDE-5 inhibitor therapy (most commonly used), injection therapies, testosterone therapy, penile devices, and psychological therapy. In addition, limited data suggest that treatments for underlying risk factors and comorbidities—for example, weight loss, exercise, stress reduction, and smoking cessation—may improve erectile function. Decisions regarding therapy should take into account the preferences and expectations of patients and their partners. Sildenafil, tadalafil, vardenafil, and avanafil are the only approved and effective oral agents for the treatment of ED. These four medications have markedly improved the management of ED because they are effective for the treatment of a broad range of causes, including psychogenic, diabetic, vasculogenic, post-radical prostatectomy (nerve-sparing procedures), and spinal cord injury. 
They belong to a class of medications that are selective and potent inhibitors of PDE-5, the predominant phosphodiesterase isoform found in the penis. They are administered in graduated doses and enhance erections after sexual stimulation. The onset of action is approximately 30–120 min, depending on the medication used and other factors, such as recent food intake. Reduced initial doses should be considered for patients who are elderly, are taking concomitant alpha blockers, have renal insufficiency, or are taking medications that inhibit the CYP3A4 metabolic pathway in the liver (e.g., erythromycin, cimetidine, ketoconazole, and possibly itraconazole and mibefradil), as these factors may increase the serum concentration of the PDE-5 inhibitors (PDE-5i) or promote hypotension. Initially, there were concerns about the cardiovascular safety of PDE-5i drugs. These agents act as mild vasodilators, and warnings exist about orthostatic hypotension with concomitant use of alpha blockers. The use of PDE-5i is not contraindicated in men who are also receiving alpha blockers, but patients must be stabilized on the alpha blocker prior to initiating PDE-5i therapy. Concerns also existed that use of PDE-5i would increase cardiovascular events. However, the safety of these drugs has been confirmed in several controlled trials, with no increase in myocardial ischemic events or overall mortality compared to the general population. Several randomized trials have demonstrated the efficacy of this class of medications. There are no compelling data to support the superiority of one PDE-5i over another. Subtle differences between agents have variable clinical relevance (Table 67-2). Patients may fail to respond to a PDE-5i for several reasons (Table 67-3). Some patients may not tolerate PDE-5i secondary to adverse events from vasodilation in nonpenile tissues expressing PDE-5 or from the inhibition of homologous nonpenile isozymes (e.g., PDE-6, found in the retina). 
Abnormal vision attributed to the effects of PDE-5i on retinal PDE-6 is of short duration, is reported only with sildenafil, and is not thought to be clinically significant. A more serious concern is the possibility that PDE-5i may cause nonarteritic anterior ischemic optic neuropathy; although data to support that association are limited, it is prudent to avoid the use of these agents in men with a prior history of nonarteritic anterior ischemic optic neuropathy. Testosterone supplementation combined with a PDE-5i may be beneficial in improving erectile function in hypogonadal men with ED who are unresponsive to PDE-5i alone. These drugs do not affect ejaculation, orgasm, or sexual drive. Side effects associated with PDE-5i include headaches (19%), facial flushing (9%), dyspepsia (6%), and nasal congestion (4%). Approximately 7% of men using sildenafil may experience transient altered color vision (blue halo effect), and 6% of men taking tadalafil may experience loin pain. PDE-5i are contraindicated in men receiving nitrate therapy for cardiovascular disease, including agents delivered by the oral, sublingual, transnasal, and topical routes, because they can potentiate the hypotensive effect of nitrates and may result in profound shock. Likewise, amyl/butyl nitrate “poppers” may have a fatal synergistic effect on blood pressure. PDE-5i also should be avoided in patients with congestive heart failure and cardiomyopathy because of the risk of vascular collapse. Because sexual activity leads to an increase in physiologic expenditure (5–6 metabolic equivalents [METS]), physicians have been advised to exercise caution in prescribing any drug for sexual activity to those with active coronary disease, heart failure, borderline hypotension, or hypovolemia and to those on complex antihypertensive regimens. Although the various forms of PDE-5i have a common mechanism of action, there are a few differences among the four agents (Table 67-2). 
Tadalafil is unique in its longer half-life, whereas avanafil appears to have the most rapid onset of action. All four drugs are effective for patients with ED of all ages, severities, and etiologies. Although there are pharmacokinetic and pharmacodynamic differences among these agents, clinically relevant differences are not clear. Testosterone replacement is used to treat both primary and secondary causes of hypogonadism (Chap. 411). 
[Table 67-2. Pharmacology of the PDE-5 inhibitors (onset of action, half-life, dose, adverse effects, and contraindications). For sildenafil: Tmax 30–120 min, duration approximately 4 h, half-life 2–5 h, dose 25–100 mg (starting dose, 50 mg), with adverse effects of headache, flushing, dyspepsia, nasal congestion, and altered vision. Abbreviations: ETOH, alcohol; Tmax, time to maximum plasma concentration.]
Androgen supplementation in the setting of normal testosterone is rarely efficacious in the treatment of ED and is discouraged. Methods of androgen replacement include transdermal patches and gels, parenteral administration of long-acting testosterone esters (enanthate and cypionate), and oral preparations (17α-alkylated derivatives) (Chap. 411). Oral androgen preparations have the potential for hepatotoxicity and should be avoided. Men who receive testosterone should be reevaluated after 1–3 months and at least annually thereafter for testosterone levels, erectile function, and adverse effects, which may include gynecomastia, sleep apnea, development or exacerbation of lower urinary tract symptoms or BPH, prostate cancer, lowering of HDL, erythrocytosis, elevations of liver function tests, and reduced fertility. Periodic reevaluation should include measurement of CBC and PSA and digital rectal exam. Therapy should be discontinued in patients who do not respond within 3 months. Vacuum constriction devices (VCDs) are a well-established noninvasive therapy. They are a reasonable treatment alternative for select patients who cannot take sildenafil or do not desire other interventions. 
VCDs draw venous blood into the penis and use a constriction ring to restrict venous return and maintain tumescence. Adverse events with VCD include pain, numbness, bruising, and altered ejaculation. Additionally, many patients complain that the devices are cumbersome and that the induced erections have a nonphysiologic appearance and feel. If a patient fails to respond to oral agents, a reasonable next choice is intraurethral or self-injection of vasoactive substances. 
[Table 67-3. Issues to consider if patients report failure of PDE-5i to improve erectile dysfunction: a trial of medication on at least 6 different days at the maximal dose should be made before declaring the patient nonresponsive to PDE-5i; failure to include physical and psychic stimulation at the time of foreplay to induce endogenous NO; unrecognized hypogonadism. Abbreviations: NO, nitric oxide; PDE-5i, phosphodiesterase type 5 inhibitor.]
Intraurethral prostaglandin E1 (alprostadil), in the form of a semisolid pellet (doses of 125–1000 μg), is delivered with an applicator. Approximately 65% of men receiving intraurethral alprostadil respond with an erection when tested in the office, but only 50% achieve successful coitus at home. Intraurethral insertion is associated with a markedly reduced incidence of priapism in comparison to intracavernosal injection. Injection of synthetic formulations of alprostadil is effective in 70–80% of patients with ED, but discontinuation rates are high because of the invasive nature of administration. Doses range between 1 and 40 μg. 
Injection therapy is contraindicated in men with a history of hypersensitivity to the drug and men at risk for priapism (hypercoagulable states, sickle cell disease). Side effects include local adverse events, prolonged erections, pain, and fibrosis with chronic use. Various combinations of alprostadil, phentolamine, and/or papaverine sometimes are used. A less frequently used form of therapy for ED involves the surgical implantation of a semirigid or inflatable penile prosthesis. The choice of prosthesis is dependent on patient preference and should take into account body habitus and manual dexterity, which may affect the ability of the patient to manipulate the device. Because of the permanence of prosthetic devices, patients should be advised to first consider less invasive options for treatment. These surgical treatments are invasive, are associated with potential complications, and generally are reserved for treatment of refractory ED. Despite their high cost and invasiveness, penile prostheses are associated with high rates of patient and partner satisfaction. A course of sex therapy may be useful for addressing specific interpersonal factors that may affect sexual functioning. Sex therapy generally consists of in-session discussion and at-home exercises specific to the person and the relationship. Psychosexual therapy involves techniques such as sensate focus (nongenital massage), sensory awareness exercises, correction of misconceptions about sexuality, and interpersonal difficulties therapy (e.g., open communication about sexual issues, physical intimacy scheduling, and behavioral interventions). These approaches may be useful in patients who have psychogenic or social components to their ED, although data from randomized trials are scanty and inconsistent. If the patient is involved in an ongoing relationship, it is preferable that therapy include both partners. 
Female sexual dysfunction (FSD) has traditionally included disorders of desire, arousal, pain, and muted orgasm. The associated risk factors for FSD are similar to those in males: cardiovascular disease, endocrine disorders, hypertension, neurologic disorders, and smoking (Table 67-4). Epidemiologic data are limited, but the available estimates suggest that as many as 43% of women complain of at least one sexual problem. Despite the recent interest in organic causes of FSD, desire and arousal phase disorders (including lubrication complaints) remain the most common presenting problems when surveyed in a community-based population. The female sexual response requires the presence of estrogens. A role for androgens is also likely but less well established. In the CNS, estrogens and androgens work synergistically to enhance sexual arousal and response. A number of studies report enhanced libido in women during preovulatory phases of the menstrual cycle, suggesting that hormones involved in the ovulatory surge (e.g., estrogens) increase desire. Sexual motivation is heavily influenced by context, including the environment and partner factors. Once sufficient sexual desire is reached, sexual arousal is mediated by the central and autonomic nervous systems. Cerebral sympathetic outflow is thought to increase desire, and peripheral parasympathetic activity results in clitoral vasocongestion and vaginal secretion (lubrication). The neurotransmitters for clitoral corporal engorgement are similar to those in the male, with a prominent role for nitric oxide (NO) released by nerves, smooth muscle, and endothelium. A fine network of vaginal nerves and arterioles promotes a vaginal transudate. The major transmitters of this complex vaginal response are not certain, but roles for NO and vasoactive intestinal polypeptide (VIP) are suspected. 
Investigators studying the normal female sexual response have challenged the long-held construct of a linear and unmitigated relationship between initial desire, arousal, vasocongestion, lubrication, and eventual orgasm. 
[Table 67-4. Risk factors for female sexual dysfunction: neurologic disease (stroke, spinal cord injury, parkinsonism); trauma, genital surgery, radiation; endocrinopathies (diabetes, hyperprolactinemia); psychological factors and interpersonal relationship disorders (sexual abuse, life stressors); and medications, including antiandrogens (cimetidine, spironolactone), antidepressants, alcohol, hypnotics, sedatives, antihistamines, sympathomimetic amines, and antihypertensives (diuretics, calcium channel blockers). Abbreviation: GnRH, gonadotropin-releasing hormone.]
Caregivers should consider a paradigm of a positive emotional and physical outcome with one, many, or no orgasmic peak and release. Although there are anatomic differences as well as variation in the density of vascular and neural beds in males and females, the primary effectors of sexual response are strikingly similar. Intact sensation is important for arousal. Thus, reduced levels of sexual functioning are more common in women with peripheral neuropathies (e.g., diabetes). Vaginal lubrication is a transudate of serum that results from the increased pelvic blood flow associated with arousal. Vascular insufficiency from a variety of causes may compromise adequate lubrication and result in dyspareunia. Cavernosal and arteriole smooth-muscle relaxation occurs via increased nitric oxide synthase (NOS) activity and produces engorgement in the clitoris and the surrounding vestibule. Orgasm requires an intact sympathetic outflow tract; hence, orgasmic disorders are common in female patients with spinal cord injuries. APPROACH TO THE PATIENT: Many women do not volunteer information about their sexual response. Open-ended questions in a supportive atmosphere are helpful in initiating a discussion of sexual fitness in women who are reluctant to discuss such issues. 
Once a complaint has been voiced, a comprehensive evaluation should be performed, including a medical history, a psychosocial history, a physical examination, and limited laboratory testing. The history should include the usual medical, surgical, obstetric, psychological, gynecologic, sexual, and social information. Past experiences, intimacy, knowledge, and partner availability should also be ascertained. Medical disorders that may affect sexual health should be delineated. They include diabetes, cardiovascular disease, gynecologic conditions, obstetric history, depression, anxiety disorders, and neurologic disease. Medications should be reviewed as they may affect arousal, libido, and orgasm. The need for counseling should be identified, and life stresses should be recognized. The physical examination should assess the genitalia, including the clitoris. Pelvic floor examination may identify prolapse or other disorders. Laboratory studies are needed, especially if menopausal status is uncertain. Estradiol, follicle-stimulating hormone (FSH), and luteinizing hormone (LH) are usually obtained, and dehydroepiandrosterone (DHEA) should be considered as it reflects adrenal androgen secretion. A CBC, liver function assessment, and lipid studies may be useful, if not otherwise obtained. Complicated diagnostic evaluations such as clitoral Doppler ultrasonography and biothesiometry require expensive equipment and are of uncertain utility. It is important for the patient to identify which symptoms are most distressing. The evaluation of FSD previously occurred mainly in a psychosocial context. However, inconsistencies between diagnostic categories based only on psychosocial considerations and the emerging recognition of organic etiologies have led to a new classification of FSD. 
This diagnostic scheme is based on four components that are not mutually exclusive: (1) hypoactive sexual desire—the persistent or recurrent lack of sexual thoughts and/or receptivity to sexual activity, which causes personal distress; hypoactive sexual desire may result from endocrine failure or may be associated with psychological or emotional disorders; (2) sexual arousal disorder—the persistent or recurrent inability to attain or maintain sexual excitement, which causes personal distress; (3) orgasmic disorder—the persistent or recurrent loss of orgasmic potential after sufficient sexual stimulation and arousal, which causes personal distress; and (4) sexual pain disorder—persistent or recurrent genital pain associated with noncoital sexual stimulation, which causes personal distress. This newer classification emphasizes “personal distress” as a requirement for dysfunction and provides clinicians with an organized framework for evaluation before or in conjunction with more traditional counseling methods. An open discussion with the patient is important as couples may need to be educated about normal anatomy and physiologic responses, including the role of orgasm, in sexual encounters. Physiologic changes associated with aging and/or disease should be explained. Couples may need to be reminded that clitoral stimulation rather than coital intromission may be more beneficial. Behavioral modification and nonpharmacologic therapies should be a first step. Patient and partner counseling may improve communication and relationship strains. Lifestyle changes involving known risk factors can be an important part of the treatment process. Emphasis on maximizing physical health and avoiding lifestyles (e.g., smoking, alcohol abuse) and medications likely to produce FSD is important (Table 67-4). The use of topical lubricants may address complaints of dyspareunia and dryness. 
Contributing medications such as antidepressants may need to be altered, including the use of medications with less impact on sexual function, dose reduction, medication switching, or drug holidays. In postmenopausal women, estrogen replacement therapy may be helpful in treating vaginal atrophy, decreasing coital pain, and improving clitoral sensitivity (Chap. 413). Estrogen replacement in the form of local cream is the preferred method, as it avoids systemic side effects. Androgen levels in women decline substantially before menopause. However, low levels of testosterone or DHEA are not effective predictors of a positive therapeutic outcome with androgen therapy. The widespread use of exogenous androgens is not supported by the literature except in select circumstances (premature ovarian failure or menopausal states) and in secondary arousal disorders. The efficacy of PDE-5i in FSD has been a marked disappointment in light of the proposed role of nitric oxide–dependent physiology in the normal female sexual response. The use of PDE-5i for FSD should be discouraged pending proof that it is effective. In patients with arousal and orgasmic difficulties, the option of using a clitoral vacuum device may be explored. This handheld battery-operated device has a small soft plastic cup that applies a vacuum over the stimulated clitoris. This causes increased cavernosal blood flow, engorgement, and vaginal lubrication. 
Chapter 68 Hirsutism 
David A. Ehrmann 
Hirsutism, which is defined as androgen-dependent excessive male-pattern hair growth, affects approximately 10% of women. Hirsutism is most often idiopathic or the consequence of androgen excess associated with the polycystic ovarian syndrome (PCOS). Less frequently, it may result from adrenal androgen overproduction as occurs in nonclassic congenital adrenal hyperplasia (CAH) (Table 68-1). Rarely, it is a sign of a serious underlying condition. 
Cutaneous manifestations commonly associated with hirsutism include acne and male-pattern balding (androgenic alopecia). Virilization refers to a condition in which androgen levels are sufficiently high to cause additional signs and symptoms, such as deepening of the voice, breast atrophy, increased muscle bulk, clitoromegaly, and increased libido; virilization is an ominous sign that suggests the possibility of an ovarian or adrenal neoplasm. Hair can be categorized as either vellus (fine, soft, and not pigmented) or terminal (long, coarse, and pigmented). The number of hair follicles does not change over an individual’s lifetime, but the follicle size and type of hair can change in response to numerous factors, particularly androgens. Androgens are necessary for terminal hair and sebaceous gland development and mediate differentiation of pilosebaceous units (PSUs) into either a terminal hair follicle or a sebaceous gland. In the former case, androgens transform the vellus hair into a terminal hair; in the latter case, the sebaceous component proliferates and the hair remains vellus. There are three phases in the cycle of hair growth: (1) anagen (growth phase), (2) catagen (involution phase), and (3) telogen (rest phase). Depending on the body site, hormonal regulation may play an important role in the hair growth cycle. For example, the eyebrows, eyelashes, and vellus hairs are androgen-insensitive, whereas the axillary and pubic areas are sensitive to low levels of androgens. Hair growth on the face, chest, upper abdomen, and back requires higher levels of androgens and is therefore more characteristic of the pattern typically seen in men. Androgen excess in women leads to increased hair growth in most androgen-sensitive sites except in the scalp region, where hair loss occurs because androgens cause scalp hairs to spend less time in the anagen phase. 
Although androgen excess underlies most cases of hirsutism, there is only a modest correlation between androgen levels and the quantity of hair growth. This is because hair growth from the follicle also depends on local growth factors, and there is variability in end-organ (PSU) sensitivity. Genetic factors and ethnic background also influence hair growth. In general, dark-haired individuals tend to be more hirsute than blond or fair individuals. Asians and Native Americans have relatively sparse hair in regions sensitive to high androgen levels, whereas people of Mediterranean descent are more hirsute. Historic elements relevant to the assessment of hirsutism include the age at onset and rate of progression of hair growth and associated symptoms or signs (e.g., acne). Depending on the cause, excess hair growth typically is first noted during the second and third decades of life. The growth is usually slow but progressive. Sudden development and rapid progression of hirsutism suggest the possibility of an androgen-secreting neoplasm, in which case virilization also may be present. The age at onset of menstrual cycles (menarche) and the pattern of the menstrual cycle should be ascertained; irregular cycles from the time of menarche onward are more likely to result from ovarian rather than adrenal androgen excess. Associated symptoms such as galactorrhea should prompt evaluation for hyperprolactinemia (Chap. 403) and possibly hypothyroidism (Chap. 405). Hypertension, striae, easy bruising, centripetal weight gain, and weakness suggest hypercortisolism (Cushing’s syndrome; Chap. 406). Rarely, patients with growth hormone excess (i.e., acromegaly) present with hirsutism. Use of medications such as phenytoin, minoxidil, and cyclosporine may be associated with androgen-independent excess hair growth (i.e., hypertrichosis). A family history of infertility and/or hirsutism may indicate disorders such as nonclassic CAH (Chap. 406). 
Lipodystrophy is often associated with increased ovarian androgen production that occurs as a consequence of insulin resistance. Patients with lipodystrophy have a preponderance of central fat distribution together with scant subcutaneous adipose tissue in the upper and lower extremities. Physical examination should include measurement of height and weight and calculation of body mass index (BMI). A BMI >25 kg/m2 is indicative of excess weight for height, and values >30 kg/m2 are often seen in association with hirsutism, probably the result of increased conversion of androgen precursors to testosterone. Notation should be made of blood pressure, as adrenal causes may be associated with hypertension. Cutaneous signs sometimes associated with androgen excess and insulin resistance include acanthosis nigricans and skin tags. Body fat distribution should also be noted. An objective clinical assessment of hair distribution and quantity is central to the evaluation in any woman presenting with hirsutism. This assessment permits the distinction between hirsutism and hypertrichosis and provides a baseline reference point to gauge the response to treatment. A simple and commonly used method to grade hair growth is the modified scale of Ferriman and Gallwey (Fig. 68-1), in which each of nine androgen-sensitive sites is graded from 0 to 4. Approximately 95% of white women have a score below 8 on this scale; thus, it is normal for most women to have some hair growth in androgen-sensitive sites. Scores above 8 suggest excess androgen-mediated hair growth, a finding that should be assessed further by means of hormonal evaluation (see below). In racial/ethnic groups that are less likely to manifest hirsutism (e.g., Asian women), additional cutaneous evidence of androgen excess should be sought, including pustular acne and thinning scalp hair. 
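The two bedside computations described above (body mass index and the modified Ferriman-Gallwey score) can be sketched as follows. This is an illustrative sketch only; the function names and the example weights, heights, and site grades are invented and are not drawn from the chapter.

```python
# Illustrative sketch of two simple assessments discussed in the text.
# Function names and example values are hypothetical.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def ferriman_gallwey(grades: list[int]) -> int:
    """Sum the grades (0 = no terminal hair, 4 = frankly virile)
    assigned to the nine androgen-sensitive body sites."""
    if len(grades) != 9 or any(not 0 <= g <= 4 for g in grades):
        raise ValueError("expected nine site grades, each 0-4")
    return sum(grades)

# Hypothetical patient: BMI > 30 kg/m2 is often seen with hirsutism.
print(round(bmi(90, 1.70), 1))

# Hypothetical grades; per the text, ~95% of white women score below 8,
# and scores of 8 or above suggest androgen-mediated hair growth.
score = ferriman_gallwey([2, 1, 0, 0, 3, 2, 0, 0, 0])
print(score, "suggests excess" if score >= 8 else "within normal range")
```

The threshold check mirrors the text's convention that a normal hirsutism score is <8, with higher scores prompting hormonal evaluation.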
Androgens are secreted by the ovaries and adrenal glands in response to their respective tropic hormones: luteinizing hormone (LH) and adrenocorticotropic hormone (ACTH). The principal circulating steroids involved in the etiology of hirsutism are testosterone, androstenedione, and dehydroepiandrosterone (DHEA) and its sulfated form (DHEAS). The ovaries and adrenal glands normally contribute about equally to testosterone production. Approximately half of the total testosterone originates from direct glandular secretion, and the remainder is derived from the peripheral conversion of androstenedione and DHEA (Chap. 411). Although it is the most important circulating androgen, testosterone is in effect the penultimate androgen in mediating hirsutism; it is converted to the more potent dihydrotestosterone (DHT) by the enzyme 5α-reductase, which is located in the PSU. DHT has a higher affinity for, and slower dissociation from, the androgen receptor. The local production of DHT allows it to serve as the primary mediator of androgen action at the level of the pilosebaceous unit. There are two isoenzymes of 5α-reductase: Type 2 is found in the prostate gland and in hair follicles, and type 1 is found primarily in sebaceous glands. One approach to the evaluation of hirsutism is depicted in Fig. 68-2. In addition to measuring blood levels of testosterone and DHEAS, it is important to measure the level of free (or unbound) testosterone. The fraction of testosterone that is not bound to its carrier protein, sex hormone–binding globulin (SHBG), is biologically available for conversion to DHT and binding to androgen receptors. Hyperinsulinemia and/or androgen excess decrease hepatic production of SHBG, resulting in levels of total testosterone within the high-normal range, whereas the unbound hormone is elevated more substantially. 
Although there is a decline in ovarian testosterone production after menopause, ovarian estrogen production decreases to an even greater extent, and the concentration of SHBG is reduced. Consequently, there is an increase in the relative proportion of unbound testosterone, which may exacerbate hirsutism after menopause. A baseline plasma total testosterone level >12 nmol/L (>3.5 ng/mL) usually indicates a virilizing tumor, whereas a level >7 nmol/L (>2 ng/mL) is suggestive. A basal DHEAS level >18.5 μmol/L (>7000 μg/L) suggests an adrenal tumor. Although DHEAS has been proposed as a “marker” of predominant adrenal androgen excess, it is not unusual to find modest elevations in DHEAS among women with PCOS. Computed tomography (CT) or magnetic resonance imaging (MRI) should be used to localize an adrenal mass, and transvaginal ultrasound usually suffices to identify an ovarian mass if clinical evaluation and hormonal levels suggest these possibilities. PCOS is the most common cause of ovarian androgen excess (Chap. 412). An increased ratio of LH to follicle-stimulating hormone (FSH) is characteristic in carefully studied patients with PCOS. However, because of the pulsatile nature of gonadotropin secretion, this finding may be absent in up to half of women with PCOS. Therefore, measurement of plasma LH and FSH is not needed to make a diagnosis of PCOS. Transvaginal ultrasound classically shows enlarged ovaries and increased stroma in women with PCOS. However, cystic ovaries also may be found in women without clinical or laboratory features of PCOS. It has been suggested that the measurement of circulating levels of antimüllerian hormone (AMH) may help in making the diagnosis of PCOS; however, this remains controversial. AMH levels reflect ovarian reserve and correlate with follicular number.
Measurement of AMH can be useful when considering premature ovarian insufficiency in a patient who presents with oligomenorrhea, in which case a subnormal level of AMH will be present. Because adrenal androgens are readily suppressed by low doses of glucocorticoids, the dexamethasone androgen-suppression test may broadly distinguish ovarian from adrenal androgen overproduction. A blood sample is obtained before and after the administration of dexamethasone (0.5 mg orally every 6 h for 4 days). An adrenal source is suggested by suppression of unbound testosterone into the normal range; incomplete suppression suggests ovarian androgen excess. An overnight 1-mg dexamethasone suppression test, with measurement of 8:00 a.m. serum cortisol, is useful when there is clinical suspicion of Cushing’s syndrome (Chap. 406). Nonclassic CAH is most commonly due to 21-hydroxylase deficiency but also can be caused by autosomal recessive defects in other steroidogenic enzymes necessary for adrenal corticosteroid synthesis (Chap. 406). Because of the enzyme defect, the adrenal gland cannot secrete glucocorticoids (especially cortisol) efficiently. This results in diminished negative feedback inhibition of ACTH, leading to compensatory adrenal hyperplasia and the accumulation of steroid precursors that subsequently are converted to androgen. Deficiency of 21-hydroxylase can be reliably excluded by determining a morning 17-hydroxyprogesterone level <6 nmol/L (<2 μg/L) (drawn in the follicular phase).
FIGURE 68-1 Hirsutism scoring scale of Ferriman and Gallwey. The nine androgen-sensitive body areas are graded from 0 (no terminal hair) to 4 (frankly virile) to obtain a total score. A normal hirsutism score is <8. (Modified from DA Ehrmann et al: Hyperandrogenism, hirsutism, and polycystic ovary syndrome, in LJ DeGroot and JL Jameson [eds], Endocrinology, 5th ed. Philadelphia, Saunders, 2006; with permission.)
Alternatively, 21-hydroxylase deficiency can be diagnosed by measurement of 17-hydroxyprogesterone 1 h after the administration of 250 μg of synthetic ACTH (cosyntropin) intravenously. Treatment of hirsutism may be accomplished pharmacologically or by mechanical means of hair removal. Nonpharmacologic treatments should be considered in all patients either as the only treatment or as an adjunct to drug therapy.
FIGURE 68-2 Algorithm for the evaluation and differential diagnosis of hirsutism. ACTH, adrenocorticotropic hormone; CAH, congenital adrenal hyperplasia; DHEAS, sulfated form of dehydroepiandrosterone; PCOS, polycystic ovarian syndrome. (In the algorithm, laboratory evaluation includes total and free testosterone and DHEAS. Normal levels lead to reassurance and nonpharmacologic approaches. A marked elevation, i.e., total testosterone >7 nmol/L (>2 ng/mL) or DHEAS >18.5 μmol/L (>7000 μg/L), prompts exclusion of an ovarian or adrenal neoplasm. Lesser increases are treated empirically or evaluated further: dexamethasone suppression to distinguish adrenal from ovarian causes and rule out Cushing’s, and ACTH stimulation to assess nonclassic CAH.)
Nonpharmacologic treatments include (1) bleaching; (2) depilatory treatments (removal of hair from the skin surface), such as shaving and chemical treatments; and (3) epilatory treatments (removal of the hair including the root), such as plucking, waxing, electrolysis, and laser therapy. Despite perceptions to the contrary, shaving does not increase the rate or density of hair growth. Chemical depilatory treatments may be useful for mild hirsutism that affects only limited skin areas, though they can cause skin irritation. Wax treatment removes hair temporarily but is uncomfortable. Electrolysis is effective for more permanent hair removal, particularly in the hands of a skilled electrologist. Laser phototherapy appears to be efficacious for hair removal. It delays hair regrowth and causes permanent hair removal in most patients.
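The branch points of the Fig. 68-2 evaluation can be summarized as a sketch. The marked-elevation cutoffs are from the text; whether a non-marked result counts as "increased" depends on the reporting laboratory's reference range, so it is passed in here as a flag. The function name and returned strings are paraphrases for illustration, not part of the figure.

```python
def evaluate_hirsutism(total_t_nmol: float, dheas_umol: float,
                       androgens_increased: bool) -> str:
    """Sketch of the Fig. 68-2 branch points (thresholds from the text).
    Marked elevation of either androgen prompts a search for an ovarian
    or adrenal neoplasm; normal levels permit reassurance; intermediate
    elevations are treated empirically or tested further."""
    if total_t_nmol > 7 or dheas_umol > 18.5:
        return "rule out ovarian or adrenal neoplasm"
    if androgens_increased:
        return ("treat empirically or consider further testing "
                "(dexamethasone suppression, ACTH stimulation)")
    return "reassurance; nonpharmacologic approaches"


print(evaluate_hirsutism(8.0, 5.0, False))   # marked elevation branch
print(evaluate_hirsutism(3.0, 5.0, False))   # normal branch
```

Note that a markedly elevated level takes priority over the flag: in the figure, tumor exclusion precedes empirical treatment.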
The long-term effects and complications associated with laser treatment are being evaluated. Pharmacologic therapy is directed at interrupting one or more of the steps in the pathway of androgen synthesis and action: (1) suppression of adrenal and/or ovarian androgen production; (2) enhancement of androgen binding to plasma-binding proteins, particularly SHBG; (3) impairment of the peripheral conversion of androgen precursors to active androgen; and (4) inhibition of androgen action at the target tissue level. Attenuation of hair growth is typically not evident until 4–6 months after initiation of medical treatment and in most cases leads to only a modest reduction in hair growth. Combination estrogen-progestin therapy in the form of an oral contraceptive is usually the first-line endocrine treatment for hirsutism and acne, after cosmetic and dermatologic management. The estrogenic component of most oral contraceptives currently in use is either ethinyl estradiol or mestranol. The suppression of LH leads to reduced production of ovarian androgens. The reduced androgen levels also result in a dose-related increase in SHBG, thus lowering the fraction of unbound plasma testosterone. Combination therapy also has been demonstrated to decrease DHEAS, perhaps by reducing ACTH levels. Estrogens also have a direct, dose-dependent suppressive effect on sebaceous cell function. The choice of a specific oral contraceptive should be predicated on the progestational component, as progestins vary in their suppressive effect on SHBG levels and in their androgenic potential. Ethynodiol diacetate has relatively low androgenic potential, whereas progestins such as norgestrel and levonorgestrel are particularly androgenic, as judged from their attenuation of the estrogen-induced increase in SHBG. Norgestimate exemplifies the newer generation of progestins that are virtually nonandrogenic.
Drospirenone, an analogue of spironolactone that has both antimineralocorticoid and antiandrogenic activities, has been approved for use as a progestational agent in combination with ethinyl estradiol. Oral contraceptives are contraindicated in women with a history of thromboembolic disease and women with increased risk of breast or other estrogen-dependent cancers (Chap. 413). There is a relative contraindication to the use of oral contraceptives in smokers and those with hypertension or a history of migraine headaches. In most trials, estrogen-progestin therapy alone improves the extent of acne by a maximum of 50–70%. The effect on hair growth may not be evident for 6 months, and the maximum effect may require 9–12 months owing to the length of the hair growth cycle. Improvements in hirsutism are typically in the range of 20%, but there may be an arrest of further progression of hair growth. Adrenal androgens are more sensitive than cortisol to the suppressive effects of glucocorticoids. Therefore, glucocorticoids are the mainstay of treatment in patients with CAH. Although glucocorticoids have been reported to restore ovulatory function in some women with PCOS, this effect is highly variable. Because of side effects from excessive glucocorticoids, low doses should be used. Dexamethasone (0.2–0.5 mg) or prednisone (5–10 mg) should be taken at bedtime to achieve maximal suppression by inhibiting the nocturnal surge of ACTH. Cyproterone acetate is the prototypic antiandrogen. It acts mainly by competitive inhibition of the binding of testosterone and DHT to the androgen receptor. In addition, it may enhance the metabolic clearance of testosterone by inducing hepatic enzymes. Although not available for use in the United States, cyproterone acetate is widely used in Canada, Mexico, and Europe. Cyproterone (50–100 mg) is given on days 1–15 and ethinyl estradiol (50 μg) is given on days 5–26 of the menstrual cycle.
Side effects include irregular uterine bleeding, nausea, headache, fatigue, weight gain, and decreased libido. Spironolactone, which usually is used as a mineralocorticoid antagonist, is also a weak antiandrogen. It is almost as effective as cyproterone acetate when used at high enough doses (100–200 mg daily). Patients should be monitored intermittently for hyperkalemia or hypotension, although these side effects are uncommon. Pregnancy should be avoided because of the risk of feminization of a male fetus. Spironolactone can also cause menstrual irregularity. It often is used in combination with an oral contraceptive, which suppresses ovarian androgen production and helps prevent pregnancy. Flutamide is a potent nonsteroidal antiandrogen that is effective in treating hirsutism, but concerns about the induction of hepatocellular dysfunction have limited its use. Finasteride is a competitive inhibitor of 5α-reductase type 2. Beneficial effects on hirsutism have been reported, but the predominance of 5α-reductase type 1 in the PSU appears to account for its limited efficacy. Finasteride would also be expected to impair sexual differentiation in a male fetus, and it should not be used in women who may become pregnant. Eflornithine cream (Vaniqa) has been approved as a novel treatment for unwanted facial hair in women, but long-term efficacy remains to be established. It can cause skin irritation under exaggerated conditions of use. Ultimately, the choice of any specific agent(s) must be tailored to the unique needs of the patient being treated. As noted previously, pharmacologic treatments for hirsutism should be used in conjunction with nonpharmacologic approaches. It is also helpful to review the pattern of female hair distribution in the normal population to dispel unrealistic expectations.
69 Menstrual Disorders and Pelvic Pain
Janet E. Hall
Menstrual dysfunction can signal an underlying abnormality that may have long-term health consequences. Although frequent or prolonged bleeding usually prompts a woman to seek medical attention, infrequent or absent bleeding may seem less troubling and the patient may not bring it to the attention of the physician. Thus, a focused menstrual history is a critical part of every encounter with a female patient. Pelvic pain is a common complaint that may relate to an abnormality of the reproductive organs but also may be of gastrointestinal, urinary tract, or musculoskeletal origin. Depending on its cause, pelvic pain may require urgent surgical attention.
Amenorrhea refers to the absence of menstrual periods. Amenorrhea is classified as primary if menstrual bleeding has never occurred in the absence of hormonal treatment or secondary if menstrual periods cease for 3–6 months. Primary amenorrhea is a rare disorder that occurs in <1% of the female population. However, between 3 and 5% of women experience at least 3 months of secondary amenorrhea in any specific year. There is no evidence that race or ethnicity influences the prevalence of amenorrhea. However, because of the importance of adequate nutrition for normal reproductive function, both the age at menarche and the prevalence of secondary amenorrhea vary significantly in different parts of the world. Oligomenorrhea is defined as a cycle length >35 days or <10 menses per year. Both the frequency and the amount of vaginal bleeding are irregular in oligomenorrhea, and moliminal symptoms (premenstrual breast tenderness, food cravings, mood lability), suggestive of ovulation, are variably present. Anovulation can also present with intermenstrual intervals <24 days or vaginal bleeding for >7 days. Frequent or heavy irregular bleeding is termed dysfunctional uterine bleeding if anatomic uterine and outflow tract lesions or a bleeding diathesis has been excluded.
Primary Amenorrhea The absence of menses by age 16 has been used traditionally to define primary amenorrhea. However, other factors, such as growth, secondary sexual characteristics, the presence of cyclic pelvic pain, and the secular trend toward an earlier age of menarche, particularly in African-American girls, also influence the age at which primary amenorrhea should be investigated. Thus, an evaluation for amenorrhea should be initiated by age 15 or 16 in the presence of normal growth and secondary sexual characteristics; age 13 in the absence of secondary sexual characteristics or if height is less than the third percentile; age 12 or 13 in the presence of breast development and cyclic pelvic pain; or within 2 years of breast development if menarche, defined by the first menstrual period, has not occurred. Secondary Amenorrhea or Oligomenorrhea Anovulation and irregular cycles are relatively common for up to 2 years after menarche and for 1–2 years before the final menstrual period. In the intervening years, menstrual cycle length is ~28 days, with an intermenstrual interval normally ranging between 25 and 35 days. Cycle-to-cycle variability in an individual woman who is ovulating consistently is generally ±2 days. Pregnancy is the most common cause of amenorrhea and should be excluded early in any evaluation of menstrual irregularity. However, many women occasionally miss a single period. Three or more months of secondary amenorrhea should prompt an evaluation, as should a history of intermenstrual intervals >35 or <21 days or bleeding that persists for >7 days. Evaluation of menstrual dysfunction depends on understanding the interrelationships between the four critical components of the reproductive tract: (1) the hypothalamus, (2) the pituitary, (3) the ovaries, and (4) the uterus and outflow tract (Fig. 69-1; Chap. 412).
This system is maintained by complex negative and positive feedback loops involving the ovarian steroids (estradiol and progesterone) and peptides (inhibin B and inhibin A) and the hypothalamic (gonadotropin-releasing hormone [GnRH]) and pituitary (follicle-stimulating hormone [FSH] and luteinizing hormone [LH]) components of this system (Fig. 69-1). Disorders of menstrual function can be thought of in two main categories: disorders of the uterus and outflow tract and disorders of ovulation. Many of the conditions that cause primary amenorrhea are congenital but go unrecognized until the time of normal puberty (e.g., genetic, chromosomal, and anatomic abnormalities). All causes of secondary amenorrhea also can cause primary amenorrhea. Disorders of the Uterus or Outflow Tract Abnormalities of the uterus and outflow tract typically present as primary amenorrhea. In patients with normal pubertal development and a blind vagina, the differential diagnosis includes obstruction by a transverse vaginal septum or imperforate hymen; müllerian agenesis (Mayer-Rokitansky-Küster-Hauser syndrome), which has been associated with mutations in the WNT4 gene; and androgen insensitivity syndrome (AIS), which is an X-linked recessive disorder that accounts for ~10% of all cases of primary amenorrhea (Chap. 411). Patients with AIS have a 46,XY karyotype, but because of the lack of androgen receptor responsiveness, those with complete AIS have severe underandrogenization and female external genitalia. The absence of pubic and axillary hair distinguishes them clinically from patients with müllerian agenesis, as does an elevated testosterone level. Asherman’s syndrome presents as secondary amenorrhea or hypomenorrhea and results from partial or complete obliteration of the uterine cavity by adhesions that prevent normal growth and shedding of the endometrium.
Curettage performed for pregnancy complications accounts for >90% of cases; genital tuberculosis is an important cause in regions where it is endemic.
FIGURE 69-1 Role of the hypothalamic-pituitary-gonadal axis in the etiology of amenorrhea. Gonadotropin-releasing hormone (GnRH) secretion from the hypothalamus acts on the pituitary to induce ovarian folliculogenesis and steroidogenesis. Ovarian secretion of estradiol and progesterone controls the shedding of the endometrium, resulting in menses, and, in combination with the inhibins, provides feedback regulation of the hypothalamus and pituitary to control secretion of FSH and LH. The prevalence of amenorrhea resulting from abnormalities at each level of the reproductive system (hypothalamus, pituitary, ovary, uterus, and outflow tract) varies depending on whether amenorrhea is primary or secondary. PCOS, polycystic ovarian syndrome.
Obstruction of the outflow tract requires surgical correction. The risk of endometriosis is increased with this condition, perhaps because of retrograde menstrual flow. Müllerian agenesis also may require surgical intervention to allow sexual intercourse, although vaginal dilatation is adequate in some patients. Because ovarian function is normal, assisted reproductive techniques can be used with a surrogate carrier. Androgen insensitivity syndrome requires gonadectomy because there is risk of gonadoblastoma in the dysgenetic gonads. Whether this should be performed in early childhood or after completion of breast development is controversial. Estrogen replacement is indicated after gonadectomy, and vaginal dilatation may be required to allow sexual intercourse.
Disorders of Ovulation Once uterus and outflow tract abnormalities have been excluded, other causes of amenorrhea involve disorders of ovulation. The differential diagnosis is based on the results of initial tests, including a pregnancy test, an FSH level (to determine whether the cause is likely to be ovarian or central), and assessment of hyperandrogenism (Fig. 69-2).
HYPOGONADOTROPIC HYPOGONADISM Low estrogen levels in combination with normal or low levels of LH and FSH are seen with anatomic, genetic, or functional abnormalities that interfere with hypothalamic GnRH secretion or normal pituitary responsiveness to GnRH. Although relatively uncommon, tumors and infiltrative diseases should be considered in the differential diagnosis of hypogonadotropic hypogonadism (Chap. 403). These disorders may present with primary or secondary amenorrhea. They may occur in association with other features suggestive of hypothalamic or pituitary dysfunction, such as short stature, diabetes insipidus, galactorrhea, and headache. Hypogonadotropic hypogonadism also may be seen after cranial irradiation. In the postpartum period, it may be caused by pituitary necrosis (Sheehan’s syndrome) or lymphocytic hypophysitis. Because reproductive dysfunction is commonly associated with hyperprolactinemia from neuroanatomic lesions or medications, prolactin should be measured in all patients with hypogonadotropic hypogonadism (Chap. 403). Isolated hypogonadotropic hypogonadism (IHH) occurs in women, although it is three times more common in men. IHH generally presents with primary amenorrhea, although 50% have some degree of breast development, and one to two menses have been described in ~10%. IHH is associated with anosmia in about 50% of women (termed Kallmann’s syndrome). Genetic causes of IHH have been identified in ~60% of patients (Chaps. 411 and 412). Functional hypothalamic amenorrhea (HA) is caused by a mismatch between energy expenditure and energy intake. Recent studies suggest that variants in genes associated with IHH may increase susceptibility to these environmental inputs, accounting in part for the clinical variability in this disorder. Leptin secretion may play a key role in transducing the signals from the periphery to the hypothalamus in HA. The diagnosis of HA generally can be made on the basis of a careful history, a physical examination, and the demonstration of low levels of gonadotropins and normal prolactin levels. Eating disorders and chronic disease must be specifically excluded. An atypical history, headache, signs of other hypothalamic dysfunction, or hyperprolactinemia, even if mild, necessitates cranial imaging with computed tomography (CT) or magnetic resonance imaging (MRI) to exclude a neuroanatomic cause.
HYPERGONADOTROPIC HYPOGONADISM Ovarian failure is considered premature when it occurs in women <40 years old and accounts for ~10% of secondary amenorrhea. Primary ovarian insufficiency (POI) has generally replaced the terms premature menopause and premature ovarian failure in recognition that this disorder represents a continuum of impaired ovarian function. Ovarian insufficiency is associated with the loss of negative-feedback restraint on the hypothalamus and pituitary, resulting in increased FSH and LH levels. FSH is a better marker of ovarian failure as its levels are less variable than those of LH. Antimüllerian hormone (AMH) levels will also be low in patients with POI, but are more frequently used in management of infertility. As with natural menopause, POI may wax and wane, and serial measurements may be necessary to establish the diagnosis. Once the diagnosis of POI has been established, further evaluation is indicated because of other health problems that may be associated with POI. For example, POI occurs in association with a variety of conditions, including chromosomal abnormalities such as Turner’s syndrome, autoimmune polyglandular failure syndromes, radio- and chemotherapy, and galactosemia.
The recognition that early ovarian failure occurs in premutation carriers of the fragile X syndrome is important because of the increased risk of severe mental retardation in male children with FMR1 mutations. In the majority of cases, a cause for POI is not determined. Although there are increasing reports of genetic mutations in individuals and families with POI, testing for other than chromosomal abnormalities and FMR1 mutations is not recommended.
FIGURE 69-2 Algorithm for evaluation of amenorrhea. β-hCG, human chorionic gonadotropin; FSH, follicle-stimulating hormone; GYN, gynecologist; MRI, magnetic resonance imaging; PRL, prolactin; R/O, rule out; TSH, thyroid-stimulating hormone. (In the algorithm, a pregnancy test and measurement of PRL, FSH, and TSH direct the workup. Increased PRL prompts MRI to exclude a neuroanatomic abnormality; FSH increased on two determinations indicates ovarian insufficiency; normal PRL and FSH with a negative trial of estrogen/progesterone points to an outflow-tract cause, such as Asherman’s syndrome after uterine instrumentation or, in primary amenorrhea, müllerian agenesis, cervical stenosis, vaginal septum, imperforate hymen, or androgen insensitivity syndrome, warranting GYN referral. Hyperandrogenism (increased testosterone, hirsutism, acne) prompts exclusion of a tumor and of 21-hydroxylase deficiency before a diagnosis of polycystic ovarian syndrome; otherwise, after drugs, thyroid dysfunction, eating disorders, and chronic disease are ruled out, hypothalamic amenorrhea or a neuroanatomic abnormality/idiopathic hypogonadotropic hypogonadism is considered.)
Hypergonadotropic hypogonadism occurs rarely in other disorders, such as mutations in the FSH or LH receptors. Aromatase deficiency and 17α-hydroxylase deficiency are associated with decreased estrogen and elevated gonadotropins and with hyperandrogenism and hypertension, respectively. Gonadotropin-secreting tumors in women of reproductive age generally present with high, rather than low, estrogen levels and cause ovarian hyperstimulation or dysfunctional bleeding.
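The initial laboratory triage described for Fig. 69-2 can be sketched as a few ordered checks. This is an illustrative outline only: the branch order and returned strings paraphrase the figure, the twofold-FSH criterion reflects the figure's "Increased (x2)" branch, and the function and parameter names are invented here.

```python
def triage_amenorrhea(hcg_positive: bool, prl_elevated: bool,
                      fsh_elevated_twice: bool) -> str:
    """Sketch of the initial branch points of Fig. 69-2.
    A positive pregnancy test ends the workup; hyperprolactinemia
    prompts cranial MRI; FSH elevated on two determinations points
    to ovarian insufficiency; normal PRL and FSH direct attention
    to hypothalamic or outflow-tract causes."""
    if hcg_positive:
        return "pregnancy"
    if prl_elevated:
        return "MRI to exclude neuroanatomic cause"
    if fsh_elevated_twice:
        return "ovarian insufficiency"
    return "normal PRL and FSH: assess hypothalamic vs outflow-tract cause"


print(triage_amenorrhea(False, False, True))  # -> ovarian insufficiency
```

Checking β-hCG first mirrors the text's instruction that pregnancy, the most common cause of amenorrhea, be excluded before any further evaluation.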
Amenorrhea almost always is associated with chronically low levels of estrogen, whether it is caused by hypogonadotropic hypogonadism or ovarian insufficiency. Development of secondary sexual characteristics requires gradual titration of estradiol replacement with eventual addition of progestin. Hormone replacement with either low-dose estrogen/progesterone regimens or oral contraceptive pills is recommended until the usual age of menopause for bone and cardiovascular protection. Patients with hypogonadotropic hypogonadism who are interested in fertility require treatment with exogenous FSH combined with LH or pulsatile GnRH. Patients with ovarian failure can consider oocyte donation, which has a high rate of success in this population, although its use in women with Turner’s syndrome is limited by significant maternal cardiovascular risk. POLYCYSTIC OVARIAN SYNDROME (PCOS) PCOS is diagnosed based on a combination of clinical or biochemical evidence of hyperandrogenism, amenorrhea or oligomenorrhea, and the ultrasound appearance of polycystic ovaries. Approximately half of patients with PCOS are obese, and abnormalities in insulin dynamics are common, as is metabolic syndrome. Symptoms generally begin shortly after menarche and are slowly progressive. Lean oligo-ovulatory patients with PCOS generally have high LH levels in the presence of normal to low levels of FSH and estradiol. The LH/FSH ratio is less pronounced in obese patients in whom insulin resistance is a more prominent feature. A major abnormality in patients with PCOS is the failure of regular, predictable ovulation. Thus, these patients are at risk for the development of dysfunctional bleeding and endometrial hyperplasia associated with unopposed estrogen exposure. Endometrial protection can be achieved with the use of oral contraceptives or progestins (medroxyprogesterone acetate, 5–10 mg, or Prometrium, 200 mg, daily for 10–14 days of each month).
Oral contraceptives are also useful for management of hyperandrogenic symptoms, as are spironolactone and cyproterone acetate (not available in the United States), which function as weak androgen receptor blockers. Management of the associated metabolic syndrome may be appropriate for some patients (Chap. 422). For patients interested in fertility, weight control is a critical first step. Clomiphene citrate is highly effective as a first-line treatment, and there is increasing evidence that the aromatase inhibitor letrozole may also be effective. Exogenous gonadotropins can be used by experienced practitioners; a diagnosis of polycystic ovaries in the presence or absence of cycle abnormalities increases the risk of hyperstimulation. The mechanisms that cause pelvic pain are similar to those that cause abdominal pain (Chap. 20) and include inflammation of the parietal peritoneum, obstruction of hollow viscera, vascular disturbances, and pain originating in the abdominal wall. Pelvic pain may reflect pelvic disease per se but also may reflect extrapelvic disorders that refer pain to the pelvis. In up to 60% of cases, pelvic pain can be attributed to gastrointestinal problems, including appendicitis, cholecystitis, infections, intestinal obstruction, diverticulitis, and inflammatory bowel disease. Urinary tract and musculoskeletal disorders are also common causes of pelvic pain.
APPROACH TO THE PATIENT: Pelvic Pain
As with all types of abdominal pain, the first priority is to identify life-threatening conditions (shock, peritoneal signs) that may require emergent surgical management. The possibility of pregnancy should be identified as soon as possible by menstrual history and/or testing. A thorough history that includes the type, location, radiation, and status with respect to increasing or decreasing severity can help identify the cause of acute pelvic pain.
Specific associations with vaginal bleeding, sexual activity, defecation, urination, movement, or eating should be specifically sought. Determination of whether the pain is acute versus chronic and cyclic versus noncyclic will direct further investigation (Table 69-1). However, disorders that cause cyclic pain occasionally may cause noncyclic pain, and the converse is also true. Pelvic inflammatory disease most commonly presents with bilateral lower abdominal pain. It is generally of recent onset and is exacerbated by intercourse or jarring movements. Fever is present in about half of these patients; abnormal uterine bleeding occurs in about one-third. New vaginal discharge, urethritis, and chills may be present but are less specific signs. Adnexal pathology can present acutely and may be due to rupture, bleeding or torsion of cysts, or, much less commonly, neoplasms of the ovary, fallopian tubes, or paraovarian areas. Fever may be present with ovarian torsion. Ectopic pregnancy is associated with right- or left-sided lower abdominal pain, with clinical signs generally appearing 6–8 weeks after the last normal menstrual period. Amenorrhea is present in ~75% of cases and vaginal bleeding in ~50% of cases. Orthostatic signs and fever may be present. Risk factors include the presence of known tubal disease, previous ectopic pregnancies, a history of infertility, diethylstilbestrol (DES) exposure of the mother in utero, or a history of pelvic infections. Threatened abortion may also present with amenorrhea, abdominal pain, and vaginal bleeding. Although more common than ectopic pregnancy, it is rarely associated with systemic signs. Uterine pathology includes endometritis and, less frequently, degenerating leiomyomas (fibroids). Endometritis often is associated with vaginal bleeding and systemic signs of infection. It occurs in the setting of sexually transmitted infections, uterine instrumentation, or postpartum infection.
A sensitive pregnancy test, complete blood count with differential, urinalysis, tests for chlamydial and gonococcal infections, and abdominal ultrasound aid in making the diagnosis and directing further management. Treatment of acute pelvic pain depends on the suspected etiology but may require surgical or gynecologic intervention. Conservative management is an important consideration for ovarian cysts, if torsion is not suspected, to avoid unnecessary pelvic surgery and the subsequent risk of infertility due to adhesions. Surgical treatment may be required for ectopic pregnancies; however, approximately 35% of ectopic pregnancies are unruptured and may be appropriate for treatment with methotrexate, which is effective in ~90% of cases. Some women experience discomfort at the time of ovulation (mittelschmerz). The pain can be quite intense but is generally of short duration. The mechanism is thought to involve rapid expansion of the dominant follicle, although it also may be caused by peritoneal irritation by follicular fluid released at the time of ovulation. Many women experience premenstrual symptoms such as breast discomfort, food cravings, and abdominal bloating or discomfort. These moliminal symptoms are a good marker of prior ovulation, although their absence is less helpful. Dysmenorrhea Dysmenorrhea refers to the crampy lower abdominal midline discomfort that begins with the onset of menstrual bleeding and gradually decreases over the next 12–72 h. It may be associated with nausea, diarrhea, fatigue, and headache and occurs in 60–93% of adolescents, beginning with the establishment of regular ovulatory cycles. Its prevalence decreases after pregnancy and with the use of oral contraceptives. Primary dysmenorrhea results from increased stores of prostaglandin precursors, which are generated by sequential stimulation of the uterus by estrogen and progesterone. 
During menstruation, these precursors are converted to prostaglandins, which cause intense uterine contractions, decreased blood flow, and increased peripheral nerve hypersensitivity, resulting in pain. Secondary dysmenorrhea is caused by underlying pelvic pathology. Endometriosis results from the presence of endometrial glands and stroma outside the uterus. These deposits of ectopic endometrium respond to hormonal stimulation and cause dysmenorrhea, which begins several days before menses. Endometriosis also may be associated with painful intercourse, painful bowel movements, and tender nodules in the uterosacral ligament. Fibrosis and adhesions can produce lateral displacement of the cervix. Transvaginal pelvic ultrasound is part of the initial workup and may detect an endometrioma within the ovary, rectovaginal or bladder nodules, or ureteral involvement. The CA125 level may be increased, but it has low negative predictive value. Definitive diagnosis requires laparoscopy. Symptomatology does not always predict the extent of endometriosis. The prevalence is lower in black and Hispanic women than in Caucasians and Asians. Other secondary causes of dysmenorrhea include adenomyosis, a condition caused by the presence of ectopic endometrial glands and stroma within the myometrium. Cervical stenosis may result from trauma, infection, or surgery. Local application of heat; dietary dairy intake; use of vitamins B1, B6, and E and fish oil; acupuncture; yoga; and exercise are of some benefit for the treatment of dysmenorrhea. Studies of vitamin D3 are not yet adequate to provide a recommendation. However, nonsteroidal anti-inflammatory drugs (NSAIDs) are the most effective treatment and provide >80% sustained response rates. Ibuprofen, naproxen, ketoprofen, mefenamic acid, and nimesulide are all superior to placebo. Treatment should be started a day before expected menses and generally is continued for 2–3 days. Oral contraceptives also reduce symptoms of dysmenorrhea. 
The use of tocolytics, antiphosphodiesterase inhibitors, and magnesium has been suggested, but there are insufficient data to recommend them. Failure of response to NSAIDs and/or oral contraceptives is suggestive of a pelvic disorder such as endometriosis, and diagnostic laparoscopy should be considered to guide further treatment.

SECTION 9: Alterations in the Skin

70 Approach to the Patient with a Skin Disorder
Thomas J. Lawley, Kim B. Yancey

The challenge of examining the skin lies in distinguishing normal from abnormal findings, distinguishing significant findings from trivial ones, and integrating pertinent signs and symptoms into an appropriate differential diagnosis. The fact that the largest organ in the body is visible is both an advantage and a disadvantage to those who examine it. It is advantageous because no special instrumentation is necessary and because the skin can be biopsied with little morbidity. However, the casual observer can be misled by a variety of stimuli and overlook important, subtle signs of skin or systemic disease. For instance, the sometimes minor differences in color and shape that distinguish a melanoma (Fig. 70-1) from a benign nevomelanocytic nevus (Fig. 70-2) can be difficult to recognize. A variety of descriptive terms have been developed to characterize cutaneous lesions (Tables 70-1, 70-2, and 70-3; Fig. 70-3), thereby aiding in their interpretation and in the formulation of a differential diagnosis (Table 70-4). For example, the finding of scaling papules, which are present in psoriasis or atopic dermatitis, places the patient in a different diagnostic category than would hemorrhagic papules, which may indicate vasculitis or sepsis (Figs. 70-4 and 70-5, respectively). It is also important to differentiate primary from secondary skin lesions. If the examiner focuses on linear erosions overlying an area of erythema and scaling, he or she may incorrectly assume that the erosion is the primary lesion and that the redness and scale are secondary, whereas the correct interpretation would be that the patient has a pruritic eczematous dermatitis with erosions caused by scratching.

Figure 70-1 Superficial spreading melanoma. This is the most common type of melanoma. Such lesions usually demonstrate asymmetry, border irregularity, color variegation (black, blue, brown, pink, and white), a diameter >6 mm, and a history of change (e.g., an increase in size or development of associated symptoms such as pruritus or pain).

Figure 70-2 Nevomelanocytic nevus. Nevi are benign proliferations of nevomelanocytes characterized by regularly shaped hyperpigmented macules or papules of a uniform color.

Macule: A flat, colored lesion, <2 cm in diameter, not raised above the surface of the surrounding skin. A “freckle,” or ephelid, is a prototypical pigmented macule.
Patch: A large (>2-cm) flat lesion with a color different from the surrounding skin. This differs from a macule only in size.
Papule: A small, solid lesion, <0.5 cm in diameter, raised above the surface of the surrounding skin and thus palpable (e.g., a closed comedone, or whitehead, in acne).
Nodule: A larger (0.5- to 5.0-cm), firm lesion raised above the surface of the surrounding skin. This differs from a papule only in size (e.g., a large dermal nevomelanocytic nevus).
Tumor: A solid, raised growth >5 cm in diameter.
Plaque: A large (>1-cm), flat-topped, raised lesion; edges may either be distinct (e.g., in psoriasis) or gradually blend with surrounding skin (e.g., in eczematous dermatitis).
Vesicle: A small, fluid-filled lesion, <0.5 cm in diameter, raised above the plane of surrounding skin. Fluid is often visible, and the lesions are translucent (e.g., vesicles in allergic contact dermatitis caused by Toxicodendron [poison ivy]).
Pustule: A vesicle filled with leukocytes. 
Note: The presence of pustules does not necessarily signify the existence of an infection.
Bulla: A fluid-filled, raised, often translucent lesion >0.5 cm in diameter.
Wheal: A raised, erythematous, edematous papule or plaque, usually representing short-lived vasodilation and vasopermeability.
Telangiectasia: A dilated, superficial blood vessel.
Lichenification: A distinctive thickening of the skin that is characterized by accentuated skin-fold markings.
Scale: Excessive accumulation of stratum corneum.
Crust: Dried exudate of body fluids that may be either yellow (i.e., serous crust) or red (i.e., hemorrhagic crust).
Erosion: Loss of epidermis without an associated loss of dermis.
Ulcer: Loss of epidermis and at least a portion of the underlying dermis.
Excoriation: Linear, angular erosions that may be covered by crust and are caused by scratching.
Atrophy: An acquired loss of substance. In the skin, this may appear as a depression with intact epidermis (i.e., loss of dermal or subcutaneous tissue) or as sites of shiny, delicate, wrinkled lesions (i.e., epidermal atrophy).
Scar: A change in the skin secondary to trauma or inflammation. Sites may be erythematous, hypopigmented, or hyperpigmented depending on their age or character. Sites on hair-bearing areas may be characterized by destruction of hair follicles.
Alopecia: Hair loss, partial or complete.
Annular: Ring-shaped.
Cyst: A soft, raised, encapsulated lesion filled with semisolid or liquid contents.
Herpetiform: In a grouped configuration.
Lichenoid eruption: Violaceous to purple, polygonal lesions that resemble those seen in lichen planus.
Milia: Small, firm, white papules filled with keratin.
Morbilliform rash: Generalized, small erythematous macules and/or papules that resemble lesions seen in measles.
Nummular: Coin-shaped.
Poikiloderma: Skin that displays variegated pigmentation, atrophy, and telangiectases. 
Polycyclic lesions: A configuration of skin lesions formed from coalescing rings or incomplete rings.
Pruritus: A sensation that elicits the desire to scratch. Pruritus is often the predominant symptom of inflammatory skin diseases (e.g., atopic dermatitis, allergic contact dermatitis); it is also commonly associated with xerosis and aged skin. Systemic conditions that can be associated with pruritus include chronic renal disease, cholestasis, pregnancy, malignancy, thyroid disease, polycythemia vera, and delusions of parasitosis.

PART 2: Cardinal Manifestations and Presentation of Diseases

APPROACH TO THE PATIENT:
In examining the skin, it is usually advisable to assess the patient before taking an extensive history. This approach ensures that the entire cutaneous surface will be evaluated, and objective findings can be integrated with relevant historical data. Four basic features of a skin lesion must be noted and considered during a physical examination: the distribution of the eruption, the types of primary and secondary lesions, the shape of individual lesions, and the arrangement of the lesions. An ideal skin examination includes evaluation of the skin, hair, and nails as well as the mucous membranes of the mouth, eyes, nose, nasopharynx, and anogenital region. In the initial examination, it is important that the patient be disrobed as completely as possible to minimize the chances of missing important individual skin lesions and to permit accurate assessment of the distribution of the eruption. The patient should first be viewed from a distance of about 1.5–2 m (4–6 ft) so that the general character of the skin and the distribution of lesions can be evaluated. Indeed, the distribution of lesions often correlates highly with diagnosis (Fig. 70-6). For example, a hospitalized patient with a generalized erythematous exanthem is more likely to have a drug eruption than is a patient with a similar rash limited to the sun-exposed portions of the face. 
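The lesion definitions earlier in the chapter turn on a handful of size cutoffs: flat lesions split at 2 cm into macule versus patch, raised solid lesions at 0.5 and 5 cm into papule, nodule, and tumor, and fluid-filled lesions at 0.5 cm into vesicle versus bulla. A minimal sketch of those cutoffs for illustration only; the function and parameter names are ours, not the text's:

```python
def classify_flat_lesion(diameter_cm):
    """Flat lesions: macule (<2 cm) vs. patch (>2 cm), per the text's cutoffs."""
    return "macule" if diameter_cm < 2.0 else "patch"

def classify_raised_lesion(diameter_cm, fluid_filled=False):
    """Raised lesions by the text's size cutoffs.

    Solid: papule (<0.5 cm), nodule (0.5- to 5.0-cm), tumor (>5 cm).
    Fluid-filled: vesicle (<0.5 cm), bulla (>0.5 cm).
    Exact boundary values are assigned to the larger category here;
    the text does not specify how to break ties."""
    if fluid_filled:
        return "vesicle" if diameter_cm < 0.5 else "bulla"
    if diameter_cm < 0.5:
        return "papule"
    return "nodule" if diameter_cm <= 5.0 else "tumor"
```

Clinical assessment, of course, also weighs consistency, depth, and contents (e.g., a pustule is a vesicle filled with leukocytes), which no size rule captures.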
Once the distribution of the lesions has been established, the nature of the primary lesion must be determined.

Figure 70-3 A schematic representation of several common primary skin lesions (see Table 70-1).

Thus, when lesions are distributed on elbows, knees, and scalp, the most likely possibility based solely on distribution is psoriasis or dermatitis herpetiformis (Figs. 70-7 and 70-8, respectively). The primary lesion in psoriasis is a scaly papule that soon forms erythematous plaques covered with a white scale, whereas that of dermatitis herpetiformis is an urticarial papule that quickly becomes a small vesicle. In this manner, identification of the primary lesion directs the examiner toward the proper diagnosis. Secondary changes in skin can also be quite helpful. For example, scale represents excessive epidermis, while crust is the result of a discontinuous epithelial cell layer. Palpation of skin lesions can yield insight into the character of an eruption. Thus, red papules on the lower extremities that blanch with pressure can be a manifestation of many different diseases, but hemorrhagic red papules that do not blanch with pressure indicate palpable purpura characteristic of necrotizing vasculitis (Fig. 70-4). The shape of lesions is also an important feature. Flat, round, erythematous papules and plaques are common in many cutaneous diseases. However, target-shaped lesions that consist in part of erythematous plaques are specific for erythema multiforme (Fig. 70-9). Likewise, the arrangement of individual lesions is important. Erythematous papules and vesicles can occur in many conditions, but their arrangement in a specific linear array suggests an external etiology such as allergic contact dermatitis (Fig. 70-10) or primary irritant dermatitis. In contrast, lesions with a generalized arrangement are common and suggest a systemic etiology. As in other branches of medicine, a complete history should be obtained to emphasize the following features: 1. 
Evolution of lesions a. Site of onset b. Manner in which the eruption progressed or spread c. d. Periods of resolution or improvement in chronic eruptions 2. Symptoms associated with the eruption a. Itching, burning, pain, numbness b. What, if anything, has relieved symptoms c. Time of day when symptoms are most severe 3. Current or recent medications (prescribed as well as over-the-counter) 4. Associated systemic symptoms (e.g., malaise, fever, arthralgias) 5. 6. History of allergies 7. Presence of photosensitivity 8. Review of systems 9. Family history (particularly relevant for patients with melanoma, atopy, psoriasis, or acne) 10. Social, sexual, or travel history

Many skin diseases can be diagnosed on the basis of gross clinical appearance, but sometimes relatively simple diagnostic procedures can yield valuable information. In most instances, they can be performed at the bedside with a minimum of equipment.

Skin Biopsy A skin biopsy is a straightforward minor surgical procedure; however, it is important to biopsy a lesion that is most likely to yield diagnostic findings. This decision may require expertise in skin diseases and knowledge of superficial anatomic structures in selected areas of the body. In this procedure, a small area of skin is anesthetized with 1% lidocaine with or without epinephrine. The skin lesion in question can be excised or saucerized with a scalpel or removed by punch biopsy. In the latter technique, a punch is pressed against the surface of the skin and rotated with downward pressure until it penetrates to the subcutaneous tissue. The circular biopsy is then lifted with forceps, and the bottom is cut with iris scissors. Biopsy sites may or may not need suture closure, depending on size and location.

KOH Preparation A potassium hydroxide (KOH) preparation is performed on scaling skin lesions where a fungal infection is suspected. The edge of such a lesion is scraped gently with a no. 15 scalpel blade. The removed scale is collected on a glass microscope slide and then treated with 1 or 2 drops of a solution of 10–20% KOH. KOH dissolves keratin and allows easier visualization of fungal elements. Brief heating of the slide accelerates dissolution of keratin. When the preparation is viewed under the microscope, the refractile hyphae are seen more easily when the light intensity is reduced and the condenser is lowered. This technique can be used to identify hyphae in dermatophyte infections, pseudohyphae and budding yeasts in Candida infections, and “spaghetti and meatballs” yeast forms in tinea versicolor. The same sampling technique can be used to obtain scale for culture of selected pathogenic organisms.

Tzanck Smear A Tzanck smear is a cytologic technique most often used in the diagnosis of herpesvirus infections (herpes simplex virus [HSV] or varicella zoster virus [VZV]) (see Figs. 217-1 and 217-3). An early vesicle, not a pustule or crusted lesion, is unroofed, and the base of the lesion is scraped gently with a scalpel blade. The material is placed on a glass slide, air-dried, and stained with Giemsa or Wright’s stain. Multinucleated epithelial giant cells suggest the presence of HSV or VZV; culture, immunofluorescence microscopy, or genetic testing must be performed to identify the specific virus.

Figure 70-4 Necrotizing vasculitis. Palpable purpuric papules on the lower legs are seen in this patient with cutaneous small-vessel vasculitis. (Courtesy of Robert Swerlick, MD; with permission.)

Figure 70-5 Meningococcemia. An example of fulminant meningococcemia with extensive angular purpuric patches. (Courtesy of Stephen E. Gellis, MD; with permission.)

Diascopy Diascopy is designed to assess whether a skin lesion will blanch with pressure as, for example, in determining whether a red lesion is hemorrhagic or simply blood-filled. Urticaria (Fig. 
70-11) will blanch with pressure, whereas a purpuric lesion caused by necrotizing vasculitis (Fig. 70-4) will not. Diascopy is performed by pressing a microscope slide or magnifying lens against a lesion and noting the amount of blanching that occurs. Granulomas often have an opaque to transparent, brown-pink “apple jelly” appearance on diascopy.

Wood’s Light A Wood’s lamp generates 360-nm ultraviolet (“black”) light that can be used to aid the evaluation of certain skin disorders. For example, a Wood’s lamp will cause erythrasma (a superficial, intertriginous infection caused by Corynebacterium minutissimum) to show a characteristic coral pink color, and wounds colonized by Pseudomonas will appear pale blue. Tinea capitis caused by certain dermatophytes (e.g., Microsporum canis or M. audouinii) exhibits a yellow fluorescence. Pigmented lesions of the epidermis such as freckles are accentuated, while dermal pigment such as postinflammatory hyperpigmentation fades under a Wood’s light. Vitiligo (Fig. 70-12) appears totally white under a Wood’s lamp, and previously unsuspected areas of involvement often become apparent. A Wood’s lamp may also aid in the demonstration of tinea versicolor and in recognition of ash leaf spots in patients with tuberous sclerosis.

Figure 70-6 Distribution of some common dermatologic diseases and lesions.

Figure 70-7 Psoriasis. This papulosquamous skin disease is characterized by small and large erythematous papules and plaques with overlying adherent silvery scale.

Figure 70-8 Dermatitis herpetiformis. This disorder typically displays pruritic, grouped papulovesicles on elbows, knees, buttocks, and posterior scalp. Vesicles are often excoriated due to associated pruritus.

Figure 70-9 Erythema multiforme. This eruption is characterized by multiple erythematous plaques with a target or iris morphology. It usually represents a hypersensitivity reaction to drugs (e.g., sulfonamides) or infections (e.g., HSV). (Courtesy of the Yale Resident’s Slide Collection; with permission.)

Figure 70-10 Allergic contact dermatitis (ACD). A. An example of ACD in its acute phase, with sharply demarcated, weeping, eczematous plaques in a perioral distribution. B. ACD in its chronic phase, with an erythematous, lichenified, weeping plaque on skin chronically exposed to nickel in a metal snap. (B, Courtesy of Robert Swerlick, MD; with permission.)

Figure 70-11 Urticaria. Discrete and confluent, edematous, erythematous papules and plaques are characteristic of this whealing eruption.

Figure 70-12 Vitiligo. Characteristic lesions display an acral distribution and striking depigmentation as a result of loss of melanocytes.

Patch Tests Patch testing is designed to document sensitivity to a specific antigen. In this procedure, a battery of suspected allergens is applied to the patient’s back under occlusive dressings and allowed to remain in contact with the skin for 48 h. The dressings are removed, and the area is examined for evidence of delayed hypersensitivity reactions (e.g., erythema, edema, or papulovesicles). This test is best performed by physicians with special expertise in patch testing and is often helpful in the evaluation of patients with chronic dermatitis.

71 Eczema, Psoriasis, Cutaneous Infections, Acne, and Other Common Skin Disorders
Leslie P. Lawley, Calvin O. McCall, Thomas J. Lawley

ECZEMA AND DERMATITIS
Eczema is a type of dermatitis, and these terms are often used synonymously (e.g., atopic eczema or atopic dermatitis [AD]). Eczema is a reaction pattern that presents with variable clinical findings and the common histologic finding of spongiosis (intercellular edema of the epidermis). Eczema is the final common expression for a number of disorders, including those discussed in the following sections. 
Primary lesions may include erythematous macules, papules, and vesicles, which can coalesce to form patches and plaques. In severe eczema, secondary lesions from infection or excoriation, marked by weeping and crusting, may predominate. In chronic eczematous conditions, lichenification (cutaneous hypertrophy and accentuation of normal skin markings) may alter the characteristic appearance of eczema. AD is the cutaneous expression of the atopic state, characterized by a family history of asthma, allergic rhinitis, or eczema. The prevalence of AD is increasing worldwide. Some of its features are shown in Table 71-1.

Table 71-1 Clinical Features of Atopic Dermatitis
1.
2.
3. Lesions typical of eczematous dermatitis
4. Personal or family history of atopy (asthma, allergic rhinitis, food allergies, or eczema)
5.
6. Lichenification of skin

The etiology of AD is only partially defined, but there is a clear genetic predisposition. When both parents are affected by AD, >80% of their children manifest the disease. When only one parent is affected, the prevalence drops to slightly over 50%. A characteristic defect in AD that contributes to the pathophysiology is an impaired epidermal barrier. In many patients, a mutation in the gene encoding filaggrin, a structural protein in the stratum corneum, is responsible. Patients with AD may display a variety of immunoregulatory abnormalities, including increased IgE synthesis; increased serum IgE levels; and impaired, delayed-type hypersensitivity reactions. The clinical presentation often varies with age. Half of patients with AD present within the first year of life, and 80% present by 5 years of age. About 80% ultimately coexpress allergic rhinitis or asthma. The infantile pattern is characterized by weeping inflammatory patches and crusted plaques on the face, neck, and extensor surfaces. 
The childhood and adolescent pattern is typified by dermatitis of flexural skin, particularly in the antecubital and popliteal fossae (Fig. 71-1). AD may resolve spontaneously, but approximately 40% of all individuals affected as children will have dermatitis in adult life. The distribution of lesions in adults may be similar to that seen in childhood; however, adults frequently have localized disease manifesting as lichen simplex chronicus or hand eczema (see below). In patients with localized disease, AD may be suspected because of a typical personal or family history or the presence of cutaneous stigmata of AD such as perioral pallor, an extra fold of skin beneath the lower eyelid (Dennie-Morgan folds), increased palmar skin markings, and an increased incidence of cutaneous infections, particularly with Staphylococcus aureus. Regardless of other manifestations, pruritus is a prominent characteristic of AD in all age groups and is exacerbated by dry skin. Many of the cutaneous findings in affected patients, such as lichenification, are secondary to rubbing and scratching. Therapy for AD should include avoidance of cutaneous irritants, adequate moisturizing through the application of emollients, judicious use of topical anti-inflammatory agents, and prompt treatment of secondary infection. Patients should be instructed to bathe no more often than daily, using warm or cool water, and to use only mild bath soap. Immediately after bathing, while the skin is still moist, a topical anti-inflammatory agent in a cream or ointment base should be applied to areas of dermatitis, and all other skin areas should be lubricated with a moisturizer. Approximately 30 g of a topical agent is required to cover the entire body surface of an average adult.

Figure 71-1 Atopic dermatitis. Hyperpigmentation, lichenification, and scaling in the antecubital fossae are seen in this patient with atopic dermatitis. (Courtesy of Robert Swerlick, MD; with permission.) 
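The figure of roughly 30 g to cover the entire body surface of an average adult makes it straightforward to estimate how much topical agent a treatment course will consume. A minimal arithmetic sketch, in which the helper name, the body-fraction scaling, and the example regimen are our illustrative assumptions rather than recommendations from the text:

```python
# ~30 g per whole-body application in an average adult (per the text).
GRAMS_PER_APPLICATION = 30

def grams_needed(applications_per_day, days, body_fraction=1.0):
    """Estimate total grams of a topical agent for a course.

    `body_fraction` scales the whole-body figure down for
    partial-body involvement -- an assumption for illustration;
    the text gives only the whole-body number."""
    return GRAMS_PER_APPLICATION * body_fraction * applications_per_day * days

# E.g., a hypothetical twice-daily, 14-day whole-body course:
total_g = grams_needed(2, 14)  # 30 g x 2/day x 14 days = 840 g
```

Estimates like this are mainly useful for judging whether the quantity dispensed is realistic for the prescribed regimen.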
Low- to mid-potency topical glucocorticoids are employed in most treatment regimens for AD. Skin atrophy and the potential for systemic absorption are constant concerns, especially with more potent agents. Low-potency topical glucocorticoids or nonglucocorticoid anti-inflammatory agents should be selected for use on the face and in intertriginous areas to minimize the risk of skin atrophy. Two nonglucocorticoid anti-inflammatory agents are available: tacrolimus ointment and pimecrolimus cream. These agents are macrolide immunosuppressants that are approved by the U.S. Food and Drug Administration (FDA) for topical use in AD. Reports of broader effectiveness appear in the literature. These agents do not cause skin atrophy, nor do they suppress the hypothalamic-pituitary-adrenal axis. However, concerns have emerged regarding the potential for lymphomas in patients treated with these agents. Thus, caution should be exercised when these agents are considered. Currently, they are also more costly than topical glucocorticoids. Barrier-repair products that attempt to restore the impaired epidermal barrier are also nonglucocorticoid agents and are gaining popularity in the treatment of AD. Secondary infection of eczematous skin may lead to exacerbation of AD. Crusted and weeping skin lesions may be infected with S. aureus. When secondary infection is suspected, eczematous lesions should be cultured and patients treated with systemic antibiotics active against S. aureus. The initial use of penicillinase-resistant penicillins or cephalosporins is preferable. Dicloxacillin or cephalexin (250 mg qid for 7–10 days) is generally adequate for adults; however, antibiotic selection must be directed by culture results and clinical response. More than 50% of S. aureus isolates are now methicillin resistant in some communities. Current recommendations for the treatment of infection with these community-acquired methicillin-resistant S. 
aureus (CA-MRSA) strains in adults include trimethoprim-sulfamethoxazole (1 double-strength tablet bid), minocycline (100 mg bid), doxycycline (100 mg bid), or clindamycin (300–450 mg qid). Duration of therapy should be 7–10 days. Inducible resistance may limit clindamycin’s usefulness. Such resistance can be detected by the double-disk diffusion test, which should be ordered if the isolate is erythromycin resistant and clindamycin sensitive. As an adjunct, antibacterial washes or dilute sodium hypochlorite baths (0.005% bleach) and intermittent nasal mupirocin may be useful. Control of pruritus is essential for treatment, because AD often represents “an itch that rashes.” Antihistamines are most often used to control pruritus. Diphenhydramine (25 mg every 4–6 h), hydroxyzine (10–25 mg every 6 h), or doxepin (10–25 mg at bedtime) are useful primarily due to their sedating action. Higher doses of these agents may be required, but sedation can become bothersome. Patients need to be counseled about driving or operating heavy equipment after taking these medications. When used at bedtime, sedating antihistamines may improve the patient’s sleep. Although they are effective in urticaria, non-sedating antihistamines and selective H2 blockers are of little use in controlling the pruritus of AD. Treatment with systemic glucocorticoids should be limited to severe exacerbations unresponsive to topical therapy. In the patient with chronic AD, therapy with systemic glucocorticoids will generally clear the skin only briefly, and cessation of the systemic therapy will invariably be accompanied by a return, if not a worsening, of the dermatitis. Patients who do not respond to conventional therapies should be considered for patch testing to rule out allergic contact dermatitis (ACD). 
The role of dietary allergens in AD is controversial, and there is little evidence that they play any role outside of infancy, during which a small percentage of patients with AD may be affected by food allergens. Lichen simplex chronicus may represent the end stage of a variety of pruritic and eczematous disorders, including AD. It consists of a circumscribed plaque or plaques of lichenified skin due to chronic scratching or rubbing. Common areas involved include the posterior nuchal region, dorsum of the feet, and ankles. Treatment of lichen simplex chronicus centers on breaking the cycle of chronic itching and scratching. High-potency topical glucocorticoids are helpful in most cases, but, in recalcitrant cases, application of topical glucocorticoids under occlusion or intralesional injection of glucocorticoids may be required. Contact dermatitis is an inflammatory skin process caused by an exogenous agent or agents that directly or indirectly injure the skin. In irritant contact dermatitis (ICD), this injury is caused by an inherent characteristic of a compound—for example, a concentrated acid or base. Agents that cause ACD induce an antigen-specific immune response (e.g., poison ivy dermatitis). The clinical lesions of contact dermatitis may be acute (wet and edematous) or chronic (dry, thickened, and scaly), depending on the persistence of the insult (see Fig. 70-10). Irritant Contact Dermatitis ICD is generally well demarcated and often localized to areas of thin skin (eyelids, intertriginous areas) or to areas where the irritant was occluded. Lesions may range from minimal skin erythema to areas of marked edema, vesicles, and ulcers. Prior exposure to the offending agent is not necessary, and the reaction develops in minutes to a few hours. Chronic low-grade irritant dermatitis is the most common type of ICD, and the most common area of involvement is the hands (see below). 
The most common irritants encountered are chronic wet work, soaps, and detergents. Treatment should be directed toward the avoidance of irritants and the use of protective gloves or clothing. Allergic Contact Dermatitis ACD is a manifestation of delayed-type hypersensitivity mediated by memory T lymphocytes in the skin. Prior exposure to the offending agent is necessary to develop the hypersensitivity reaction, which may take as little as 12 h or as much as 72 h to develop. The most common cause of ACD is exposure to plants, especially to members of the family Anacardiaceae, including the genus Toxicodendron. Poison ivy, poison oak, and poison sumac are members of this genus and cause an allergic reaction marked by erythema, vesiculation, and severe pruritus. The eruption is often linear or angular, corresponding to areas where plants have touched the skin. The sensitizing antigen common to these plants is urushiol, an oleoresin containing the active ingredient pentadecylcatechol. The oleoresin may adhere to skin, clothing, tools, and pets, and contaminated articles may cause dermatitis even after prolonged storage. Blister fluid does not contain urushiol and is not capable of inducing skin eruption in exposed subjects. If contact dermatitis is suspected and an offending agent is identified and removed, the eruption will resolve. Usually, treatment with high-potency topical glucocorticoids is enough to relieve symptoms while the dermatitis runs its course. For those patients who require systemic therapy, daily oral prednisone—beginning at 1 mg/kg, but usually ≤60 mg/d—is sufficient. The dose should be tapered over 2–3 weeks, and each daily dose should be taken in the morning with food. Identification of a contact allergen can be a difficult and time-consuming task. Allergic contact dermatitis should be suspected in patients with dermatitis unresponsive to conventional therapy or with an unusual and patterned distribution. 
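The oral prednisone course described above (starting at 1 mg/kg but usually ≤60 mg/d, tapered over 2–3 weeks) involves only simple arithmetic. A minimal sketch, assuming a linear daily taper, which the text does not specify; the function name and taper shape are illustrative, and this is arithmetic, not dosing guidance:

```python
def prednisone_taper(weight_kg, days=21, cap_mg=60.0):
    """Illustrative arithmetic only -- not dosing guidance.

    Start at 1 mg/kg per day, capped at `cap_mg` (the text's usual
    ceiling of 60 mg/d), and step the morning dose down by an equal
    amount each day so it reaches zero after `days` days. The linear
    shape of the taper is an assumption made for illustration."""
    start = min(float(weight_kg), cap_mg)  # 1 mg/kg, capped
    step = start / days
    return [round(start - step * i, 1) for i in range(days)]

# A 70-kg adult: 1 mg/kg would be 70 mg, so the day-1 dose is
# capped at 60 mg and tapers over a 3-week (21-day) course.
schedule = prednisone_taper(70)
```

The text also notes each daily dose should be taken in the morning with food, which no schedule arithmetic captures.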
Patients should be questioned carefully regarding occupational exposures and topical medications. Common sensitizers include preservatives in topical preparations, nickel sulfate, potassium dichromate, thimerosal, neomycin sulfate, fragrances, formaldehyde, and rubber-curing agents. Patch testing is helpful in identifying these agents but should not be attempted when patients have widespread active dermatitis or are taking systemic glucocorticoids.

Figure 71-2 Dyshidrotic eczema. This example is characterized by deep-seated vesicles and scaling on palms and lateral fingers, and the disease is often associated with an atopic diathesis.

Hand eczema is a very common, chronic skin disorder in which both exogenous and endogenous factors play important roles. It may be associated with other cutaneous disorders such as AD, and contact with various agents may be involved. Hand eczema represents a large proportion of cases of occupation-associated skin disease. Chronic, excessive exposure to water and detergents, harsh chemicals, or allergens may initiate or aggravate this disorder. It may present with dryness and cracking of the skin of the hands as well as with variable amounts of erythema and edema. Often, the dermatitis will begin under rings, where water and irritants are trapped. Dyshidrotic eczema, a variant of hand eczema, presents with multiple, intensely pruritic, small papules and vesicles on the thenar and hypothenar eminences and the sides of the fingers (Fig. 71-2). Lesions tend to occur in crops that slowly form crusts and then heal. The evaluation of a patient with hand eczema should include an assessment of potential occupation-associated exposures. The history should be directed to identifying possible irritant or allergen exposures. 
Therapy for hand eczema is directed toward avoidance of irritants, identification of possible contact allergens, treatment of coexistent infection, and application of topical glucocorticoids. Whenever possible, the hands should be protected by gloves, preferably vinyl. The use of rubber gloves (latex) to protect dermatitic skin is sometimes associated with the development of hypersensitivity reactions to components of the gloves. Patients can be treated with cool moist compresses followed by application of a mid- to high-potency topical glucocorticoid in a cream or ointment base. As in AD, treatment of secondary infection is essential for good control. In addition, patients with hand eczema should be examined for dermatophyte infection by KOH preparation and culture (see below). Nummular eczema is characterized by circular or oval “coinlike” lesions, beginning as small edematous papules that become crusted and scaly. The etiology of nummular eczema is unknown, but dry skin is a contributing factor. Common locations are the trunk or the extensor surfaces of the extremities, particularly on the pretibial areas or dorsum of the hands. Nummular eczema occurs more frequently in men and is most common in middle age. The treatment of nummular eczema is similar to that for AD. Asteatotic eczema, also known as xerotic eczema or “winter itch,” is a mildly inflammatory dermatitis that develops in areas of extremely dry skin, especially during the dry winter months. Clinically, there may be considerable overlap with nummular eczema. This form of eczema accounts for a large number of physician visits because of the associated pruritus. Fine cracks and scale, with or without erythema, characteristically develop in areas of dry skin, especially on the anterior surfaces of the lower extremities in elderly patients. Asteatotic eczema responds well to topical moisturizers and the avoidance of cutaneous irritants. Overbathing and the use of harsh soaps exacerbate asteatotic eczema.
Stasis dermatitis develops on the lower extremities secondary to venous incompetence and chronic edema. Patients may give a history of deep venous thrombosis and may have evidence of vein removal or varicose veins. Early findings in stasis dermatitis consist of mild erythema and scaling associated with pruritus. The typical initial site of involvement is the medial aspect of the ankle, often over a distended vein (Fig. 71-3). Stasis dermatitis may become acutely inflamed, with crusting and exudate. In this state, it is easily confused with cellulitis. Chronic stasis dermatitis is often associated with dermal fibrosis that is recognized clinically as brawny edema of the skin. As the disorder progresses, the dermatitis becomes progressively pigmented due to chronic erythrocyte extravasation leading to cutaneous hemosiderin deposition. Stasis dermatitis may be complicated by secondary infection and contact dermatitis. Severe stasis dermatitis may precede the development of stasis ulcers. Patients with stasis dermatitis and stasis ulceration benefit greatly from leg elevation and the routine use of compression stockings with a gradient of at least 30–40 mmHg. Stockings providing less compression, such as antiembolism hose, are poor substitutes. Use of emollients and/or mid-potency topical glucocorticoids and avoidance of irritants are also helpful in treating stasis dermatitis. Protection of the legs from injury, including scratching, and control of chronic edema are essential to prevent ulcers. Diuretics may be required to adequately control chronic edema. Stasis ulcers are difficult to treat, and resolution is slow. It is extremely important to elevate the affected limb as much as possible. The ulcer should be kept clear of necrotic material by gentle debridement and covered with a semipermeable dressing and a compression dressing or compression stocking. 
Glucocorticoids should not be applied to ulcers, because they may retard healing; however, they may be applied to the surrounding skin to control itching, scratching, and additional trauma. Secondarily infected lesions should be treated with appropriate oral antibiotics, but it should be noted that all ulcers will become colonized with bacteria, and the purpose of antibiotic therapy should not be to clear all bacterial growth. Care must be taken to exclude treatable causes of leg ulcers (hypercoagulation, vasculitis) before beginning the chronic management outlined above.
FIGURE 71-3 Stasis dermatitis. An example of stasis dermatitis showing erythematous, scaly, and oozing patches over the lower leg. Several stasis ulcers are also seen in this patient.
FIGURE 71-4 Seborrheic dermatitis. Central facial erythema with overlying greasy, yellowish scale is seen in this patient. (Courtesy of Jean Bolognia, MD; with permission.)
Seborrheic dermatitis is a common, chronic disorder characterized by greasy scales overlying erythematous patches or plaques. Induration and scale are generally less prominent than in psoriasis, but clinical overlap exists between these diseases (“sebopsoriasis”). The most common location is in the scalp, where it may be recognized as severe dandruff. On the face, seborrheic dermatitis affects the eyebrows, eyelids, glabella, and nasolabial folds (Fig. 71-4). Scaling of the external auditory canal is common in seborrheic dermatitis. In addition, the postauricular areas often become macerated and tender. Seborrheic dermatitis may also develop in the central chest, axilla, groin, submammary folds, and gluteal cleft. Rarely, it may cause widespread generalized dermatitis. Pruritus is variable. Seborrheic dermatitis may be evident within the first few weeks of life, and within this context it typically occurs in the scalp (“cradle cap”), face, or groin. It is rarely seen in children beyond infancy but becomes evident again during adult life.
Although it is frequently seen in patients with Parkinson’s disease, in those who have had cerebrovascular accidents, and in those with HIV infection, the overwhelming majority of individuals with seborrheic dermatitis have no underlying disorder. Treatment with low-potency topical glucocorticoids in conjunction with a topical antifungal agent, such as ketoconazole cream or ciclopirox cream, is often effective. The scalp and beard areas may benefit from antidandruff shampoos, which should be left in place 3–5 min before rinsing. High-potency topical glucocorticoid solutions (betamethasone or clobetasol) are effective for control of severe scalp involvement. High-potency glucocorticoids should not be used on the face because this treatment is often associated with steroid-induced rosacea or atrophy. Psoriasis is one of the most common dermatologic diseases, affecting up to 2% of the world’s population. It is an immune-mediated disease clinically characterized by erythematous, sharply demarcated papules and rounded plaques covered by silvery micaceous scale. The skin lesions of psoriasis are variably pruritic. Traumatized areas often develop lesions of psoriasis (the Koebner or isomorphic phenomenon). In addition, other external factors may exacerbate psoriasis, including infections, stress, and medications (lithium, beta blockers, and antimalarial drugs). The most common variety of psoriasis is called plaque-type. Patients with plaque-type psoriasis have stable, slowly enlarging plaques, which remain basically unchanged for long periods of time. The most commonly involved areas are the elbows, knees, gluteal cleft, and scalp. Involvement tends to be symmetric. Plaque psoriasis generally develops slowly and runs an indolent course. It rarely remits spontaneously. Inverse psoriasis affects the intertriginous regions, including the axilla, groin, submammary region, and navel; it also tends to affect the scalp, palms, and soles. 
The individual lesions are sharply demarcated plaques (see Fig. 70-7), but they may be moist and without scale due to their locations. Guttate psoriasis (eruptive psoriasis) is most common in children and young adults. It develops acutely in individuals without psoriasis or in those with chronic plaque psoriasis. Patients present with many small erythematous, scaling papules, frequently after upper respiratory tract infection with β-hemolytic streptococci. The differential diagnosis should include pityriasis rosea and secondary syphilis. In pustular psoriasis, patients may have disease localized to the palms and soles, or the disease may be generalized. Regardless of the extent of disease, the skin is erythematous, with pustules and variable scale. Localized to the palms and soles, it is easily confused with eczema. When it is generalized, episodes are characterized by fever (39°–40°C [102.2°–104.0°F]) lasting several days, an accompanying generalized eruption of sterile pustules, and a background of intense erythema; patients may become erythrodermic. Episodes of fever and pustules are recurrent. Local irritants, pregnancy, medications, infections, and systemic glucocorticoid withdrawal can precipitate this form of psoriasis. Oral retinoids are the treatment of choice in nonpregnant patients. 
Distinguishing features of common papulosquamous disorders:
- Psoriasis: Sharply demarcated, erythematous plaques with mica-like scale; predominantly on elbows, knees, and scalp; atypical forms may localize to intertriginous areas; eruptive forms may be associated with infection. May be aggravated by certain drugs, infection; severe forms seen in association with HIV. Histology: acanthosis, vascular proliferation.
- Lichen planus: Purple polygonal papules marked by severe pruritus; lacy white markings, especially associated with mucous membrane lesions. Certain drugs may induce: thiazides, antimalarial drugs.
- Pityriasis rosea: Rash often preceded by herald patch; oval to round plaques with trailing scale; most often affects trunk; eruption lines up in skinfolds, giving a “fir tree–like” appearance; generally spares palms and soles. Variable pruritus; self-limited, resolving in 2–8 weeks; may be imitated by secondary syphilis.
- Dermatophytosis: Polymorphous appearance depending on dermatophyte, body site, and host response; sharply defined to ill-demarcated scaly plaques with or without inflammation; may be associated with hair loss. KOH preparation may show branching hyphae; culture helpful.
Fingernail involvement, appearing as punctate pitting, onycholysis, nail thickening, or subungual hyperkeratosis, may be a clue to the diagnosis of psoriasis when the clinical presentation is not classic. According to the National Psoriasis Foundation, up to 30% of patients with psoriasis have psoriatic arthritis (PsA). There are five subtypes of PsA: symmetric, asymmetric, distal interphalangeal predominant (DIP), spondylitis, and arthritis mutilans. Symmetric arthritis resembles rheumatoid arthritis, but is usually milder. Asymmetric arthritis can involve any joint and may present as “sausage digits.” DIP is the classic form, but occurs in only about 5% of patients with PsA. It may involve fingers and toes.
Spondylitis also occurs in about 5% of patients with PsA. Arthritis mutilans is severe and deforming. It affects primarily the small joints of the hands and feet. It accounts for fewer than 5% of PsA cases. An increased risk of metabolic syndrome, including increased morbidity and mortality from cardiovascular events, has been demonstrated in psoriasis patients. Appropriate screening tests should be performed. The etiology of psoriasis is still poorly understood, but there is clearly a genetic component to the disease. In various studies, 30–50% of patients with psoriasis report a positive family history. Psoriatic lesions contain infiltrates of activated T cells that are thought to elaborate cytokines responsible for keratinocyte hyperproliferation, which results in the characteristic clinical findings. Agents inhibiting T cell activation, clonal expansion, or release of proinflammatory cytokines are often effective for the treatment of severe psoriasis (see below). Treatment of psoriasis depends on the type, location, and extent of disease. All patients should be instructed to avoid excess drying or irritation of their skin and to maintain adequate cutaneous hydration. Most cases of localized, plaque-type psoriasis can be managed with mid-potency topical glucocorticoids, although their long-term use is often accompanied by loss of effectiveness (tachyphylaxis) and atrophy of the skin. A topical vitamin D analogue (calcipotriene) and a retinoid (tazarotene) are also efficacious in the treatment of limited psoriasis and have largely replaced other topical agents such as coal tar, salicylic acid, and anthralin. Ultraviolet (UV) light, natural or artificial, is an effective therapy for many patients with widespread psoriasis. Ultraviolet B (UVB), narrowband UVB, and ultraviolet A (UVA) light with either oral or topical psoralens (PUVA) is used clinically. UV light’s immunosuppressive properties are thought to be responsible for its therapeutic activity in psoriasis. 
It is also mutagenic, potentially leading to an increased incidence of nonmelanoma and melanoma skin cancer. UV-light therapy is contraindicated in patients receiving cyclosporine and should be used with great care in all immunocompromised patients due to the increased risk of skin cancer. Various systemic agents can be used for severe, widespread psoriatic disease (Table 71-3). Oral glucocorticoids should not be used for the treatment of psoriasis due to the potential for development of life-threatening pustular psoriasis when therapy is discontinued. Methotrexate is an effective agent, especially in patients with psoriatic arthritis. The synthetic retinoid acitretin is useful, especially when immunosuppression must be avoided; however, teratogenicity limits its use. The evidence implicating psoriasis as a T cell–mediated disorder has directed therapeutic efforts to immunoregulation. Cyclosporine and other immunosuppressive agents can be very effective in the treatment of psoriasis, and much attention is currently directed toward the development of biologic agents with more selective immunosuppressive properties and better safety profiles (Table 71-4). Experience with these biologic agents is limited, and information regarding combination therapy and adverse events continues to emerge. Use of tumor necrosis factor (TNF-α) inhibitors may worsen congestive heart failure (CHF), and they should be used with caution in patients at risk for or known to have CHF. Further, none of the immunosuppressive agents used in the treatment of psoriasis should be initiated if the patient has a severe infection; patients on such therapy should be routinely screened for tuberculosis. There have been reports of progressive multifocal leukoencephalopathy in association with treatment with the TNF-α inhibitors. Malignancies, including a risk or history of certain malignancies, may limit the use of these systemic agents. 
TABLE 71-4 column headings: Agent; Mechanism of Action; Indication; Administration (Route, Frequency); Warnings. Abbreviations: CHF, congestive heart failure; IL, interleukin; IM, intramuscular; Ps, psoriasis; PsA, psoriatic arthritis; SC, subcutaneous; TNF, tumor necrosis factor.
FIGURE 71-5 Lichen planus. An example of lichen planus showing multiple flat-topped, violaceous papules and plaques. Nail dystrophy, as seen in this patient’s thumbnail, may also be a feature. (Courtesy of Robert Swerlick, MD; with permission.)
Lichen planus (LP) is a papulosquamous disorder that may affect the skin, scalp, nails, and mucous membranes. The primary cutaneous lesions are pruritic, polygonal, flat-topped, violaceous papules. Close examination of the surface of these papules often reveals a network of gray lines (Wickham’s striae). The skin lesions may occur anywhere but have a predilection for the wrists, shins, lower back, and genitalia (Fig. 71-5). Involvement of the scalp (lichen planopilaris) may lead to scarring alopecia, and nail involvement may lead to permanent deformity or loss of fingernails and toenails. LP commonly involves mucous membranes, particularly the buccal mucosa, where it can present on a spectrum ranging from a mild, white, reticulate eruption of the mucosa to a severe, erosive stomatitis. Erosive stomatitis may persist for years and may be linked to an increased risk of oral squamous cell carcinoma. Cutaneous eruptions clinically resembling LP have been observed after administration of numerous drugs, including thiazide diuretics, gold, antimalarial agents, penicillamine, and phenothiazines, and in patients with skin lesions of chronic graft-versus-host disease. In addition, LP may be associated with hepatitis C infection. The course of LP is variable, but most patients have spontaneous remissions 6 months to 2 years after the onset of disease. Topical glucocorticoids are the mainstay of therapy.
Pityriasis rosea (PR) is a papulosquamous eruption of unknown etiology occurring more commonly in the spring and fall. Its first manifestation is the development of a 2- to 6-cm annular lesion (the herald patch). This is followed in a few days to a few weeks by the appearance of many smaller annular or papular lesions with a predilection to occur on the trunk (Fig. 71-6). The lesions are generally oval, with their long axis parallel to the skinfold lines. Individual lesions may range in color from red to brown and have a trailing scale. PR shares many clinical features with the eruption of secondary syphilis, but palm and sole lesions are extremely rare in PR and common in secondary syphilis. The eruption tends to be moderately pruritic and lasts 3–8 weeks. Treatment is directed at alleviating pruritus and consists of oral antihistamines; mid-potency topical glucocorticoids; and, in some cases, UVB phototherapy.
FIGURE 71-6 Pityriasis rosea. In this patient with pityriasis rosea, multiple round to oval erythematous patches with fine central scale are distributed along the skin tension lines on the trunk.
IMPETIGO, ECTHYMA, AND FURUNCULOSIS
Impetigo is a common superficial bacterial infection of skin caused most often by S. aureus (Chap. 172) and in some cases by group A β-hemolytic streptococci (Chap. 173). The primary lesion is a superficial pustule that ruptures and forms a characteristic yellow-brown, honey-colored crust (see Fig. 173-3). Lesions may occur on normal skin (primary infection) or in areas already affected by another skin disease (secondary infection). Lesions caused by staphylococci may be tense, clear bullae, and this less common form of the disease is called bullous impetigo. Blisters are caused by the production of exfoliative toxin by S. aureus phage type II. This is the same toxin responsible for staphylococcal scalded-skin syndrome, often resulting in dramatic loss of the superficial epidermis due to blistering.
The latter syndrome is much more common in children than in adults; however, it should be considered along with toxic epidermal necrolysis and severe drug eruptions in patients with widespread blistering of the skin. Ecthyma is a deep non-bullous variant of impetigo that causes punched-out ulcerative lesions. It is more often caused by a primary or secondary infection with Streptococcus pyogenes. Ecthyma is a deeper infection than typical impetigo and resolves with scars. Treatment of both ecthyma and impetigo involves gentle debridement of adherent crusts, which is facilitated by the use of soaks and topical antibiotics in conjunction with appropriate oral antibiotics. Furunculosis is also caused by S. aureus, and this disorder has gained prominence in the last decade because of community-acquired methicillin-resistant S. aureus (CA-MRSA). A furuncle, or boil, is a painful, erythematous nodule that can occur on any cutaneous surface. The lesions may be solitary but are most often multiple. Patients frequently believe they have been bitten by spiders or insects. Family members or close contacts may also be affected. Furuncles can rupture and drain spontaneously or may need incision and drainage, which may be adequate therapy for small solitary furuncles without cellulitis or systemic symptoms. Whenever possible, lesional material should be sent for culture. Current recommendations for methicillin-sensitive infections are β-lactam antibiotics. Therapy for CA-MRSA was discussed previously (see “Atopic Dermatitis”). Warm compresses and nasal mupirocin are helpful therapeutic additions. Severe infections may require IV antibiotics. See Chap. 156. Dermatophytes are fungi that infect skin, hair, and nails and include members of the genera Trichophyton, Microsporum, and Epidermophyton (Chap. 243). Tinea corporis, or infection of the relatively hairless skin of the body (glabrous skin), may have a variable appearance depending on the extent of the associated inflammatory reaction.
Typical infections consist of erythematous, scaly plaques, with an annular appearance that accounts for the common name “ringworm.” Deep inflammatory nodules or granulomas occur in some infections, most often those inappropriately treated with mid- to high-potency topical glucocorticoids. Involvement of the groin (tinea cruris) is more common in males than in females. It presents as a scaling, erythematous eruption sparing the scrotum. Infection of the foot (tinea pedis) is the most common dermatophyte infection and is often chronic; it is characterized by variable erythema, edema, scaling, pruritus, and occasionally vesiculation. The infection may be widespread or localized but generally involves the web space between the fourth and fifth toes. Infection of the nails (tinea unguium or onychomycosis) occurs in many patients with tinea pedis and is characterized by opacified, thickened nails and subungual debris. The distal-lateral variant is most common. Proximal subungual onychomycosis may be a marker for HIV infection or other immunocompromised states. Dermatophyte infection of the scalp (tinea capitis) continues to be common, particularly affecting inner-city children but also affecting adults. The predominant organism is Trichophyton tonsurans, which can produce a relatively noninflammatory infection with mild scale and hair loss that is diffuse or localized. T. tonsurans can also cause a markedly inflammatory dermatosis with edema and nodules. This latter presentation is a kerion. The diagnosis of tinea can be made from skin scrapings, nail scrapings, or hair by culture or direct microscopic examination with potassium hydroxide (KOH). Nail clippings may be sent for histologic examination with periodic acid–Schiff (PAS) stain. Both topical and systemic therapies may be used in dermatophyte infections.
Treatment depends on the site involved and the type of infection. Topical therapy is generally effective for uncomplicated tinea corporis, tinea cruris, and limited tinea pedis. Topical agents are not effective as monotherapy for tinea capitis or onychomycosis (see below). Topical imidazoles, triazoles, and allylamines may be effective therapies for dermatophyte infections, but nystatin is not active against dermatophytes. Topicals are generally applied twice daily, and treatment should continue for 1 week beyond clinical resolution of the infection. Tinea pedis often requires longer treatment courses and frequently relapses. Oral antifungal agents may be required for recalcitrant tinea pedis or tinea corporis. Oral antifungal agents are required for dermatophyte infections involving the hair and nails and for other infections unresponsive to topical therapy. A fungal etiology should be confirmed by direct microscopic examination or by culture before oral antifungal agents are prescribed. All of the oral agents may cause hepatotoxicity. They should not be used in women who are pregnant or breast-feeding. Griseofulvin is approved in the United States for dermatophyte infections involving the skin, hair, or nails. When griseofulvin is used, a daily dose of 500 mg microsized or 375 mg ultramicrosized, administered with a fatty meal, is adequate for most dermatophyte infections. Higher doses are required for some cases of tinea pedis and tinea capitis. Markedly inflammatory tinea capitis may result in scarring and hair loss, and systemic or topical glucocorticoids may be helpful in preventing these sequelae. The duration of griseofulvin therapy may be 2 weeks for uncomplicated tinea corporis, 8–12 weeks for tinea capitis, or as long as 6–18 months for nail infections. Due to high relapse rates, griseofulvin is seldom used for nail infections. Common side effects of griseofulvin include gastrointestinal distress, headache, and urticaria. 
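Because the griseofulvin course lengths above vary so widely by site, a small lookup sketch may help keep them straight. The dictionary structure and function name here are illustrative assumptions restating the text, not treatment guidance:

```python
# Course lengths for griseofulvin as quoted above (illustration only,
# not treatment guidance); ranges are in weeks.
GRISEOFULVIN_WEEKS = {
    "tinea corporis (uncomplicated)": (2, 2),
    "tinea capitis": (8, 12),
    "nail infection": (26, 78),  # roughly 6-18 months; seldom used (high relapse rate)
}

def duration_range_weeks(indication):
    """Return the quoted course length as a human-readable range."""
    lo, hi = GRISEOFULVIN_WEEKS[indication]
    return f"{lo} weeks" if lo == hi else f"{lo}-{hi} weeks"

print(duration_range_weeks("tinea corporis (uncomplicated)"))  # 2 weeks
print(duration_range_weeks("tinea capitis"))                   # 8-12 weeks
```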
Oral itraconazole is approved for onychomycosis. Itraconazole is given with food as either continuous daily therapy (200 mg/d) or pulses (200 mg bid for 1 week per month). Fingernails require 2 months of continuous therapy or two pulses. Toenails require 3 months of continuous therapy or three pulses. Itraconazole has the potential for serious interactions with other drugs requiring the P450 enzyme system for metabolism. Itraconazole should not be administered to patients with evidence of ventricular dysfunction or patients with known CHF. Terbinafine (250 mg/d) is also effective for onychomycosis, and the granule version is approved for treatment of tinea capitis. Therapy with terbinafine is continued for 6 weeks for fingernail and scalp infections and 12 weeks for toenail infections. Terbinafine has fewer interactions with other drugs than itraconazole, but caution should be used with patients who are on multiple medications. The risk/benefit ratio should be considered when an asymptomatic toenail infection is treated with systemic agents. Tinea versicolor is caused by a nondermatophytic, dimorphic fungus, Malassezia furfur, a normal inhabitant of the skin. The expression of infection is promoted by heat and humidity. The typical lesions consist of oval scaly macules, papules, and patches concentrated on the chest, shoulders, and back but only rarely on the face or distal extremities. On dark skin the lesions often appear as hypopigmented areas, while on light skin they are slightly erythematous or hyperpigmented. A KOH preparation from scaling lesions will demonstrate a confluence of short hyphae and round spores (“spaghetti and meatballs”). Lotions or shampoos containing sulfur, salicylic acid, or selenium sulfide will clear the infection if used daily for 1–2 weeks and then weekly thereafter. These preparations are irritating if left on the skin for >10 min; thus, they should be washed off completely. 
Treatment with some oral antifungal agents is also effective, but they do not provide lasting results and are not FDA approved for this indication. A very short course of ketoconazole has been used, as have itraconazole and fluconazole. The patient must sweat after taking the medication if it is to be effective. Griseofulvin is not effective and terbinafine is not reliably effective for tinea versicolor. Candidiasis is a fungal infection caused by a related group of yeasts whose manifestations may be localized to the skin and mucous membranes or, rarely, may be systemic and life-threatening (Chap. 240). The causative organism is usually Candida albicans. These organisms are normal saprophytic inhabitants of the gastrointestinal tract but may overgrow due to broad-spectrum antibiotic therapy, diabetes mellitus, or immunosuppression and cause disease. Candidiasis is a very common infection in HIV-infected individuals (Chap. 226). The oral cavity is commonly involved. Lesions may occur on the tongue or buccal mucosa (thrush) and appear as white plaques. Fissured, macerated lesions at the corners of the mouth (perléche) are often seen in individuals with poorly fitting dentures and may also be associated with candidal infection. In addition, candidal infections have an affinity for sites that are chronically wet and macerated, including the skin around nails (onycholysis and paronychia), and in intertriginous areas. Intertriginous lesions are characteristically edematous, erythematous, and scaly, with scattered “satellite pustules.” In males, there is often involvement of the penis and scrotum as well as the inner aspect of the thighs. In contrast to dermatophyte infections, candidal infections are frequently painful and accompanied by a marked inflammatory response. Diagnosis of candidal infection is based upon the clinical pattern and demonstration of yeast on KOH preparation or culture.
Treatment involves removal of any predisposing factors such as antibiotic therapy or chronic wetness and the use of appropriate topical or systemic antifungal agents. Effective topicals include nystatin or azoles (miconazole, clotrimazole, econazole, or ketoconazole). The associated inflammatory response accompanying candidal infection on glabrous skin can be treated with a mild glucocorticoid lotion or cream (2.5% hydrocortisone). Systemic therapy is usually reserved for immunosuppressed patients or individuals with chronic or recurrent disease who fail to respond to appropriate topical therapy. Oral agents approved for the treatment of candidiasis include itraconazole and fluconazole. Oral nystatin is effective only for candidiasis of the gastrointestinal tract. Griseofulvin and terbinafine are not effective. Warts are cutaneous neoplasms caused by papillomaviruses. More than 100 different human papillomaviruses (HPVs) have been described. A typical wart, verruca vulgaris, is sessile, dome-shaped, and usually about a centimeter in diameter. Its surface is hyperkeratotic, consisting of many small filamentous projections. The HPV types that cause typical verruca vulgaris also cause typical plantar warts, flat warts (verruca plana), and filiform warts. Plantar warts are endophytic and are covered by thick keratin. Paring of the wart will generally reveal a central core of keratinized debris and punctate bleeding points. Filiform warts are most commonly seen on the face, neck, and skinfolds and present as papillomatous lesions on a narrow base. Flat warts are only slightly elevated and have a velvety, nonverrucous surface. They have a propensity for the face, arms, and legs and are often spread by shaving. Genital warts begin as small papillomas that may grow to form large, fungating lesions. In women, they may involve the labia, perineum, or perianal skin. In addition, the mucosa of the vagina, urethra, and anus can be involved as well as the cervical epithelium. 
In men, the lesions often occur initially in the coronal sulcus but may be seen on the shaft of the penis, the scrotum, or the perianal skin or in the urethra. Appreciable evidence has accumulated indicating that HPV plays a role in the development of neoplasia of the uterine cervix and anogenital skin (Chap. 117). HPV types 16 and 18 have been most intensely studied and are the major risk factors for intraepithelial neoplasia and squamous cell carcinoma of the cervix, anus, vulva, and penis. The risk is higher among patients immunosuppressed after solid organ transplantation and among those infected with HIV. Recent evidence also implicates other HPV types. Histologic examination of biopsied samples from affected sites may reveal changes associated with typical warts and/or features typical of intraepidermal carcinoma (Bowen’s disease). Squamous cell carcinomas associated with HPV infections have also been observed in extragenital skin (Chap. 105), most commonly in patients immunosuppressed after organ transplantation. Patients on long-term immunosuppression should be monitored for the development of squamous cell carcinoma and other cutaneous malignancies. Treatment of warts, other than anogenital warts, should be tempered by the observation that a majority of warts in normal individuals resolve spontaneously within 1–2 years. There are many modalities available to treat warts, but no single therapy is universally effective. Factors that influence the choice of therapy include the location of the wart, the extent of disease, the age and immunologic status of the patient, and the patient’s desire for therapy. Perhaps the most useful and convenient method for treating warts in almost any location is cryotherapy with liquid nitrogen. Equally effective for nongenital warts, but requiring much more patient compliance, is the use of keratolytic agents such as salicylic acid plasters or solutions. 
For genital warts, in-office application of a podophyllin solution is moderately effective but may be associated with marked local reactions. Prescription preparations of dilute, purified podophyllin are available for home use. Topical imiquimod, a potent inducer of local cytokine release, has been approved for treatment of genital warts. A new topical compound composed of green tea extracts (sinecatechins) is also available. Conventional and laser surgical procedures may be required for recalcitrant warts. Recurrence of warts appears to be common to all these modalities. A highly effective vaccine for selected types of HPV has been approved by the FDA, and its use appears to reduce the incidence of anogenital and cervical carcinoma.

HERPES SIMPLEX (See Chap. 216)

HERPES ZOSTER (See Chap. 217)

ACNE VULGARIS

Acne vulgaris is a self-limited disorder primarily of teenagers and young adults, although perhaps 10–20% of adults may continue to experience some form of the disorder. The permissive factor for the expression of the disease in adolescence is the increase in sebum production by sebaceous glands after puberty. Small cysts, called comedones, form in hair follicles due to blockage of the follicular orifice by retention of keratinous material and sebum. The activity of bacteria (Propionibacterium acnes) within the comedones releases free fatty acids from sebum, causes inflammation within the cyst, and results in rupture of the cyst wall. An inflammatory foreign-body reaction develops as a result of extrusion of oily and keratinous debris from the cyst.

The clinical hallmark of acne vulgaris is the comedone, which may be closed (whitehead) or open (blackhead). Closed comedones appear as 1- to 2-mm pebbly white papules, which are accentuated when the skin is stretched. They are the precursors of inflammatory lesions of acne vulgaris. The contents of closed comedones are not easily expressed.
Open comedones, which rarely result in inflammatory acne lesions, have a large dilated follicular orifice and are filled with easily expressible oxidized, darkened, oily debris. Comedones are usually accompanied by inflammatory lesions: papules, pustules, or nodules. The earliest lesions seen in adolescence are generally mildly inflamed or noninflammatory comedones on the forehead. Subsequently, more typical inflammatory lesions develop on the cheeks, nose, and chin (Fig. 71-7). The most common location for acne is the face, but involvement of the chest and back is common. Most disease remains mild and does not lead to scarring. A small number of patients develop large inflammatory cysts and nodules, which may drain and result in significant scarring. Regardless of the severity, acne may affect a patient’s quality of life. With adequate treatment, this effect may be transient. In the case of severe, scarring acne, the effects can be permanent and profound. Early therapeutic intervention in severe acne is essential.

FIGURE 71-7 Acne vulgaris. An example of acne vulgaris with inflammatory papules, pustules, and comedones. (Courtesy of Kalman Watsky, MD; with permission.)

Exogenous and endogenous factors can alter the expression of acne vulgaris. Friction and trauma (from headbands or chin straps of athletic helmets), application of comedogenic topical agents (cosmetics or hair preparations), or chronic topical exposure to certain industrial compounds may elicit or aggravate acne. Glucocorticoids, topical or systemic, may also elicit acne. Other systemic medications such as oral contraceptive pills, lithium, isoniazid, androgenic steroids, halogens, phenytoin, and phenobarbital may produce acneiform eruptions or aggravate preexisting acne. Genetic factors and polycystic ovary disease may also play a role.
Treatment of acne vulgaris is directed toward elimination of comedones by normalizing follicular keratinization, decreasing sebaceous gland activity, decreasing the population of P. acnes, and decreasing inflammation. Minimal to moderate pauci-inflammatory disease may respond adequately to local therapy alone. Although areas affected with acne should be kept clean, overly vigorous scrubbing may aggravate acne due to mechanical rupture of comedones. Topical agents such as retinoic acid, benzoyl peroxide, or salicylic acid may alter the pattern of epidermal desquamation, preventing the formation of comedones and aiding in the resolution of preexisting cysts. Topical antibacterial agents (such as azelaic acid, erythromycin, clindamycin, or dapsone) are also useful adjuncts to therapy. Patients with moderate to severe acne with a prominent inflammatory component will benefit from the addition of systemic therapy, such as tetracycline in doses of 250–500 mg bid or doxycycline in doses of 100 mg bid. Minocycline is also useful. Such antibiotics appear to have anti-inflammatory effects independent of their antibacterial effects. Female patients who do not respond to oral antibiotics may benefit from hormonal therapy. Several oral contraceptives are now approved by the FDA for use in the treatment of acne vulgaris. Patients with severe nodulocystic acne unresponsive to the therapies discussed above may benefit from treatment with the synthetic retinoid isotretinoin. Its dose is based on the patient’s weight, and it is given once daily for 5 months. Results are excellent in appropriately selected patients. Its use is highly regulated due to its potential for severe adverse events, primarily teratogenicity and depression. In addition, patients receiving this medication develop extremely dry skin and cheilitis and must be followed for development of hypertriglyceridemia. 
At present, prescribers must enroll in a program designed to prevent pregnancy and adverse events while patients are taking isotretinoin. These measures are imposed to ensure that all prescribers are familiar with the risks of isotretinoin; that all female patients have two negative pregnancy tests prior to initiation of therapy and a negative pregnancy test prior to each refill; and that all patients have been warned about the risks associated with isotretinoin.

ACNE ROSACEA

Acne rosacea, commonly referred to simply as rosacea, is an inflammatory disorder predominantly affecting the central face. Persons most often affected are Caucasians of northern European background, but rosacea also occurs in patients with dark skin. Rosacea is seen almost exclusively in adults, only rarely affecting patients <30 years old. Rosacea is more common in women, but those most severely affected are men. It is characterized by the presence of erythema, telangiectases, and superficial pustules (Fig. 71-8) but is not associated with the presence of comedones. Rosacea rarely involves the chest or back.

FIGURE 71-8 Acne rosacea. Prominent facial erythema, telangiectasia, scattered papules, and small pustules are seen in this patient with acne rosacea. (Courtesy of Robert Swerlick, MD; with permission.)

There is a relationship between the tendency for facial flushing and the subsequent development of acne rosacea. Often, individuals with rosacea initially demonstrate a pronounced flushing reaction. This may be in response to heat, emotional stimuli, alcohol, hot drinks, or spicy foods. As the disease progresses, the flush persists longer and longer and may eventually become permanent. Papules, pustules, and telangiectases can become superimposed on the persistent flush. Rosacea of very long standing may lead to connective tissue overgrowth, particularly of the nose (rhinophyma).
Rosacea may also be complicated by various inflammatory disorders of the eye, including keratitis, blepharitis, iritis, and recurrent chalazion. These ocular problems are potentially sight-threatening and warrant ophthalmologic evaluation.

Acne rosacea can be treated topically or systemically. Mild disease often responds to topical metronidazole, sodium sulfacetamide, or azelaic acid. More severe disease requires oral tetracyclines: tetracycline, 250–500 mg bid; doxycycline, 100 mg bid; or minocycline, 50–100 mg bid. Residual telangiectasia may respond to laser therapy. Topical glucocorticoids, especially potent agents, should be avoided because chronic use of these preparations may elicit rosacea. Application of topical agents to the skin is not effective treatment for ocular disease.

Although smallpox vaccinations were discontinued several decades ago for the general population, they are still required for certain military personnel and first responders. In the absence of a bioterrorism attack and a real or potential exposure to smallpox, such vaccination is contraindicated in persons with a history of skin diseases such as AD, eczema, and psoriasis, who have a higher incidence of adverse events associated with smallpox vaccination. In the case of such exposure, the risk of smallpox infection outweighs the risk of adverse events from the vaccine (Chap. 261e).

Chapter 72
Skin Manifestations of Internal Disease
Jean L. Bolognia, Irwin M. Braverman

It is a generally accepted concept in medicine that the skin can develop signs of internal disease. Therefore, in textbooks of medicine, one finds a chapter describing in detail the major systemic disorders that can be identified by cutaneous signs. The underlying assumption of such a chapter is that the clinician has been able to identify the specific disorder in the patient and needs only to read about it in the textbook.
In reality, concise differential diagnoses and the identification of these disorders are difficult for the nondermatologist because he or she is not well-versed in the recognition of cutaneous lesions or their spectrum of presentations. Therefore, this chapter covers this particular topic of cutaneous medicine not by simply focusing on individual diseases, but by describing the various presenting clinical signs and symptoms that point to specific disorders. Concise differential diagnoses will be generated in which the significant diseases will be distinguished from the more common cutaneous disorders that have minimal or no significance with regard to associated internal disease. The latter disorders are reviewed in table form and always need to be excluded when considering the former. For a detailed description of individual diseases, the reader should consult a dermatologic text.

PAPULOSQUAMOUS SKIN LESIONS

(Table 72-1) When an eruption is characterized by elevated lesions, either papules (<1 cm) or plaques (>1 cm), in association with scale, it is referred to as papulosquamous. The most common papulosquamous diseases—tinea, psoriasis, pityriasis rosea, and lichen planus—are primary cutaneous disorders (Chap. 71). When psoriatic lesions are accompanied by arthritis, the possibility of psoriatic arthritis or reactive arthritis (formerly known as Reiter’s syndrome) should be considered. A history of oral ulcers, conjunctivitis, uveitis, and/or urethritis points to the latter diagnosis. Lithium, beta blockers, HIV or streptococcal infections, and a rapid taper of systemic glucocorticoids are known to exacerbate psoriasis. Comorbidities in patients with psoriasis include cardiovascular disease and metabolic syndrome. Whenever the diagnosis of pityriasis rosea or lichen planus is made, it is important to review the patient’s medications because the eruption may resolve by simply discontinuing the offending agent.
Pityriasis rosea–like drug eruptions are seen most commonly with beta blockers, angiotensin-converting enzyme (ACE) inhibitors, and metronidazole, whereas the drugs that can produce a lichenoid eruption include thiazides, antimalarials, quinidine, beta blockers, and ACE inhibitors. In some populations, there is a higher prevalence of hepatitis C viral infection in patients with lichen planus. Lichen planus–like lesions are also observed in chronic graft-versus-host disease.

In its early stages, the mycosis fungoides (MF) form of cutaneous T cell lymphoma (CTCL) may be confused with eczema or psoriasis, but it often fails to respond to the appropriate therapy for those inflammatory diseases. MF can develop within lesions of large-plaque parapsoriasis and is suggested by an increase in the thickness of the lesions. The diagnosis of MF is established by skin biopsy in which collections of atypical T lymphocytes are found in the epidermis and dermis. As the disease progresses, cutaneous tumors and lymph node involvement may appear.

In secondary syphilis, there are scattered red-brown papules with thin scale. The eruption often involves the palms and soles and can resemble pityriasis rosea. Associated findings are helpful in making the diagnosis and include annular plaques on the face, nonscarring alopecia, condyloma lata (broad-based and moist), and mucous patches as well as lymphadenopathy, malaise, fever, headache, and myalgias. The interval between the primary chancre and the secondary stage is usually 4–8 weeks, and spontaneous resolution without appropriate therapy occurs.

TABLE 72-1 SELECTED CAUSES OF PAPULOSQUAMOUS SKIN LESIONS
1. Primary cutaneous disorders
   a. Tinea(a)
   b. Psoriasis(a)
   c. Pityriasis rosea(a)
   d. Lichen planus(a)
   e. Parapsoriasis, small plaque and large plaque
   f. Bowen's disease(b)
2. Drugs
3. Systemic diseases
   a. Lupus erythematosus, primarily subacute or chronic (discoid) lesions(c)
   b. Cutaneous T cell lymphoma, in particular, mycosis fungoides(d)
   c. Secondary syphilis(e)
   d. Reactive arthritis (formerly known as Reiter's syndrome)
   e.
(a) Discussed in detail in Chap. 71; cardiovascular disease and the metabolic syndrome are comorbidities in psoriasis; primarily in Europe, hepatitis C virus is associated with oral lichen planus.
(b) Associated with chronic sun exposure more often than exposure to arsenic; usually one or a few lesions.
(c) See also Red Lesions in “Papulonodular Skin Lesions.”
(d) Also cutaneous lesions of HTLV-1-associated adult T cell leukemia/lymphoma.
(e) See also Red-Brown Lesions in “Papulonodular Skin Lesions.”

ERYTHRODERMA

(Table 72-2) Erythroderma is the term used when the majority of the skin surface is erythematous (red in color). There may be associated scale, erosions, or pustules as well as shedding of the hair and nails. Potential systemic manifestations include fever, chills, hypothermia, reactive lymphadenopathy, peripheral edema, hypoalbuminemia, and high-output cardiac failure. The major etiologies of erythroderma are (1) cutaneous diseases such as psoriasis and dermatitis (Table 72-3); (2) drugs; (3) systemic diseases, most commonly CTCL; and (4) idiopathic. In the first three groups, the location and description of the initial lesions, prior to the development of the erythroderma, aid in the diagnosis. For example, a history of red scaly plaques on the elbows and knees would point to psoriasis. It is also important to examine the skin carefully for a migration of the erythema and associated secondary changes such as pustules or erosions. Migratory waves of erythema studded with superficial pustules are seen in pustular psoriasis.

Drug-induced erythroderma (exfoliative dermatitis) may begin as an exanthematous (morbilliform) eruption (Chap. 74) or may arise as diffuse erythema. A number of drugs can produce an erythroderma, including penicillins, sulfonamides, carbamazepine, phenytoin, and allopurinol.
Fever and peripheral eosinophilia often accompany the eruption, and there may also be facial swelling, hepatitis, myocarditis, thyroiditis, and allergic interstitial nephritis; this constellation is frequently referred to as drug reaction with eosinophilia and systemic symptoms (DRESS) or drug-induced hypersensitivity syndrome (DIHS). In addition, these reactions, especially to aromatic anticonvulsants, can lead to a pseudolymphoma syndrome (with adenopathy and circulating atypical lymphocytes), while reactions to allopurinol may be accompanied by gastrointestinal bleeding.

TABLE 72-2 CAUSES OF ERYTHRODERMA
1. Primary cutaneous disorders
   a. Psoriasis(a)
   b. Dermatitis(a)
   c.
2. Drugs
3. Systemic diseases
   a. Cutaneous T cell lymphoma (Sézary syndrome, erythrodermic mycosis fungoides)
   b.
4. Idiopathic (usually older men)
(a) Discussed in detail in Chap. 71.

Abbreviations: Ab, antibody; HSV, herpes simplex virus; IL, interleukin; IM, intramuscular; IV, intravenous; MTX, methotrexate; PUVA, psoralens + ultraviolet A irradiation; SAPHO, synovitis, acne, pustulosis, hyperostosis, and osteitis (also referred to as chronic recurrent multifocal osteomyelitis); TNF, tumor necrosis factor; UV-A, ultraviolet A irradiation; UV-B, ultraviolet B irradiation.

The most common malignancy that is associated with erythroderma is CTCL; in some series, up to 25% of the cases of erythroderma were due to CTCL. The patient may progress from isolated plaques and tumors, but more commonly, the erythroderma is present throughout the course of the disease (Sézary syndrome). In the Sézary syndrome, there are circulating clonal atypical T lymphocytes, pruritus, and lymphadenopathy. In cases of erythroderma where there is no apparent cause (idiopathic), longitudinal evaluation is mandatory to monitor for the possible development of CTCL. There have been isolated case reports of erythroderma secondary to some solid tumors—lung, liver, prostate, thyroid, and colon—but it is primarily during a late stage of the disease.

ALOPECIA

(Table 72-4) The two major forms of alopecia are scarring and nonscarring. Scarring alopecia is associated with fibrosis, inflammation, and loss of hair follicles. A smooth scalp with a decreased number of follicular openings is usually observed clinically, but in some patients, the changes are seen only in biopsy specimens from affected areas. In nonscarring alopecia, the hair shafts are absent or miniaturized, but the hair follicles are preserved, explaining the reversible nature of nonscarring alopecia.

The most common causes of nonscarring alopecia include androgenetic alopecia, telogen effluvium, alopecia areata, tinea capitis, and the early phase of traumatic alopecia (Table 72-5). In women with androgenetic alopecia, an elevation in circulating levels of androgens may be seen as a result of ovarian or adrenal gland dysfunction or neoplasm. When there are signs of virilization, such as a deepened voice and enlarged clitoris, the possibility of an ovarian or adrenal gland tumor should be considered.

Exposure to various drugs can also cause diffuse hair loss, usually by inducing a telogen effluvium. An exception is the anagen effluvium observed with antimitotic agents such as daunorubicin. Alopecia is a side effect of the following drugs: warfarin, heparin, propylthiouracil, carbimazole, isotretinoin, acitretin, lithium, beta blockers, interferons, colchicine, and amphetamines. Fortunately, spontaneous regrowth usually follows discontinuation of the offending agent.

Less commonly, nonscarring alopecia is associated with lupus erythematosus and secondary syphilis. In systemic lupus there are two forms of alopecia—one is scarring secondary to discoid lesions (see below), and the other is nonscarring. The latter form coincides with flares of systemic disease and may involve the entire scalp or just the frontal scalp, with the appearance of multiple short hairs (“lupus hairs”) as a sign of initial regrowth. Scattered, poorly circumscribed patches of alopecia with a “moth-eaten” appearance are a manifestation of the secondary stage of syphilis. Diffuse thinning of the hair is also associated with hypothyroidism and hyperthyroidism (Table 72-4).

TABLE 72-4 CAUSES OF ALOPECIA
I. Nonscarring alopecia
   A. Primary cutaneous disorders
      1. Androgenetic alopecia
      2. Telogen effluvium
      3. Alopecia areata
      4. Tinea capitis
      5. Traumatic alopecia(a)
   B. Drugs
   C. Systemic diseases
      1. Systemic lupus erythematosus
      2. Secondary syphilis
      3. Hypothyroidism
      4. Hyperthyroidism
      5.
      6. Deficiencies of protein, biotin, zinc, and perhaps iron
II. Scarring alopecia
   A. Primary cutaneous disorders
      1. Lichen planus
      2. Folliculitis decalvans
      3. Chronic cutaneous (discoid) lupus
      4. Linear scleroderma (morphea)
      5.
   B. Systemic diseases
      1. Discoid lesions in the setting of systemic lupus erythematosus(b)
      2. Sarcoidosis
      3. Cutaneous metastases(c)
(a) Most patients with trichotillomania, pressure-induced alopecia, or early stages of traction alopecia.
(b) While the majority of patients with discoid lesions have only cutaneous disease, these lesions do represent one of the 11 American College of Rheumatology criteria (1982) for systemic lupus erythematosus.
(c) Can involve underlying muscles and osseous structures.

Scarring alopecia is more frequently the result of a primary cutaneous disorder such as lichen planus, folliculitis decalvans, chronic cutaneous (discoid) lupus, or linear scleroderma (morphea) than it is a sign of systemic disease. Although the scarring lesions of discoid lupus can be seen in patients with systemic lupus, in the majority of patients, the disease process is limited to the skin. Less common causes of scarring alopecia include sarcoidosis (see “Papulonodular Skin Lesions,” below) and cutaneous metastases. In the early phases of discoid lupus, lichen planus, and folliculitis decalvans, there are circumscribed areas of alopecia.
Fibrosis and subsequent loss of hair follicles are observed primarily in the center of these alopecic patches, whereas the inflammatory process is most prominent at the periphery. The areas of active inflammation in discoid lupus are erythematous with scale, whereas the areas of previous inflammation are often hypopigmented with a rim of hyperpigmentation. In lichen planus, perifollicular macules at the periphery are usually violet-colored. A complete examination of the skin and oral mucosa combined with a biopsy and direct immunofluorescence microscopy of inflamed skin will aid in distinguishing these two entities. The peripheral active lesions in folliculitis decalvans are follicular pustules; these patients can develop a reactive arthritis.

FIGURATE SKIN LESIONS

(Table 72-6) In figurate eruptions, the lesions form rings and arcs that are usually erythematous but can be skin-colored to brown. Most commonly, they are due to primary cutaneous diseases such as tinea, urticaria, granuloma annulare, and erythema annulare centrifugum (Chaps. 71 and 73). An underlying systemic illness is found in a second, less common group of migratory annular erythemas. It includes erythema migrans, erythema gyratum repens, erythema marginatum, and necrolytic migratory erythema. In erythema gyratum repens, one sees numerous mobile concentric arcs and wavefronts that resemble the grain in wood.
A search for an underlying malignancy is mandatory in a patient with this eruption. Erythema migrans is the cutaneous manifestation of Lyme disease, which is caused by the spirochete Borrelia burgdorferi. In the initial stage (3–30 days after tick bite), a single annular lesion is usually seen, which can expand to ≥10 cm in diameter. Within several days, up to half of the patients develop multiple smaller erythematous lesions at sites distant from the bite. Associated symptoms include fever, headache, photophobia, myalgias, arthralgias, and malar rash. Erythema marginatum is seen in patients with rheumatic fever, primarily on the trunk. Lesions are pink-red in color, flat to minimally elevated, and transient.

There are additional cutaneous diseases that present as annular eruptions but lack an obvious migratory component. Examples include CTCL, subacute cutaneous lupus, secondary syphilis, and sarcoidosis (see “Papulonodular Skin Lesions,” below).

TABLE 72-5 NONSCARRING ALOPECIA: CLINICAL CHARACTERISTICS, PATHOGENESIS, AND TREATMENT

Telogen effluvium
- Clinical: Diffuse shedding of normal hairs; follows major stress (high fever, severe infection) or change in hormone levels (postpartum).
- Pathogenesis: Stress causes more of the asynchronous growth cycles of individual hairs to become synchronous; therefore, larger numbers of growing (anagen) hairs simultaneously enter the dying (telogen) phase.
- Treatment: Observation; discontinue any drugs that have alopecia as a side effect; must exclude underlying metabolic causes, e.g., hypothyroidism, hyperthyroidism.

Androgenetic alopecia
- Clinical: Miniaturization of hairs along the midline of the scalp; recession of the anterior scalp line in men and some women.
- Pathogenesis: Increased sensitivity of affected hairs to the effects of androgens; increased levels of circulating androgens (ovarian or adrenal source in women).
- Treatment: If no evidence of hyperandrogenemia, then topical minoxidil; finasteride(a); spironolactone (women); hair transplant.

Alopecia areata
- Clinical: Well-circumscribed, circular areas of hair loss, 2–5 cm in diameter; in extensive cases, coalescence of lesions and/or involvement of other hair-bearing surfaces of the body; pitting or sandpapered appearance of the nails.
- Pathogenesis: The germinative zones of the hair follicles are surrounded by T lymphocytes; occasional associated diseases: hyperthyroidism, hypothyroidism, vitiligo, Down syndrome.

Tinea capitis
- Clinical: Varies from scaling with minimal hair loss to discrete patches with “black dots” (broken infected hairs) to boggy plaque with pustules (kerion)(b).
- Pathogenesis: Invasion of hairs by dermatophytes, most commonly Trichophyton tonsurans.
- Treatment: Oral griseofulvin or terbinafine plus 2.5% selenium sulfide or ketoconazole shampoo; examine family members.

Traumatic alopecia(c)
- Clinical: Broken hairs, often of varying lengths; irregular outline.
- Pathogenesis: Traction with curlers, rubber bands, braiding; exposure to heat or chemicals (e.g., hair straighteners); mechanical pulling (trichotillomania).
- Treatment: Discontinuation of offending hair style or chemical treatments; diagnosis of trichotillomania may require observation of shaved hairs (for growth) or biopsy, possibly followed by psychotherapy.

(a) To date, Food and Drug Administration–approved for men.
(b) Scarring alopecia can occur at sites of kerions.
(c) May also be scarring, especially late-stage traction alopecia.

TABLE 72-6 CAUSES OF FIGURATE SKIN LESIONS
I. Primary cutaneous disorders
   A. Tinea
   B. Urticaria (primary in ≥90% of patients)
   C. Granuloma annulare
   D. Erythema annulare centrifugum
   E. Psoriasis
II. Systemic diseases
   A. Migratory
      1. Erythema migrans (CDC case definition is ≥5 cm in diameter)
      2. Urticaria (≤10% of patients)
      3. Erythema gyratum repens
      4. Erythema marginatum
      5. Necrolytic migratory erythema(a)
      6.
   B. Nonmigratory
      1. Subacute cutaneous lupus
      2. Secondary syphilis
      3. Sarcoidosis
      4. Cutaneous T cell lymphoma (especially mycosis fungoides)
(a) Migratory erythema with erosions; favors lower extremities and girdle area.
Abbreviation: CDC, Centers for Disease Control and Prevention.

ACNE

(Table 72-7) In addition to acne vulgaris and acne rosacea, the two major forms of acne (Chap. 71), there are drugs and systemic diseases that can lead to acneiform eruptions. Patients with the carcinoid syndrome have episodes of flushing of the head, neck, and sometimes the trunk.
Resultant skin changes of the face, in particular telangiectasias, may mimic the clinical appearance of acne rosacea.

TABLE 72-7 CAUSES OF ACNEIFORM ERUPTIONS
I. Primary cutaneous disorders
   A. Acne vulgaris
   B. Acne rosacea
II. Drugs, e.g., anabolic steroids, glucocorticoids, lithium, EGFR(a) inhibitors, iodides
III. Systemic diseases
   A. Increased androgen production
      1. Adrenal origin, e.g., Cushing’s disease, 21-hydroxylase deficiency
      2. Ovarian origin, e.g., polycystic ovary syndrome, ovarian hyperthecosis
   B. Cryptococcosis, disseminated
   C. Dimorphic fungal infections
   D. Behçet’s disease
(a) EGFR, epidermal growth factor receptor.

Acneiform eruptions (see “Acne,” above) and folliculitis represent the most common pustular dermatoses. An important consideration in the evaluation of follicular pustules is a determination of the associated pathogen, e.g., normal flora, Staphylococcus aureus, Pseudomonas aeruginosa (“hot tub” folliculitis), Malassezia, dermatophytes (Majocchi’s granuloma), and Demodex spp. Noninfectious forms of folliculitis include HIV- or immunosuppression-associated eosinophilic folliculitis and folliculitis secondary to drugs such as glucocorticoids, lithium, and epidermal growth factor receptor (EGFR) inhibitors. Administration of high-dose systemic glucocorticoids can result in a widespread eruption of follicular pustules on the trunk, characterized by lesions in the same stage of development. With regard to underlying systemic diseases, nonfollicular-based pustules are a characteristic component of pustular psoriasis (sterile) and can be seen in septic emboli of bacterial or fungal origin (see “Purpura,” below). In patients with acute generalized exanthematous pustulosis (AGEP), due primarily to medications (e.g., cephalosporins), there are large areas of erythema studded with multiple sterile pustules in addition to neutrophilia.
TELANGIECTASIAS

(Table 72-8) To distinguish the various types of telangiectasias, it is important to examine the shape and configuration of the dilated blood vessels. Linear telangiectasias are seen on the face of patients with actinically damaged skin and acne rosacea, and they are found on the legs of patients with venous hypertension and generalized essential telangiectasia. Patients with an unusual form of mastocytosis (telangiectasia macularis eruptiva perstans) and the carcinoid syndrome (see “Acne,” above) also have linear telangiectasias. Lastly, linear telangiectasias are found in areas of cutaneous inflammation. For example, lesions of discoid lupus frequently have telangiectasias within them.

TABLE 72-8 CAUSES OF TELANGIECTASIAS
I. Primary cutaneous disorders
   A. Linear/branching
      1. Actinically damaged skin
      2. Acne rosacea
      3. Venous hypertension
      4. Generalized essential telangiectasia
      5.
      6.
   B. Poikiloderma
      1. Chronic radiodermatitis(a)
      2. Parapsoriasis, large plaque
   C. Spider angioma
      1.
      2.
II. Systemic diseases
   A. Linear/branching
      1. Carcinoid syndrome
      2. Mastocytosis (telangiectasia macularis eruptiva perstans)
      3.
   B. Poikiloderma
      1. Dermatomyositis
      2.
      3.
   C. Mat
      1. Systemic sclerosis (scleroderma)
   D. Periungual/cuticular
      1. Lupus erythematosus
      2. Systemic sclerosis
      3. Dermatomyositis
      4.
   E. Papular
      1. Hereditary hemorrhagic telangiectasia
   F. Spider angioma
      1. Cirrhosis
(a) Becoming less common.

Poikiloderma is a term used to describe a patch of skin with: (1) reticulated hypo- and hyperpigmentation, (2) wrinkling secondary to epidermal atrophy, and (3) telangiectasias. Poikiloderma does not imply a single disease entity—although it is becoming less common, it is seen in skin damaged by ionizing radiation as well as in patients with autoimmune connective tissue diseases, primarily dermatomyositis (DM), and rare genodermatoses (e.g., Kindler syndrome).

In systemic sclerosis (scleroderma), the dilated blood vessels have a unique configuration and are known as mat telangiectasias. The lesions are broad macules that usually measure 2–7 mm in diameter but occasionally are larger. Mats have a polygonal or oval shape, and their erythematous color may appear uniform, but, upon closer inspection, the erythema is the result of delicate telangiectasias.
The most common locations for mat telangiectasias are the face, oral mucosa, and hands—peripheral sites that are prone to intermittent ischemia. The limited form of systemic sclerosis, often referred to as the CREST (calcinosis cutis, Raynaud’s phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia) variant (Chap. 382), is associated with a chronic course and anticentromere antibodies. Mat telangiectasias are an important clue to the diagnosis of this variant as well as the diffuse form of systemic sclerosis because they may be the only cutaneous finding.

Periungual telangiectasias are pathognomonic signs of the three major autoimmune connective tissue diseases: lupus erythematosus, systemic sclerosis, and DM. They are easily visualized by the naked eye and occur in at least two-thirds of these patients. In both DM and lupus, there is associated nailfold erythema, and in DM, the erythema is often accompanied by “ragged” cuticles and fingertip tenderness. Under 10× magnification, the blood vessels in the nailfolds of lupus patients are tortuous and resemble “glomeruli,” whereas in systemic sclerosis and DM, there is a loss of capillary loops and those that remain are markedly dilated.

In hereditary hemorrhagic telangiectasia (Osler-Rendu-Weber disease), the lesions usually appear during adolescence (mucosal) and adulthood (cutaneous) and are most commonly seen on the mucous membranes (nasal, orolabial), face, and distal extremities, including under the nails. They represent arteriovenous (AV) malformations of the dermal microvasculature, are dark red in color, and are usually slightly elevated. When the skin is stretched over an individual lesion, an eccentric punctum with radiating legs is seen. Although the degree of systemic involvement varies in this autosomal dominant disease (due primarily to mutations in either the endoglin or activin receptor–like kinase gene), the major symptoms are recurrent epistaxis and gastrointestinal bleeding.
The fact that these mucosal telangiectasias are actually AV communications helps to explain their tendency to bleed.

HYPOPIGMENTATION

(Table 72-9) Disorders of hypopigmentation are often classified as either diffuse or localized. The classic example of diffuse hypopigmentation is oculocutaneous albinism (OCA). The most common forms are due to mutations in the tyrosinase gene (type I) or the P gene (type II); patients with type IA OCA have a total lack of enzyme activity. At birth, different forms of OCA can appear similar—white hair, gray-blue eyes, and pink-white skin. However, the patients with no tyrosinase activity maintain this phenotype, whereas those with decreased activity will acquire some pigmentation of the eyes, hair, and skin as they age. The degree of pigment formation is also a function of racial background, and the pigmentary dilution is more readily apparent when patients are compared to their first-degree relatives.

TABLE 72-9 CAUSES OF HYPOPIGMENTATION
I. Primary cutaneous disorders
   A. Diffuse
      1. Generalized vitiligo(a)
   B. Localized
      1. Idiopathic guttate hypomelanosis
      2. Postinflammatory hypopigmentation
      3. Tinea (pityriasis) versicolor
      4. Vitiligo(a)
      5. Chemical- or drug-induced leukoderma
      6. Nevus depigmentosus
      7. Piebaldism
II. Systemic diseases
   A. Diffuse
      1. Oculocutaneous albinism(b)
      2. Hermansky-Pudlak syndrome(b,c)
      3. Chédiak-Higashi syndrome(b,d)
      4.
   B. Localized
      1. Vogt-Koyanagi-Harada syndrome
      2. Melanoma-associated leukoderma, spontaneous or immunotherapy-induced
      3. Systemic sclerosis (scleroderma)
      4. Onchocerciasis
      5.
      6.
      7.
      8. Linear nevoid hypopigmentation (hypomelanosis of Ito)(e)
      9.
      10.
      11.
(a) Absence of melanocytes in areas of leukoderma.
(b) Normal number of melanocytes.
(c) Platelet storage defect and restrictive lung disease secondary to deposits of ceroid-like material or immunodeficiency; due to mutations in β subunit of adaptor protein 3 as well as subunits of biogenesis of lysosome-related organelles complex (BLOC)-1, -2, and -3.
(d) Giant lysosomal granules and recurrent infections.
(e) Minority of patients in a nonreferral setting have systemic abnormalities (musculoskeletal, central nervous system, ocular).

The ocular findings in OCA correlate with the degree of hypopigmentation and
include decreased visual acuity, nystagmus, photophobia, strabismus, and a lack of normal binocular vision. The differential diagnosis of localized hypomelanosis includes the following primary cutaneous disorders: idiopathic guttate hypomelanosis, postinflammatory hypopigmentation, tinea (pityriasis) versicolor, vitiligo, chemical- or drug-induced leukoderma, nevus depigmentosus (see below), and piebaldism (Table 72-10). In this group of diseases, the areas of involvement are macules or patches with a decrease or absence of pigmentation. Patients with vitiligo also have an increased incidence of several autoimmune disorders, including Hashimoto’s thyroiditis, Graves’ disease, pernicious anemia, Addison’s disease, uveitis, alopecia areata, chronic mucocutaneous candidiasis, and the autoimmune polyendocrine syndromes (types I and II). Diseases of the thyroid gland are the most frequently associated disorders, occurring in up to 30% of patients with vitiligo. Circulating autoantibodies are often found, and the most common ones are antithyroglobulin, antimicrosomal, and antithyroid-stimulating hormone receptor antibodies. There are four systemic diseases that should be considered in a patient with skin findings suggestive of vitiligo—Vogt-Koyanagi-Harada syndrome, systemic sclerosis, onchocerciasis, and melanoma-associated leukoderma. A history of aseptic meningitis, nontraumatic uveitis, tinnitus, hearing loss, and/or dysacousia points to the diagnosis of the Vogt-Koyanagi-Harada syndrome. In these patients, the face and scalp are the most common locations of pigment loss. 
CHAPTER 72 Skin Manifestations of Internal Disease
PART 2 Cardinal Manifestations and Presentation of Diseases

TABLE 72-10 Hypopigmentation (Primary Cutaneous Disorders, Localized)
- Idiopathic guttate hypomelanosis—possible somatic mutations as a reflection of aging or UV exposure; no treatment.
- Postinflammatory hypopigmentation—can develop within active lesions, as in subacute cutaneous lupus, or after the lesion fades, as in atopic dermatitis; less enhancement with Wood’s lamp than vitiligo; abrupt decrease in epidermal melanin content, with the type of inflammatory infiltrate depending on the specific disease; due to a block in the transfer of melanin from melanocytes to keratinocytes, possibly secondary to edema or a decrease in contact time, or to destruction of melanocytes if inflammatory cells attack the basal layer of the epidermis.
- Tinea (pityriasis) versicolor—upper trunk and neck (shawl-like distribution), groin; invasion of the stratum corneum by the yeast, which is lipophilic and produces C9 and C11 dicarboxylic acids that inhibit tyrosinase in vitro; treated with selenium sulfide 2.5%, topical imidazoles, or oral triazoles.
- Vitiligo—symmetric areas of complete pigment loss: periorificial (around the mouth, nose, eyes, nipples, umbilicus, and anus) and other areas (flexor wrists, extensor distal extremities); the segmental form is less common (unilateral, dermatomal-like); absence of melanocytes; an autoimmune phenomenon that results in destruction of melanocytes—primarily cellular (circulating skin-homing autoreactive T cells); treated with topical glucocorticoids, topical calcineurin inhibitors, NBUV-B, PUVA, transplants (if stable), or depigmentation with topical MBEH (if widespread).
- Chemical- or drug-induced leukoderma—similar appearance to vitiligo; often begins on the hands when associated with chemical exposure, with satellite lesions in areas not exposed to chemicals; decreased number or absence of melanocytes; caused by exposure to chemicals that selectively destroy melanocytes, in particular phenols and catechols (germicides; adhesives), or by ingestion of drugs such as imatinib (possible inhibition of the KIT receptor); release of cellular antigens and activation of circulating lymphocytes may explain the satellite phenomenon; avoid exposure to the offending agent, then treat as vitiligo; the drug-induced variant may undergo repigmentation when the medication is discontinued.
- Piebaldism—congenital and stable; areas of amelanosis contain normally pigmented and hyperpigmented macules of various sizes, with symmetric involvement of the central forehead, ventral trunk, and mid regions of the upper and lower extremities; Wood’s lamp enhances both the leukoderma and the hyperpigmented macules; amelanotic areas contain few to no melanocytes; due to a defect in migration of melanoblasts from the neural crest to involved skin or failure of melanoblasts to survive or differentiate in these areas, caused by mutations within the c-kit protooncogene, which encodes the tyrosine kinase receptor for stem cell growth factor (kit ligand).
Abbreviations: MBEH, monobenzylether of hydroquinone; NBUV-B, narrow band ultraviolet B; PUVA, psoralens + ultraviolet A irradiation.

The vitiligo-like leukoderma seen in patients with systemic sclerosis has a clinical resemblance to idiopathic vitiligo that has begun to repigment as a result of treatment; that is, perifollicular macules of normal pigmentation are seen within areas of depigmentation. The basis of this leukoderma is unknown; there is no evidence of inflammation in areas of involvement, but it can resolve if the underlying connective tissue disease becomes inactive. In contrast to idiopathic vitiligo, melanoma-associated leukoderma often begins on the trunk, and its appearance, if spontaneous, should prompt a search for metastatic disease. It is also seen in patients undergoing immunotherapy for melanoma, including ipilimumab, with cytotoxic T lymphocytes presumably recognizing cell surface antigens common to melanoma cells and melanocytes, and is associated with a greater likelihood of a clinical response. Two systemic disorders (neurocristopathies) may have the cutaneous findings of piebaldism (Table 72-9): Shah-Waardenburg syndrome and Waardenburg syndrome. 
A possible explanation for both disorders is an abnormal embryonic migration or survival of two neural crest–derived elements, one of them being melanocytes and the other myenteric ganglion cells (leading to Hirschsprung disease in Shah-Waardenburg syndrome) or auditory nerve cells (Waardenburg syndrome). The latter syndrome is characterized by congenital sensorineural hearing loss, dystopia canthorum (lateral displacement of the inner canthi but normal interpupillary distance), heterochromic irises, and a broad nasal root, in addition to the piebaldism. The facial dysmorphism can be explained by the neural crest origin of the connective tissues of the head and neck. Patients with Waardenburg syndrome have been shown to have mutations in four genes, including PAX-3 and MITF, all of which encode DNA-binding proteins, whereas patients with Hirschsprung disease plus white spotting have mutations in one of three genes—endothelin 3, endothelin B receptor, and SOX-10. In tuberous sclerosis, the earliest cutaneous sign is macular hypomelanosis, referred to as an ash leaf spot. These lesions are often present at birth and are usually multiple; however, detection may require Wood’s lamp examination, especially in fair-skinned individuals. The pigment within them is reduced, but not absent. The average size is 1–3 cm, and the common shapes are polygonal and lance-ovate. Examination of the patient for additional cutaneous signs such as multiple angiofibromas of the face (adenoma sebaceum), ungual and gingival fibromas, fibrous plaques of the forehead, and connective tissue nevi (shagreen patches) is recommended. It is important to remember that an ash leaf spot on the scalp will result in a circumscribed patch of lightly pigmented hair. Internal manifestations include seizures, mental retardation, central nervous system (CNS) and retinal hamartomas, pulmonary lymphangioleiomyomatosis (women), renal angiomyolipomas, and cardiac rhabdomyomas. 
The latter can be detected in up to 60% of children (<18 years) with tuberous sclerosis by echocardiography. Nevus depigmentosus is a stable, well-circumscribed hypomelanosis that is present at birth. There is usually a single oval or rectangular lesion, but when there are multiple lesions, the possibility of tuberous sclerosis needs to be considered. In linear nevoid hypopigmentation, a term that is replacing hypomelanosis of Ito and segmental or systematized nevus depigmentosus, streaks and swirls of hypopigmentation are observed. Up to a third of patients in a tertiary care setting had associated abnormalities involving the musculoskeletal system (asymmetry), the CNS (seizures and mental retardation), and the eyes (strabismus and hypertelorism). Chromosomal mosaicism has been detected in these patients, lending support to the hypothesis that the cutaneous pattern is the result of the migration of two clones of primordial melanocytes, each with a different pigment potential. Localized areas of decreased pigmentation are commonly seen as a result of cutaneous inflammation (Table 72-10) and have been observed in the skin overlying active lesions of sarcoidosis (see “Papulonodular Skin Lesions,” below) as well as in CTCL. Cutaneous infections also present as disorders of hypopigmentation, and in tuberculoid leprosy, there are a few asymmetric patches of hypomelanosis that have associated anesthesia, anhidrosis, and alopecia. Biopsy specimens of the palpable border show dermal granulomas that contain rare, if any, Mycobacterium leprae organisms. 

Disorders of hyperpigmentation (Table 72-11) are also divided into two groups—localized and diffuse. The localized forms are due to an epidermal alteration, a proliferation of melanocytes, or an increase in pigment production. Both seborrheic keratoses and acanthosis nigricans belong to the first group. 
Seborrheic keratoses are common lesions, but in one rare clinical setting they are a sign of systemic disease: the sudden appearance of multiple lesions, often with an inflammatory base and in association with acrochordons (skin tags) and acanthosis nigricans. This is termed the sign of Leser-Trélat and alerts the clinician to search for an internal malignancy. Acanthosis nigricans can also be a reflection of an internal malignancy, most commonly of the gastrointestinal tract, and it appears as velvety hyperpigmentation, primarily in flexural areas. However, in the majority of patients, acanthosis nigricans is associated with obesity and insulin resistance, although it may be a reflection of an endocrinopathy such as acromegaly, Cushing’s syndrome, polycystic ovary syndrome, or insulin-resistant diabetes mellitus (type A, type B, and lipodystrophic forms). A proliferation of melanocytes results in the following pigmented lesions: lentigo, melanocytic nevus, and melanoma (Chap. 105). In an adult, the majority of lentigines are related to sun exposure, which explains their distribution. However, in the Peutz-Jeghers and LEOPARD (lentigines; ECG abnormalities, primarily conduction defects; ocular hypertelorism; pulmonary stenosis and subaortic valvular stenosis; abnormal genitalia [cryptorchidism, hypospadias]; retardation of growth; and deafness [sensorineural]) syndromes, lentigines do serve as a clue to systemic disease. In LEOPARD syndrome, hundreds of lentigines develop during childhood and are scattered over the entire surface of the body. The lentigines in patients with Peutz-Jeghers syndrome are located primarily around the nose and mouth, on the hands and feet, and within the oral cavity. While the pigmented macules on the face may fade with age, the oral lesions persist. 
However, similar intraoral lesions are also seen in Addison’s disease, in Laugier-Hunziker syndrome (no internal manifestations), and as a normal finding in darkly pigmented individuals. Patients with Peutz-Jeghers syndrome, an autosomal dominant disorder due to mutations in a novel serine threonine kinase gene, have multiple benign polyps of the gastrointestinal tract, testicular or ovarian tumors, and an increased risk of developing gastrointestinal (primarily colon) and pancreatic cancers. In the Carney complex, numerous lentigines are also seen, but they are in association with cardiac myxomas. This autosomal dominant disorder is also known as the LAMB (lentigines, atrial myxomas, mucocutaneous myxomas, and blue nevi) syndrome or NAME (nevi, atrial myxoma, myxoid neurofibroma, and ephelides [freckles]) syndrome. These patients can also have evidence of endocrine overactivity in the form of Cushing’s syndrome (pigmented nodular adrenocortical disease) and acromegaly. The third type of localized hyperpigmentation is due to a local increase in pigment production, and it includes ephelides and café au lait macules (CALMs). While a single CALM can be seen in up to 10% of the normal population, the presence of multiple or large-sized CALMs raises the possibility of an associated genodermatosis, e.g., neurofibromatosis (NF) or McCune-Albright syndrome. CALMs are flat, uniformly brown in color (usually two shades darker than uninvolved skin), and can vary in size from 0.5 to 12 cm. Approximately 80–90% of adult patients with type I NF will have six or more CALMs measuring ≥1.5 cm in diameter. Additional findings are discussed in the section on neurofibromas (see “Papulonodular Skin Lesions,” below). In comparison with NF, the CALMs in patients with McCune-Albright syndrome (polyostotic fibrous dysplasia with precocious puberty in females due to mosaicism for an activating mutation in a G protein [Gsα] gene) are usually larger, are more irregular in outline, and tend to respect the midline. 
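The count-and-size threshold just cited (six or more CALMs measuring ≥1.5 cm in diameter in an adult with type I NF) is a simple numeric rule. As a minimal illustrative sketch—the function name and input format are hypothetical, and actual NF diagnosis rests on additional clinical criteria:

```python
def meets_adult_nf1_calm_threshold(calm_diameters_cm):
    """Return True when a list of cafe au lait macule (CALM) diameters,
    in centimeters, contains six or more lesions measuring >=1.5 cm --
    the finding present in ~80-90% of adults with type I NF.
    (Hypothetical helper for illustration only.)"""
    return sum(1 for d in calm_diameters_cm if d >= 1.5) >= 6
```

A lesion count alone is never diagnostic; the sketch merely encodes the published threshold for one cutaneous finding.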
In incontinentia pigmenti, dyskeratosis congenita, and bleomycin pigmentation, the areas of localized hyperpigmentation form a pattern—swirled in the first, reticulated in the second, and flagellate in the third. In dyskeratosis congenita, atrophic reticulated hyperpigmentation is seen on the neck, trunk, and thighs and is accompanied by nail dystrophy, pancytopenia, and leukoplakia of the oral and anal mucosae. The latter often develops into squamous cell carcinoma. 

TABLE 72-11 Causes of Hyperpigmentation
I. Primary cutaneous disorders: A. Localized—epidermal alteration, proliferation of melanocytes, and increased pigment production. B. Diffuse—drugs (e.g., minocycline, hydroxychloroquine, bleomycin). II. Systemic diseases: A. Localized—epidermal alteration (seborrheic keratoses [sign of Leser-Trélat] and acanthosis nigricans [insulin resistance, other endocrine disorders, paraneoplastic]), proliferation of melanocytes, increased pigment production (café au lait macules [neurofibromatosis, McCune-Albright syndrome (polyostotic fibrous dysplasia)]), and dermal pigmentation. B. Diffuse—endocrinopathies; metabolic causes, including vitamin B12 or folate deficiency and malabsorption (including Whipple’s disease); melanosis secondary to metastatic melanoma; autoimmune diseases; and drugs and metals (e.g., arsenic).
Abbreviations: LAMB, lentigines, atrial myxomas, mucocutaneous myxomas, and blue nevi; LEOPARD, lentigines, ECG abnormalities, ocular hypertelorism, pulmonary stenosis and subaortic valvular stenosis, abnormal genitalia, retardation of growth, and deafness (sensorineural); NAME, nevi, atrial myxoma, myxoid neurofibroma, and ephelides (freckles); POEMS, polyneuropathy, organomegaly, endocrinopathies, M-protein, and skin changes.
In addition to the flagellate pigmentation (linear streaks) on the trunk, patients receiving bleomycin often have hyperpigmentation overlying the elbows, knees, and small joints of the hand. Localized hyperpigmentation is seen as a side effect of several other systemic medications, including those that produce fixed drug reactions (nonsteroidal anti-inflammatory drugs [NSAIDs], sulfonamides, barbiturates, and tetracyclines) and those that can complex with melanin (antimalarials) or iron (minocycline). Fixed drug eruptions recur in the exact same location as circular areas of erythema that can become bullous and then resolve as brown macules. The eruption usually appears within hours of administration of the offending agent, and common locations include the genitalia, distal extremities, and perioral region. Chloroquine and hydroxychloroquine produce gray-brown to blue-black discoloration of the shins, hard palate, and face, while blue macules (often misdiagnosed as bruises) can be seen on the lower extremities and in sites of inflammation with prolonged minocycline administration. Estrogen in oral contraceptives can induce melasma—symmetric brown patches on the face, especially the cheeks, upper lip, and forehead. Similar changes are seen in pregnancy and in patients receiving phenytoin. In the diffuse forms of hyperpigmentation, the darkening of the skin may be of equal intensity over the entire body or may be accentuated in sun-exposed areas. The causes of diffuse hyperpigmentation can be divided into four major groups—endocrine, metabolic, autoimmune, and drugs. The endocrinopathies that frequently have associated hyperpigmentation include Addison’s disease, Nelson syndrome, and ectopic ACTH syndrome. In these diseases, the increased pigmentation is diffuse but is accentuated in sun-exposed areas, the palmar creases, sites of friction, and scars. 
An overproduction of the pituitary hormones α-MSH (melanocyte-stimulating hormone) and ACTH can lead to an increase in melanocyte activity. These peptides are products of the proopiomelanocortin gene and exhibit homology; e.g., α-MSH and ACTH share 13 amino acids. A minority of patients with Cushing’s disease or hyperthyroidism have generalized hyperpigmentation. The metabolic causes of hyperpigmentation include porphyria cutanea tarda (PCT), hemochromatosis, vitamin B12 deficiency, folic acid deficiency, pellagra, and malabsorption, including Whipple’s disease. In patients with PCT (see “Vesicles/Bullae,” below), the skin darkening is seen in sun-exposed areas and is a reflection of the photoreactive properties of porphyrins. The increased level of iron in the skin of patients with type 1 hemochromatosis stimulates melanin pigment production and leads to the classic bronze color. Patients with pellagra have a brown discoloration of the skin, especially in sun-exposed areas, as a result of nicotinic acid (niacin) deficiency. In the areas of increased pigmentation, there is a thin, varnish-like scale. These changes are also seen in patients who are vitamin B6 deficient, have functioning carcinoid tumors (increased consumption of niacin), or take isoniazid. Approximately 50% of the patients with Whipple’s disease have an associated generalized hyperpigmentation in association with diarrhea, weight loss, arthritis, and lymphadenopathy. A diffuse, slate-blue to gray-brown color is seen in patients with melanosis secondary to metastatic melanoma. The color reflects widespread deposition of melanin within the dermis as a result of the high concentration of circulating melanin precursors. Of the autoimmune diseases associated with diffuse hyperpigmentation, biliary cirrhosis and systemic sclerosis are the most common, and occasionally, both disorders are seen in the same patient. The skin is dark brown in color, especially in sun-exposed areas. 
In biliary cirrhosis, the hyperpigmentation is accompanied by pruritus, jaundice, and xanthomas, whereas in systemic sclerosis, it is accompanied by sclerosis of the extremities, face, and, less commonly, the trunk. Additional clues to the diagnosis of systemic sclerosis are mat and periungual telangiectasias, calcinosis cutis, Raynaud’s phenomenon, and distal ulcerations (see “Telangiectasias,” above). The differential diagnosis of cutaneous sclerosis with hyperpigmentation includes the POEMS (polyneuropathy; organomegaly [liver, spleen, lymph nodes]; endocrinopathies [impotence, gynecomastia]; M-protein; and skin changes) syndrome. The skin changes include hyperpigmentation, induration, hypertrichosis, angiomas, clubbing, and facial lipoatrophy. Diffuse hyperpigmentation that is due to drugs or metals can result from one of several mechanisms—induction of melanin pigment formation, complexing of the drug or its metabolites to melanin, and deposits of the drug in the dermis. Busulfan, cyclophosphamide, 5-fluorouracil, and inorganic arsenic induce pigment production. Complexes containing melanin or iron plus the drug or its metabolites are seen in patients receiving minocycline, and a diffuse, blue-gray, muddy appearance within sun-exposed areas may develop, in addition to pigmentation of the mucous membranes, teeth, nails, bones, and thyroid. Administration of amiodarone can result in a phototoxic eruption (exaggerated sunburn) and/or a slate-gray to violaceous discoloration of sun-exposed skin. Biopsy specimens of the latter show yellow-brown granules in dermal macrophages, which represent intralysosomal accumulations of lipids, amiodarone, and its metabolites. Actual deposits of a particular drug or metal in the skin are seen with silver (argyria), where the skin appears blue-gray in color; gold (chrysiasis), where the skin has a brown to blue-gray color; and clofazimine, where the skin appears reddish brown. 
The associated pigmentation is accentuated in sun-exposed areas, and discoloration of the eye is seen with gold (sclerae) and clofazimine (conjunctivae). 

Depending on their size, cutaneous blisters (Table 72-12) are referred to as vesicles (<1 cm) or bullae (>1 cm). The primary autoimmune blistering disorders include pemphigus vulgaris, pemphigus foliaceus, paraneoplastic pemphigus, bullous pemphigoid, gestational pemphigoid, cicatricial pemphigoid, epidermolysis bullosa acquisita, linear IgA bullous dermatosis (LABD), and dermatitis herpetiformis (Chap. 73). Vesicles and bullae are also seen in contact dermatitis, both allergic and irritant forms (Chap. 71). When there is a linear arrangement of vesicular lesions, an exogenous cause or herpes zoster should be suspected. Bullous disease secondary to the ingestion of drugs can take one of several forms, including phototoxic eruptions, isolated bullae, Stevens-Johnson syndrome (SJS), and toxic epidermal necrolysis (TEN) (Chap. 74). Clinically, phototoxic eruptions resemble an exaggerated sunburn with diffuse erythema and bullae in sun-exposed areas. The most commonly associated drugs are doxycycline, quinolones, thiazides, NSAIDs, voriconazole, and psoralens. The development of a phototoxic eruption is dependent on the doses of both the drug and ultraviolet (UV)-A irradiation. Toxic epidermal necrolysis is characterized by bullae that arise on widespread areas of tender erythema and then slough. This results in large areas of denuded skin. The associated morbidity, such as sepsis, and mortality rates are relatively high and are a function of the extent of epidermal necrosis. In addition, these patients may also have involvement of the mucous membranes and respiratory and intestinal tracts. Drugs are the primary cause of TEN, and the most common offenders are aromatic anticonvulsants (phenytoin, barbiturates, carbamazepine), sulfonamides, aminopenicillins, allopurinol, and NSAIDs. 
Severe acute graft-versus-host disease (grade 4), vancomycin-induced LABD, and the acute syndrome of apoptotic panepidermolysis (ASAP) in patients with lupus can also resemble TEN. In erythema multiforme (EM), the primary lesions are pink-red macules and edematous papules, the centers of which may become vesicular. In contrast to a morbilliform exanthem, the clue to the diagnosis of EM, and especially SJS, is the development of a “dusky” violet color in the center of the lesions. Target lesions are also characteristic of EM and arise as a result of active centers and borders in combination with centrifugal spread. However, target lesions need not be present to make the diagnosis of EM. EM has been subdivided into two major groups: (1) EM minor due to herpes simplex virus (HSV) and (2) EM major due to HSV; Mycoplasma pneumoniae; or, occasionally, drugs. Involvement of the mucous membranes (ocular, nasal, oral, and genital) is seen more commonly in the latter form. 

TABLE 72-12 Causes of Vesicles/Bullae
I. Primary mucocutaneous diseases: A. Primary blistering diseases (autoimmune)—pemphigus, foliaceus and vulgaris (intraepidermal); dermatitis herpetiformis (subepidermal; associated with gluten enteropathy); and epidermolysis bullosa acquisita (subepidermal; associated with inflammatory bowel disease). B. Secondary blistering diseases—contact dermatitis (intraepidermal and subepidermal). C. Infections—varicella-zoster virus (intraepidermal; also systemic), herpes simplex virus (intraepidermal; also systemic), enteroviruses, e.g., hand-foot-and-mouth disease (also systemic), and staphylococcal scalded-skin syndrome (intraepidermal; in adults, associated with renal failure and an immunocompromised state). II. Systemic diseases: A. Autoimmune—paraneoplastic pemphigus (intraepidermal). B. Infections—cutaneous emboli (subepidermal). C. Metabolic—diabetic bullae (intraepidermal and subepidermal) and bullous dermatosis of hemodialysis (subepidermal). D. Ischemia—coma bullae. (Degeneration of cells within the basal layer of the epidermis can give the impression that the split is subepidermal.)
Hemorrhagic crusts of the lips are characteristic of EM major and SJS as well as herpes simplex, pemphigus vulgaris, and paraneoplastic pemphigus. Fever, malaise, myalgias, sore throat, and cough may precede or accompany the eruption. The lesions of EM usually resolve over 2–4 weeks but may be recurrent, especially when due to HSV. In addition to HSV (in which lesions usually appear 7–12 days after the viral eruption), EM can also follow vaccinations, radiation therapy, and exposure to environmental toxins, including the oleoresin in poison ivy. Induction of SJS is most often due to drugs, especially sulfonamides, phenytoin, barbiturates, lamotrigine, aminopenicillins, nonnucleoside reverse transcriptase inhibitors (e.g., nevirapine), and carbamazepine. Widespread dusky macules and significant mucosal involvement are characteristic of SJS, and the cutaneous lesions may or may not develop epidermal detachment. If the latter occurs, by definition, it is limited to <10% of the body surface area (BSA). Greater involvement leads to the diagnosis of SJS/TEN overlap (10–30% BSA) or TEN (>30% BSA). In addition to primary blistering disorders and hypersensitivity reactions, bacterial and viral infections can lead to vesicles and bullae. The most common infectious agents are HSV (Chap. 216), varicella-zoster virus (Chap. 217), and S. aureus (Chap. 172). Staphylococcal scalded-skin syndrome (SSSS) and bullous impetigo are two blistering disorders associated with staphylococcal (phage group II) infection. In SSSS, the initial findings are redness and tenderness of the central face, neck, trunk, and intertriginous zones. This is followed by short-lived flaccid bullae and a slough or exfoliation of the superficial epidermis. Crusted areas then develop, characteristically around the mouth in a radial pattern. 
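The size and body-surface-area cutoffs in this section (vesicle <1 cm vs. bulla >1 cm; SJS <10% BSA, SJS/TEN overlap 10–30%, TEN >30%) are simple numeric thresholds. A minimal sketch, with function names that are illustrative rather than taken from the text:

```python
def classify_blister(diameter_cm):
    """Cutaneous blisters by the size cutoff in the text:
    vesicle (<1 cm) or bulla (>1 cm). A lesion of exactly 1 cm falls
    outside the strict definitions; it is grouped with bullae here."""
    return "vesicle" if diameter_cm < 1 else "bulla"

def classify_epidermal_detachment(bsa_percent):
    """Apply the body-surface-area (BSA) cutoffs for epidermal
    detachment given in the text: SJS <10%, SJS/TEN overlap 10-30%,
    TEN >30%."""
    if bsa_percent < 10:
        return "SJS"
    if bsa_percent <= 30:
        return "SJS/TEN overlap"
    return "TEN"
```

As with any such sketch, borderline measurements call for clinical judgment rather than a hard cutoff.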
SSSS is distinguished from TEN by the following features: younger age group (primarily infants), more superficial site of blister formation, no oral lesions, shorter course, lower morbidity and mortality rates, and an association with staphylococcal exfoliative toxin (“exfoliatin”), not drugs. A rapid diagnosis of SSSS versus TEN can be made by a frozen section of the blister roof or exfoliative cytology of the blister contents. In SSSS, the site of staphylococcal infection is usually extracutaneous (conjunctivitis, rhinorrhea, otitis media, pharyngitis, tonsillitis), and the cutaneous lesions are sterile, whereas in bullous impetigo, the skin lesions are the site of infection. Impetigo is more localized than SSSS and usually presents with honey-colored crusts. Occasionally, superficial purulent blisters also form. Cutaneous emboli from gram-negative infections may present as isolated bullae, but the base of the lesion is purpuric or necrotic, and it may develop into an ulcer (see “Purpura,” below). Several metabolic disorders are associated with blister formation, including diabetes mellitus, renal failure, and porphyria. Local hypoxemia secondary to decreased cutaneous blood flow can also produce blisters, which explains the presence of bullae over pressure points in comatose patients (coma bullae). In diabetes mellitus, tense bullae with clear sterile viscous fluid arise on normal skin. The lesions can be as large as 6 cm in diameter and are located on the distal extremities. There are several types of porphyria, but the most common form with cutaneous findings is porphyria cutanea tarda (PCT). In sun-exposed areas (primarily the face and hands), the skin is very fragile, with trauma leading to erosions mixed with tense vesicles. These lesions then heal with scarring and formation of milia; the latter are firm, 1- to 2-mm white or yellow papules that represent epidermoid inclusion cysts. 
Associated findings can include hypertrichosis of the lateral malar region (men) or face (women) and, in sun-exposed areas, hyperpigmentation and firm sclerotic plaques. An elevated level of urinary uroporphyrins confirms the diagnosis and is due to a decrease in uroporphyrinogen decarboxylase activity. PCT can be exacerbated by alcohol, hemochromatosis and other forms of iron overload, chlorinated hydrocarbons, hepatitis C and HIV infections, and hepatomas. The differential diagnosis of PCT includes (1) porphyria variegata—the skin signs of PCT plus the systemic findings of acute intermittent porphyria; it has a diagnostic plasma porphyrin fluorescence emission at 626 nm; (2) drug-induced pseudoporphyria—the clinical and histologic findings are similar to PCT, but porphyrins are normal; etiologic agents include naproxen and other NSAIDs, furosemide, tetracycline, and voriconazole; (3) bullous dermatosis of hemodialysis—the same appearance as PCT, but porphyrins are usually normal or occasionally borderline elevated; patients have chronic renal failure and are on hemodialysis; (4) PCT associated with hepatomas and hemodialysis; and (5) epidermolysis bullosa acquisita (Chap. 73). 

Exanthems (Table 72-13) are characterized by an acute generalized eruption. The most common presentation is erythematous macules and papules (morbilliform) and, less often, confluent blanching erythema (scarlatiniform). Morbilliform eruptions are usually due to either drugs or viral infections. For example, up to 5% of patients receiving penicillins, sulfonamides, phenytoin, or nevirapine will develop a maculopapular eruption. Accompanying signs may include pruritus, fever, eosinophilia, and transient lymphadenopathy. 
Similar maculopapular eruptions are seen in the classic childhood viral exanthems, including (1) rubeola (measles)—a prodrome of coryza, cough, and conjunctivitis followed by Koplik’s spots on the buccal mucosa; the eruption begins behind the ears, at the hairline, and on the forehead and then spreads down the body, often becoming confluent; (2) rubella—the eruption begins on the forehead and face and then spreads down the body; it resolves in the same order and is associated with retroauricular and suboccipital lymphadenopathy; and (3) erythema infectiosum (fifth disease)—erythema of the cheeks is followed by a reticulated pattern on the extremities; it is secondary to a parvovirus B19 infection, and an associated arthritis is seen in adults. Both measles and rubella can occur in unvaccinated adults, and an atypical form of measles is seen in adults immunized with either killed measles vaccine or killed vaccine followed in time by live vaccine. In contrast to classic measles, the eruption of atypical measles begins on the palms, soles, wrists, and ankles, and the lesions may become purpuric. The patient with atypical measles can have pulmonary involvement and be quite ill. 

TABLE 72-13 Causes of Exanthems
I. Morbilliform: A. Drugs. B. Viral—rubeola (measles); rubella; erythema infectiosum (erythema of cheeks; reticulated on extremities); Epstein-Barr virus, echovirus, coxsackievirus, CMV, adenovirus, HHV-6/HHV-7 (primary infection in infants and reactivation in the setting of immunosuppression), dengue virus, and West Nile virus infections; and early HIV infection. C. Bacterial. D. Acute graft-versus-host disease. E. Kawasaki disease. II. Scarlatiniform—including early staphylococcal scalded-skin syndrome.
Abbreviations: CMV, cytomegalovirus; HHV, human herpesvirus; HIV, human immunodeficiency virus.
Rubelliform and roseoliform eruptions are also associated with Epstein-Barr virus (5–15% of patients), echovirus, coxsackievirus, cytomegalovirus, adenovirus, dengue virus, and West Nile virus infections. Detection of specific IgM antibodies or fourfold elevations in IgG antibodies allow the proper diagnosis, but polymerase chain reaction (PCR) is gradually replacing serologic assays. Occasionally, a maculopapular drug eruption is a reflection of an underlying viral infection. For example, ~95% of the patients with infectious mononucleosis who are given ampicillin will develop a rash. Of note, early in the course of infections with Rickettsia and meningococcus, prior to the development of petechiae and purpura, the lesions may be erythematous macules and papules. This is also the case in chickenpox prior to the development of vesicles. Maculopapular eruptions are associated with early HIV infection, early secondary syphilis, typhoid fever, and acute graft-versus-host disease. In the last, lesions frequently begin on the dorsal hands and forearms; the macular rose spots of typhoid fever involve primarily the anterior trunk. The prototypic scarlatiniform eruption is seen in scarlet fever and is due to an erythrogenic toxin produced by bacteriophage-containing group A β-hemolytic streptococci, most commonly in the setting of pharyngitis. This eruption is characterized by diffuse erythema, which begins on the neck and upper trunk, and red follicular puncta. Additional findings include a white strawberry tongue (white coating with red papillae) followed by a red strawberry tongue (red tongue with red papillae); petechiae of the palate; a facial flush with circumoral pallor; linear petechiae in the antecubital fossae; and desquamation of the involved skin, palms, and soles 5–20 days after onset of the eruption. A similar desquamation of the palms and soles is seen in toxic shock syndrome (TSS), in Kawasaki disease, and after severe febrile illnesses. 
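The serologic criterion mentioned above (a fourfold rise in IgG titer between paired sera) is simple arithmetic on reciprocal titers. A minimal sketch, with the helper name and example titers chosen for illustration rather than taken from the text:

```python
def fourfold_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """True when the convalescent-phase IgG titer has risen at least
    fourfold over the acute-phase titer.

    Titers are expressed as reciprocals of the dilution,
    e.g., 40 means a 1:40 dilution."""
    return convalescent_titer >= 4 * acute_titer

# A rise from 1:40 to 1:160 meets the criterion; 1:40 to 1:80 does not.
print(fourfold_rise(40, 160), fourfold_rise(40, 80))  # True False
```

Because titers advance in doubling dilutions, a "fourfold rise" corresponds to a jump of at least two dilution steps between the paired specimens.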
Certain strains of staphylococci also produce an erythrotoxin that leads to the same clinical findings as in streptococcal scarlet fever, except that the anti-streptolysin O or -DNase B titers are not elevated. In toxic shock syndrome, staphylococcal (phage group I) infections produce an exotoxin (TSST-1), which causes the fever and rash, as well as enterotoxins. Initially, the majority of cases were reported in menstruating women who were using tampons. However, other sites of infection, including wounds and nasal packing, can lead to TSS. The diagnosis of TSS is based on clinical criteria (Chap. 172), and three of these involve mucocutaneous sites (diffuse erythema of the skin, desquamation of the palms and soles 1–2 weeks after onset of illness, and involvement of the mucous membranes). The last is characterized by hyperemia of the vagina, oropharynx, or conjunctivae. Similar systemic findings have been described in streptococcal toxic shock syndrome (Chap. 173), and although an exanthem is seen less often than in TSS due to a staphylococcal infection, the underlying infection is often in the soft tissue (e.g., cellulitis). The cutaneous eruption in Kawasaki disease (Chap. 385) is polymorphous, but the two most common forms are morbilliform and scarlatiniform. Additional mucocutaneous findings include bilateral conjunctival injection; erythema and edema of the hands and feet followed by desquamation; and diffuse erythema of the oropharynx, red strawberry tongue, and dry fissured lips. This clinical picture can resemble TSS and scarlet fever, but clues to the diagnosis of Kawasaki disease are cervical lymphadenopathy, cheilitis, and thrombocytosis. The most serious associated systemic finding in this disease is coronary aneurysms secondary to arteritis. Scarlatiniform eruptions are also seen in the early phase of SSSS (see "Vesicles/Bullae," above), in young adults with Arcanobacterium haemolyticum infection, and as reactions to drugs.
URTICARIA

(Table 72-14) Urticaria (hives) are transient lesions that are composed of a central wheal surrounded by an erythematous halo or flare. Individual lesions are round, oval, or figurate and are often pruritic. Acute and chronic urticaria have a wide variety of allergic etiologies and reflect edema in the dermis. Urticarial lesions can also be seen in patients with mastocytosis (urticaria pigmentosa), hypo- or hyperthyroidism, and systemic-onset juvenile idiopathic arthritis (Still's disease). In both juvenile- and adult-onset Still's disease, the lesions coincide with the fever spike, are transient, and are due to dermal infiltrates of neutrophils.

TABLE 72-14 Causes of Urticaria and Angioedema
I. Primary cutaneous disorders
   A. Acute and chronic urticariaa
   B. Physical urticaria
      1. Dermatographism
      2. Solar urticaria
      3. Cold urticaria
      4. Cholinergic urticaria
   C. Angioedema (hereditary and acquired)b,c
II. Systemic diseases

aA small minority develop anaphylaxis. bAlso systemic. cAcquired angioedema can be idiopathic, associated with a lymphoproliferative disorder, or due to a drug, e.g., angiotensin-converting enzyme (ACE) inhibitors.

The common physical urticarias include dermatographism, solar urticaria, cold urticaria, and cholinergic urticaria. Patients with dermatographism exhibit linear wheals following minor pressure or scratching of the skin. It is a common disorder, affecting ~5% of the population. Solar urticaria characteristically occurs within minutes of sun exposure and is a skin sign of one systemic disease—erythropoietic protoporphyria. In addition to the urticaria, these patients have subtle pitted scarring of the nose and hands. Cold urticaria is precipitated by exposure to the cold, and therefore exposed areas are usually affected. In occasional patients, the disease is associated with abnormal circulating proteins—more commonly cryoglobulins and less commonly cryofibrinogens. Additional systemic symptoms include wheezing and syncope, thus explaining the need for these patients to avoid swimming in cold water.
Autosomal dominantly inherited cold urticaria is associated with dysfunction of cryopyrin. Cholinergic urticaria is precipitated by heat, exercise, or emotion and is characterized by small wheals with relatively large flares. It is occasionally associated with wheezing. Whereas urticarias are the result of dermal edema, subcutaneous edema leads to the clinical picture of angioedema. Sites of involvement include the eyelids, lips, tongue, larynx, and gastrointestinal tract as well as the subcutaneous tissue. Angioedema occurs alone or in combination with urticaria, including urticarial vasculitis and the physical urticarias. Both acquired and hereditary (autosomal dominant) forms of angioedema occur (Chap. 376), and in the latter, urticaria is rarely, if ever, seen. Urticarial vasculitis is an immune complex disease that may be confused with simple urticaria. In contrast to simple urticaria, individual lesions tend to last longer than 24 h and usually develop central petechiae that can be observed even after the urticarial phase has resolved. The patient may also complain of burning rather than pruritus. On biopsy, there is a leukocytoclastic vasculitis of the small dermal blood vessels. Although many cases of urticarial vasculitis are idiopathic in origin, it can be a reflection of an underlying systemic illness such as lupus erythematosus, Sjögren's syndrome, or hereditary complement deficiency. There is a spectrum of urticarial vasculitis that ranges from purely cutaneous to multisystem involvement. The most common systemic signs and symptoms are arthralgias and/or arthritis, nephritis, and crampy abdominal pain, with asthma and chronic obstructive lung disease seen less often. Hypocomplementemia occurs in one- to two-thirds of patients, even in the idiopathic cases. Urticarial vasculitis can also be seen in patients with hepatitis B and hepatitis C infections, serum sickness, and serum sickness–like illnesses (e.g., due to cefaclor, minocycline).
PAPULONODULAR SKIN LESIONS

(Table 72-15) In the papulonodular diseases, the lesions are elevated above the surface of the skin and may coalesce to form larger plaques. The location, consistency, and color of the lesions are the keys to their diagnosis; this section is organized on the basis of color. In calcinosis cutis, there are firm white to white-yellow papules with an irregular surface. When the contents are expressed, a chalky white material is seen. Dystrophic calcification is seen at sites of previous inflammation or damage to the skin. It develops in acne scars as well as on the distal extremities of patients with systemic sclerosis and in the subcutaneous tissue and intermuscular fascial planes in DM. The latter is more extensive and is more commonly seen in children. An elevated calcium phosphate product, most commonly due to secondary hyperparathyroidism in the setting of renal failure, can lead to nodules of metastatic calcinosis cutis, which tend to be subcutaneous and periarticular. These patients can also develop calcification of muscular arteries and subsequent ischemic necrosis (calciphylaxis). Osteoma cutis, in the form of small papules, most commonly occurs on the face of individuals with a history of acne vulgaris, whereas plate-like lesions occur in rare genetic syndromes (Chap. 82).

CHAPTER 72 Skin Manifestations of Internal Disease

TABLE 72-15 Papulonodular Skin Lesions, Arranged by Color
I. White
   A. Calcinosis cutis
   B. Osteoma cutis (also skin-colored or blue)
II. Skin-colored
   C. Angiofibromas (tuberous sclerosis, MEN syndrome, type 1)
   D. Neuromas (MEN syndrome, type 2b)
   F. Osteomas (arise in skull and jaw in Gardner syndrome)
   G. Primary cutaneous disorders
III. Pink/translucent
   A. Amyloidosis, primary systemic
IV. Yellow
   A. Xanthomas (hyperlipidemia)
   B. Tophi (gout)
   C. Necrobiosis lipoidica (diabetes)
   D. Pseudoxanthoma elasticum
   E. Sebaceous tumors (Muir-Torre syndrome)
V. Red
   A. Papules
   B. Papules/plaques
   C. Nodules
      2. Medium-sized vessel vasculitis (e.g., cutaneous polyarteritis nodosa)
   D. Primary cutaneous disorders
      3. Infections, e.g., streptococcal cellulitis, sporotrichosis
VI. Red-brown
VII. Blue
   A. Venous malformations (e.g., blue rubber bleb syndrome)
VIII. Violaceous
   A. Lupus pernio
   B. Lymphoma cutis
   C. Cutaneous lupus
IX. Purple

aIf multiple with childhood onset, consider Gardner syndrome. bMay have darker hue in more darkly pigmented individuals. cSee also "Hyperpigmentation." Abbreviation: MEN, multiple endocrine neoplasia.

There are several types of skin-colored lesions, including epidermoid inclusion cysts, lipomas, rheumatoid nodules, neurofibromas, angiofibromas, neuromas, and adnexal tumors such as tricholemmomas. Both epidermoid inclusion cysts and lipomas are very common mobile subcutaneous nodules—the former are rubbery and drain cheese-like material (sebum and keratin) if incised. Lipomas are firm and somewhat lobulated on palpation. When extensive facial epidermoid inclusion cysts develop during childhood or there is a family history of such lesions, the patient should be examined for other signs of Gardner syndrome, including osteomas and desmoid tumors. Rheumatoid nodules are firm 0.5- to 4-cm nodules that favor the extensor aspect of joints, especially the elbows. They are seen in ~20% of patients with rheumatoid arthritis and 6% of patients with Still's disease. Biopsies of the nodules show palisading granulomas. Similar lesions that are smaller and shorter-lived are seen in rheumatic fever. Neurofibromas (benign Schwann cell tumors) are soft papules or nodules that exhibit the "button-hole" sign; that is, they invaginate into the skin with pressure in a manner similar to a hernia. Single lesions are seen in normal individuals, but multiple neurofibromas, usually in combination with six or more CALMs measuring >1.5 cm (see "Hyperpigmentation," above), axillary freckling, and multiple Lisch nodules, are seen in von Recklinghausen's disease (NF type I) (Chap. 118). In some patients, the neurofibromas are localized and unilateral due to somatic mosaicism. Angiofibromas are firm pink to skin-colored papules that measure from 3 mm to a few centimeters in diameter.
When multiple lesions are located on the central cheeks (adenoma sebaceum), the patient has tuberous sclerosis or multiple endocrine neoplasia (MEN) syndrome, type 1. The former is an autosomal dominant disorder due to mutations in two different genes, and the associated findings are discussed in the section on ash leaf spots as well as in Chap. 118. Neuromas (benign proliferations of nerve fibers) are also firm, skin-colored papules. They are more commonly found at sites of amputation and as rudimentary supernumerary digits. However, when there are multiple neuromas on the eyelids, lips, distal tongue, and/or oral mucosa, the patient should be investigated for other signs of the MEN syndrome, type 2b. Associated findings include marfanoid habitus, protuberant lips, intestinal ganglioneuromas, and medullary thyroid carcinoma (>75% of patients; Chap. 408). Adnexal tumors are derived from pluripotent cells of the epidermis that can differentiate toward hair, sebaceous, apocrine, or eccrine glands or remain undifferentiated. Basal cell carcinomas (BCCs) are examples of adnexal tumors that have little or no evidence of differentiation. Clinically, they are translucent papules with rolled borders, telangiectasias, and central erosion. BCCs commonly arise in sun-damaged skin of the head and neck as well as the upper trunk. When a patient has multiple BCCs, especially prior to age 30, the possibility of the nevoid basal cell carcinoma syndrome should be raised. It is inherited as an autosomal dominant trait and is associated with jaw cysts, palmar and plantar pits, frontal bossing, medulloblastomas, and calcification of the falx cerebri and diaphragma sellae. Tricholemmomas are also skin-colored adnexal tumors but differentiate toward hair follicles and can have a wartlike appearance.
The presence of multiple tricholemmomas on the face and cobblestoning of the oral mucosa points to the diagnosis of Cowden disease (multiple hamartoma syndrome) due to mutations in the phosphatase and tensin homolog (PTEN) gene. Internal organ involvement (in decreasing order of frequency) includes fibrocystic disease and carcinoma of the breast, adenomas and carcinomas of the thyroid, and gastrointestinal polyposis. Keratoses of the palms, soles, and dorsal aspect of the hands are also seen. The cutaneous lesions associated with primary systemic amyloidosis are often pink in color and translucent. Common locations are the face, especially the periorbital and perioral regions, and flexural areas. On biopsy, homogeneous deposits of amyloid are seen in the dermis and in the walls of blood vessels; the latter lead to an increase in vessel wall fragility. As a result, petechiae and purpura develop in clinically normal skin as well as in lesional skin following minor trauma, hence the term pinch purpura. Amyloid deposits are also seen in the striated muscle of the tongue and result in macroglossia. Even though specific mucocutaneous lesions are present in only ~30% of the patients with primary systemic (AL) amyloidosis, the diagnosis can be made via histologic examination of abdominal subcutaneous fat, in conjunction with a serum free light chain assay. By special staining, amyloid deposits are seen around blood vessels or individual fat cells in 40–50% of patients. There are also three forms of amyloidosis that are limited to the skin and that should not be construed as cutaneous lesions of systemic amyloidosis. They are macular amyloidosis (upper back), lichen amyloidosis (usually lower extremities), and nodular amyloidosis. In macular and lichen amyloidosis, the deposits are composed of altered epidermal keratin. Early-onset macular and lichen amyloidosis have been associated with MEN syndrome, type 2a. 
Patients with multicentric reticulohistiocytosis also have pink-colored papules and nodules on the face and mucous membranes as well as on the extensor surface of the hands and forearms. They have a polyarthritis that can mimic rheumatoid arthritis clinically. On histologic examination, the papules have characteristic giant cells that are not seen in biopsies of rheumatoid nodules. Pink to skin-colored papules that are firm, 2–5 mm in diameter, and often in a linear arrangement are seen in patients with papular mucinosis. This disease is also referred to as generalized lichen myxedematosus or scleromyxedema. The latter name comes from the induration of the face and extremities that may accompany the papular eruption. Biopsy specimens of the papules show localized mucin deposition, and serum protein electrophoresis plus immunofixation electrophoresis demonstrates a monoclonal spike of IgG, usually with a λ light chain. Several systemic disorders are characterized by yellow-colored cutaneous papules or plaques—hyperlipidemia (xanthomas), gout (tophi), diabetes (necrobiosis lipoidica), pseudoxanthoma elasticum, and Muir-Torre syndrome (sebaceous tumors). Eruptive xanthomas are the most common form of xanthomas and are associated with hypertriglyceridemia (primarily hyperlipoproteinemia types I, IV, and V). Crops of yellow papules with erythematous halos occur primarily on the extensor surfaces of the extremities and the buttocks, and they spontaneously involute with a fall in serum triglycerides. Types II and III result in one or more of the following types of xanthoma: xanthelasma, tendon xanthomas, and plane xanthomas. Xanthelasma are found on the eyelids, whereas tendon xanthomas are frequently associated with the Achilles and extensor finger tendons; plane xanthomas are flat and favor the palmar creases, neck, upper trunk, and flexural folds. 
Tuberous xanthomas are frequently associated with hypertriglyceridemia, but they are also seen in patients with hypercholesterolemia and are found most frequently over the large joints or hand. Biopsy specimens of xanthomas show collections of lipid-containing macrophages (foam cells). Patients with several disorders, including biliary cirrhosis, can have a secondary form of hyperlipidemia with associated tuberous and plane xanthomas. However, patients with plasma cell dyscrasias have normolipemic plane xanthomas. This latter form of xanthoma may be ≥12 cm in diameter and is most frequently seen on the upper trunk or side of the neck. It is important to note that the most common setting for eruptive xanthomas is uncontrolled diabetes mellitus. The least specific sign for hyperlipidemia is xanthelasma, because at least 50% of the patients with this finding have normal lipid profiles. In tophaceous gout, there are deposits of monosodium urate in the skin around the joints, particularly those of the hands and feet. Additional sites of tophi formation include the helix of the ear and the olecranon and prepatellar bursae. The lesions are firm, yellow in color, and occasionally discharge a chalky material. Their size varies from 1 mm to 7 cm, and the diagnosis can be established by polarized light microscopy of the aspirated contents of a lesion. Lesions of necrobiosis lipoidica are found primarily on the shins (90%), and patients can have diabetes mellitus or develop it subsequently. Characteristic findings include a central yellow color, atrophy (transparency), telangiectasias, and a red to red-brown border. Ulcerations can also develop within the plaques. Biopsy specimens show necrobiosis of collagen and granulomatous inflammation. In pseudoxanthoma elasticum (PXE), due to mutations in the gene ABCC6, there is an abnormal deposition of calcium on the elastic fibers of the skin, eye, and blood vessels.
In the skin, the flexural areas such as the neck, axillae, antecubital fossae, and inguinal area are the primary sites of involvement. Yellow papules coalesce to form reticulated plaques that have an appearance similar to that of plucked chicken skin. In severely affected skin, hanging, redundant folds develop. Biopsy specimens of involved skin show swollen and irregularly clumped elastic fibers with deposits of calcium. In the eye, the calcium deposits in Bruch’s membrane lead to angioid streaks and choroiditis; in the arteries of the heart, kidney, gastrointestinal tract, and extremities, the deposits lead to angina, hypertension, gastrointestinal bleeding, and claudication, respectively. Adnexal tumors that have differentiated toward sebaceous glands include sebaceous adenoma, sebaceous carcinoma, and sebaceous hyperplasia. Except for sebaceous hyperplasia, which is commonly seen on the face, these tumors are fairly rare. Patients with Muir-Torre syndrome have one or more sebaceous adenoma(s), and they can also have sebaceous carcinomas and sebaceous hyperplasia as well as keratoacanthomas. The internal manifestations of Muir-Torre syndrome include multiple carcinomas of the gastrointestinal tract (primarily colon) as well as cancers of the larynx, genitourinary tract, and breast. Cutaneous lesions that are red in color have a wide variety of etiologies; in an attempt to simplify their identification, they will be subdivided into papules, papules/plaques, and subcutaneous nodules. Common red papules include arthropod bites and cherry hemangiomas; the latter are small, bright-red, dome-shaped papules that represent a benign proliferation of capillaries. In patients with AIDS (Chap. 226), the development of multiple red hemangioma-like lesions points to bacillary angiomatosis, and biopsy specimens show clusters of bacilli that stain positive with the Warthin-Starry stain; the pathogens have been identified as Bartonella henselae and Bartonella quintana. 
Disseminated visceral disease is seen primarily in immunocompromised hosts but can occur in immunocompetent individuals. Multiple angiokeratomas are seen in Fabry disease, an X-linked recessive lysosomal storage disease that is due to a deficiency of α-galactosidase A. The lesions are red to red-blue in color and can be quite small in size (1–3 mm), with the most common location being the lower trunk. Associated findings include chronic renal disease, peripheral neuropathy, and corneal opacities (cornea verticillata). Electron photomicrographs of angiokeratomas and clinically normal skin demonstrate lamellar lipid deposits in fibroblasts, pericytes, and endothelial cells that are diagnostic of this disease. Widespread acute eruptions of erythematous papules are discussed in the section on exanthems. There are several infectious diseases that present as erythematous papules or nodules in a lymphocutaneous or sporotrichoid pattern, i.e., in a linear arrangement along the lymphatic channels. The two most common etiologies are Sporothrix schenckii (sporotrichosis) and the atypical mycobacterium Mycobacterium marinum. The organisms are introduced as a result of trauma, and a primary inoculation site is often seen in addition to the lymphatic nodules. Additional causes include Nocardia, Leishmania, and other atypical mycobacteria and dimorphic fungi; culture of lesional tissue will aid in the diagnosis. The diseases that are characterized by erythematous plaques with scale are reviewed in the papulosquamous section, and the various forms of dermatitis are discussed in the section on erythroderma. Additional disorders in the differential diagnosis of red papules/plaques include cellulitis, polymorphous light eruption (PMLE), cutaneous lymphoid hyperplasia (lymphocytoma cutis), cutaneous lupus, lymphoma cutis, and leukemia cutis.
The first three diseases represent primary cutaneous disorders, although cellulitis may be accompanied by a bacteremia. PMLE is characterized by erythematous papules and plaques in a primarily sun-exposed distribution—dorsum of the hand, extensor forearm, and upper trunk. Lesions follow exposure to UV-B and/or UV-A, and in higher latitudes, PMLE is most severe in the late spring and early summer. A process referred to as “hardening” occurs with continued UV exposure, and the eruption fades, but in temperate climates, it will recur in the spring. PMLE must be differentiated from cutaneous lupus, and this is accomplished by observation of the natural history, histologic examination, and direct immunofluorescence of the lesions. Cutaneous lymphoid hyperplasia (pseudolymphoma) is a benign polyclonal proliferation of lymphocytes in the skin that presents as infiltrated pink-red to red-purple papules and plaques; it must be distinguished from lymphoma cutis. Several types of red plaques are seen in patients with systemic lupus, including (1) erythematous urticarial plaques across the cheeks and nose in the classic butterfly rash; (2) erythematous discoid lesions with fine or “carpet-tack” scale, telangiectasias, central hypopigmentation, peripheral hyperpigmentation, follicular plugging, and atrophy located on the face, scalp, external ears, arms, and upper trunk; and (3) psoriasiform or annular lesions of subacute cutaneous lupus with hypopigmented centers located primarily on the extensor arms and upper trunk. 
Additional mucocutaneous findings include (1) a violaceous flush on the face and V of the neck; (2) photosensitivity; (3) urticarial vasculitis (see "Urticaria," above); (4) lupus panniculitis (see below); (5) diffuse alopecia; (6) alopecia secondary to discoid lesions; (7) periungual telangiectasias and erythema; (8) EM-like lesions that may become bullous; (9) oral ulcers; and (10) distal ulcerations secondary to Raynaud's phenomenon, vasculitis, or livedoid vasculopathy. Patients with only discoid lesions usually have the form of lupus that is limited to the skin. However, up to 10% of these patients eventually develop systemic lupus. Direct immunofluorescence of involved skin, in particular discoid lesions, shows deposits of IgG or IgM and C3 in a granular distribution along the dermal-epidermal junction. In lymphoma cutis, there is a proliferation of malignant lymphocytes in the skin, and the clinical appearance resembles that of cutaneous lymphoid hyperplasia—infiltrated pink-red to red-purple papules and plaques. Lymphoma cutis can occur anywhere on the surface of the skin, whereas the sites of predilection for lymphocytomas include the malar ridge, tip of the nose, and earlobes. Patients with non-Hodgkin's lymphomas have specific cutaneous lesions more often than those with Hodgkin's disease, and, occasionally, the skin nodules precede the development of extracutaneous non-Hodgkin's lymphoma or represent the only site of involvement (e.g., primary cutaneous B cell lymphoma). Arcuate lesions are sometimes seen in lymphoma and lymphocytoma cutis as well as in CTCL. Adult T cell leukemia/lymphoma that develops in association with HTLV-1 infection is characterized by cutaneous plaques, hypercalcemia, and circulating CD25+ lymphocytes. Leukemia cutis has the same appearance as lymphoma cutis, and specific lesions are seen more commonly in monocytic leukemias than in lymphocytic or granulocytic leukemias.
Cutaneous chloromas (granulocytic sarcomas) may precede the appearance of circulating blasts in acute myelogenous leukemia and, as such, represent a form of aleukemic leukemia cutis. Sweet syndrome is characterized by pink-red to red-brown edematous plaques that are frequently painful and occur primarily on the head, neck, and upper (and, less often, lower) extremities. The patients also have fever, neutrophilia, and a dense dermal infiltrate of neutrophils in the lesions. In ~10% of the patients, there is an associated malignancy, most commonly acute myelogenous leukemia. Sweet syndrome has also been reported with inflammatory bowel disease, systemic lupus erythematosus, and solid tumors (primarily of the genitourinary tract) as well as drugs (e.g., all-trans-retinoic acid, granulocyte colony-stimulating factor [G-CSF]). The differential diagnosis includes neutrophilic eccrine hidradenitis; bullous forms of pyoderma gangrenosum; and, occasionally, cellulitis. Extracutaneous sites of involvement include joints, muscles, eye, kidney (proteinuria, occasionally glomerulonephritis), and lung (neutrophilic infiltrates). The idiopathic form of Sweet syndrome is seen more often in women, following a respiratory tract infection. Common causes of erythematous subcutaneous nodules include inflamed epidermoid inclusion cysts, acne cysts, and furuncles. Panniculitis, an inflammation of the fat, also presents as subcutaneous nodules and is frequently a sign of systemic disease. There are several forms of panniculitis, including erythema nodosum, erythema induratum/nodular vasculitis, lupus panniculitis, lipodermatosclerosis, α1-antitrypsin deficiency, factitial, and fat necrosis secondary to pancreatic disease. Except for erythema nodosum, these lesions may break down and ulcerate or heal with a scar.
The shin is the most common location for the nodules of erythema nodosum, whereas the calf is the most common location for lesions of erythema induratum. In erythema nodosum, the nodules are initially red but then develop a blue color as they resolve. Patients with erythema nodosum but no underlying systemic illness can still have fever, malaise, leukocytosis, arthralgias, and/or arthritis. However, the possibility of an underlying illness should be excluded, and the most common associations are streptococcal infections, upper respiratory viral infections, sarcoidosis, and inflammatory bowel disease, in addition to drugs (oral contraceptives, sulfonamides, penicillins, bromides, iodides). Less common associations include bacterial gastroenteritis (Yersinia, Salmonella) and coccidioidomycosis followed by tuberculosis, histoplasmosis, brucellosis, and infections with Chlamydophila pneumoniae or Chlamydia trachomatis, Mycoplasma pneumoniae, or hepatitis B virus. Erythema induratum and nodular vasculitis have overlapping features clinically and histologically, and whether they represent two separate entities or the ends of a single disease spectrum is a point of debate; in general, the latter is usually idiopathic and the former is associated with the presence of Mycobacterium tuberculosis DNA by PCR within skin lesions. The lesions of lupus panniculitis are found primarily on the cheeks, upper arms, and buttocks (sites of abundant fat) and are seen in both the cutaneous and systemic forms of lupus. The overlying skin may be normal, erythematous, or have the changes of discoid lupus. The subcutaneous fat necrosis that is associated with pancreatic disease is presumably secondary to circulating lipases and is seen in patients with pancreatic carcinoma as well as in patients with acute and chronic pancreatitis. In this disorder, there may be an associated arthritis, fever, and inflammation of visceral fat. 
Histologic examination of deep incisional biopsy specimens will aid in the diagnosis of the particular type of panniculitis. Subcutaneous erythematous nodules are also seen in cutaneous polyarteritis nodosa and as a manifestation of systemic vasculitis when there is involvement of medium-sized vessels, e.g., systemic polyarteritis nodosa, allergic granulomatosis, or granulomatosis with polyangiitis (Wegener’s) (Chap. 385). Cutaneous polyarteritis nodosa presents with painful subcutaneous nodules and ulcers within a red-purple, netlike pattern of livedo reticularis. The latter is due to slowed blood flow through the superficial horizontal venous plexus. The majority of lesions are found on the lower extremities, and while arthralgias and myalgias may accompany cutaneous polyarteritis nodosa, there is no evidence of systemic involvement. In both the cutaneous and systemic forms of vasculitis, skin biopsy specimens of the associated nodules will show the changes characteristic of a necrotizing vasculitis and/or granulomatous inflammation. The cutaneous lesions in sarcoidosis (Chap. 390) are classically red to red-brown in color, and with diascopy (pressure with a glass slide), a yellow-brown residual color is observed that is secondary to the granulomatous infiltrate. The waxy papules and plaques may be found anywhere on the skin, but the face is the most common location. Usually there are no surface changes, but occasionally the lesions will have scale. Biopsy specimens of the papules show “naked” granulomas in the dermis, i.e., granulomas surrounded by a minimal number of lymphocytes. Other cutaneous findings in sarcoidosis include annular lesions with an atrophic or scaly center, papules within scars, hypopigmented papules and patches, alopecia, acquired ichthyosis, erythema nodosum, and lupus pernio (see below). 
The differential diagnosis of sarcoidosis includes foreign-body granulomas produced by chemicals such as beryllium and zirconium, late secondary syphilis, and lupus vulgaris. Lupus vulgaris is a form of cutaneous tuberculosis that is seen in previously infected and sensitized individuals. There is often underlying active tuberculosis elsewhere, usually in the lungs or lymph nodes. Lesions occur primarily in the head and neck region and are red-brown plaques with a yellow-brown color on diascopy. Secondary scarring and squamous cell carcinomas can develop within the plaques. Cultures or PCR analysis of the lesions should be performed, along with an interferon γ release assay of peripheral blood, because it is rare for the acid-fast stain to show bacilli within the dermal granulomas. A generalized distribution of red-brown macules and papules is seen in the form of mastocytosis known as urticaria pigmentosa (Chap. 376). Each lesion represents a collection of mast cells in the dermis, with hyperpigmentation of the overlying epidermis. Stimuli such as rubbing cause these mast cells to degranulate, and this leads to the formation of localized urticaria (Darier’s sign). Additional symptoms can result from mast cell degranulation and include headache, flushing, diarrhea, and pruritus. Mast cells also infiltrate various organs such as the liver, spleen, and gastrointestinal tract, and accumulations of mast cells in the bones may produce either osteosclerotic or osteolytic lesions on radiographs. In the majority of these patients, however, the internal involvement remains indolent. A subtype of chronic cutaneous small-vessel vasculitis, erythema elevatum diutinum (EED), also presents with papules that are red-brown in color. The papules coalesce into plaques on the extensor surfaces of knees, elbows, and the small joints of the hand. Flares of EED have been associated with streptococcal infections. 
Lesions that are blue in color are the result of vascular ectasias, hyperplasias, and tumors, or of melanin pigment within the dermis. Venous lakes (ectasias) are compressible dark-blue lesions that are found commonly in the head and neck region. Venous malformations are also compressible blue papulonodules and plaques that can occur anywhere on the body, including the oral mucosa. When there are multiple rather than single congenital lesions, the patient may have the blue rubber bleb syndrome or Maffucci's syndrome. Patients with the blue rubber bleb syndrome also have vascular anomalies of the gastrointestinal tract that may bleed, whereas patients with Maffucci's syndrome have associated enchondromas. Blue nevi (moles) are seen when there are collections of pigment-producing nevus cells in the dermis. These benign papular lesions are dome-shaped and occur most commonly on the dorsum of the hand or foot or in the head and neck region. Violaceous papules and plaques are seen in lupus pernio, lymphoma cutis, and cutaneous lupus. Lupus pernio is a particular type of sarcoidosis that involves the tip and alar rim of the nose as well as the earlobes, with lesions that are violaceous in color rather than red-brown. This form of sarcoidosis is associated with involvement of the upper respiratory tract. The plaques of lymphoma cutis and cutaneous lupus may be red or violaceous in color and were discussed above. Purple-colored papules and plaques are seen in vascular tumors, such as Kaposi's sarcoma (Chap. 226) and angiosarcoma, and when there is extravasation of red blood cells into the skin in association with inflammation, as in palpable purpura (see "Purpura," below). Patients with congenital or acquired AV fistulas and venous hypertension can develop purple papules on the lower extremities that can resemble Kaposi's sarcoma clinically and histologically; this condition is referred to as pseudo-Kaposi's sarcoma (acral angiodermatitis). 
Angiosarcoma is found most commonly on the scalp and face of elderly patients or within areas of chronic lymphedema and presents as purple papules and plaques. In the head and neck region, the tumor often extends beyond the clinically defined borders and may be accompanied by facial edema. Brown- and black-colored papules are reviewed in "Hyperpigmentation," above. Cutaneous metastases are discussed last because they can have a wide range of colors. Most commonly, they present as either firm, skin-colored subcutaneous nodules or firm, red to red-brown papulonodules. The lesions of lymphoma cutis range from pink-red to plum in color, whereas metastatic melanoma can be pink, blue, or black in color. Cutaneous metastases develop from hematogenous or lymphatic spread and are most often due to the following primary carcinomas: in men, melanoma, oropharynx, lung, and colon; and in women, breast, melanoma, and ovary. These metastatic lesions may be the initial presentation of the carcinoma, especially when the primary site is the lung. Purpura (Table 72-16) are seen when there is an extravasation of red blood cells into the dermis and, as a result, the lesions do not blanch with pressure. This is in contrast to erythematous or violet-colored lesions that are due to localized vasodilatation, which do blanch with pressure. Purpura (≥3 mm) and petechiae (≤2 mm) are divided into two major groups: palpable and nonpalpable. The most frequent causes of nonpalpable petechiae and purpura are primary cutaneous disorders such as trauma, solar (actinic) purpura, and capillaritis. Less common causes are steroid purpura and livedoid vasculopathy (see "Ulcers," below). Solar purpura are seen primarily on the extensor forearms, whereas steroid purpura secondary to potent topical glucocorticoids or endogenous or exogenous Cushing's syndrome can be more widespread. In both cases, there is alteration of the supporting connective tissue that surrounds the dermal blood vessels. 
In contrast, the petechiae that result from capillaritis are found primarily on the lower extremities. In capillaritis, there is an extravasation of erythrocytes as a result of perivascular lymphocytic inflammation. The petechiae are bright red, 1–2 mm in size, and scattered within yellow-brown patches. The yellow-brown color is caused by hemosiderin deposits within the dermis. Systemic causes of nonpalpable purpura fall into several categories, and those secondary to clotting disturbances and vascular fragility will be discussed first. The former group includes thrombocytopenia (Chap. 140), abnormal platelet function as is seen in uremia, and clotting factor defects. The initial site of presentation for thrombocytopenia-induced petechiae is the distal lower extremity. Capillary fragility leads to nonpalpable purpura in patients with systemic amyloidosis (see "Papulonodular Skin Lesions," above), disorders of collagen production such as Ehlers-Danlos syndrome, and scurvy. In scurvy, there are flattened corkscrew hairs with surrounding hemorrhage on the lower extremities, in addition to gingivitis. Vitamin C is a cofactor for lysyl hydroxylase, an enzyme involved in the posttranslational modification of procollagen that is necessary for cross-link formation. 

CHAPTER 72 Skin Manifestations of Internal Disease
PART 2 Cardinal Manifestations and Presentation of Diseases

TABLE 72-16 CAUSES OF PURPURA
I. Primary cutaneous disorders
 A. Nonpalpable
  Solar (actinic, senile) purpura
  Livedoid vasculopathy in the setting of venous hypertension^a
II. Systemic diseases
 A. Nonpalpable
  1. Clotting disturbances
  2. Vascular fragility
  3. Thrombi
  4. Emboli
  5. Possible immune complex
 B. Palpable
  1. Vasculitis
   a. Cutaneous small-vessel vasculitis, including in the setting of systemic vasculitides
  2. Emboli^b
^aAlso associated with underlying disorders that lead to hypercoagulability, e.g., factor V Leiden, protein C dysfunction/deficiency. 
^bBacterial (including rickettsial), fungal, or parasitic. Abbreviation: ITP, idiopathic thrombocytopenic purpura.

In contrast to the previous group of disorders, the purpura (noninflammatory, with a retiform outline) seen in the following group of diseases are associated with thrombus formation within vessels; it is important to note that these thrombi are demonstrable in skin biopsy specimens. This group of disorders includes disseminated intravascular coagulation (DIC), monoclonal cryoglobulinemia, thrombocytosis, thrombotic thrombocytopenic purpura, antiphospholipid antibody syndrome, and reactions to warfarin and heparin (heparin-induced thrombocytopenia and thrombosis). DIC is triggered by several types of infection (gram-negative, gram-positive, viral, and rickettsial) as well as by tissue injury and neoplasms. Widespread purpura and hemorrhagic infarcts of the distal extremities are seen. Similar lesions are found in purpura fulminans, which is a form of DIC associated with fever and hypotension that occurs more commonly in children following an infectious illness such as varicella, scarlet fever, or an upper respiratory tract infection. In both disorders, hemorrhagic bullae can develop in involved skin. Monoclonal cryoglobulinemia is associated with plasma cell dyscrasias, chronic lymphocytic leukemia, and lymphoma. Purpura, primarily of the lower extremities, and hemorrhagic infarcts of the fingers, toes, and ears are seen in these patients. Exacerbations of disease activity can follow cold exposure or an increase in serum viscosity. Biopsy specimens show precipitates of the cryoglobulin within dermal vessels. Similar deposits have been found in the lung, brain, and renal glomeruli. Patients with thrombotic thrombocytopenic purpura can also have hemorrhagic infarcts as a result of intravascular thromboses. Additional signs include microangiopathic hemolytic anemia and fluctuating neurologic abnormalities, especially headaches and confusion. 
Administration of warfarin can result in painful areas of erythema that become purpuric and then necrotic with an adherent black eschar; the condition is referred to as warfarin-induced necrosis. This reaction is seen more often in women and in areas with abundant subcutaneous fat—breasts, abdomen, buttocks, thighs, and calves. The erythema and purpura develop between the third and tenth day of therapy, most likely as a result of a transient imbalance in the levels of anticoagulant and procoagulant vitamin K–dependent factors. Continued therapy does not exacerbate preexisting lesions, and patients with an inherited or acquired deficiency of protein C are at increased risk for this particular reaction as well as for purpura fulminans and calciphylaxis. Purpura secondary to cholesterol emboli are usually seen on the lower extremities of patients with atherosclerotic vascular disease. They often follow anticoagulant therapy or an invasive vascular procedure such as an arteriogram but also occur spontaneously from disintegration of atheromatous plaques. Associated findings include livedo reticularis, gangrene, cyanosis, and ischemic ulcerations. Multiple step sections of the biopsy specimen may be necessary to demonstrate the cholesterol clefts within the vessels. Petechiae are also an important sign of fat embolism and occur primarily on the upper body 2–3 days after a major injury. By using special fixatives, the emboli can be demonstrated in biopsy specimens of the petechiae. Emboli of tumor or thrombus are seen in patients with atrial myxomas and marantic endocarditis. In the Gardner-Diamond syndrome (autoerythrocyte sensitivity), female patients develop large ecchymoses within areas of painful, warm erythema. 
Intradermal injections of autologous erythrocytes or phosphatidyl serine derived from the red cell membrane can reproduce the lesions in some patients; however, there are instances where a reaction is seen at an injection site of the forearm but not in the midback region. The latter has led some observers to view Gardner-Diamond syndrome as a cutaneous manifestation of severe emotional stress. More recently, the possibility of platelet dysfunction (as assessed via aggregation studies) has been raised. Waldenström’s hypergammaglobulinemic purpura is a chronic disorder characterized by petechiae on the lower extremities. There are circulating complexes of IgG–anti-IgG molecules, and exacerbations are associated with prolonged standing or walking. Palpable purpura are further subdivided into vasculitic and embolic. In the group of vasculitic disorders, cutaneous small-vessel vasculitis, also known as leukocytoclastic vasculitis (LCV), is the one most commonly associated with palpable purpura (Chap. 385). Underlying etiologies include drugs (e.g., antibiotics), infections (e.g., hepatitis C virus), and autoimmune connective tissue diseases (e.g., rheumatoid arthritis, Sjögren’s syndrome, lupus). Henoch-Schönlein purpura (HSP) is a subtype of acute LCV that is seen more commonly in children and adolescents following an upper respiratory infection. The majority of lesions are found on the lower extremities and buttocks. Systemic manifestations include fever, arthralgias (primarily of the knees and ankles), abdominal pain, gastrointestinal bleeding, and nephritis. Direct immunofluorescence examination shows deposits of IgA within dermal blood vessel walls. Renal disease is of particular concern in adults with HSP. In polyarteritis nodosa, specific cutaneous lesions result from a vasculitis of arterial vessels (arteritis), or there may be an associated LCV. 
Arteritis leads to an infarct of the skin, and this explains the irregular outline of the purpura (see below). Several types of infectious emboli can give rise to palpable purpura. These embolic lesions are usually irregular in outline as opposed to the lesions of LCV, which are circular in outline. The irregular outline is indicative of a cutaneous infarct, and the size corresponds to the area of skin that received its blood supply from that particular arteriole or artery. The palpable purpura in LCV are circular because the erythrocytes simply diffuse out evenly from the postcapillary venules as a result of inflammation. Infectious emboli are most commonly due to gram-negative cocci (meningococcus, gonococcus), gram-negative rods (Enterobacteriaceae), and gram-positive cocci (Staphylococcus). Additional causes include Rickettsia and, in immunocompromised patients, Aspergillus and other opportunistic fungi. The embolic lesions in acute meningococcemia are found primarily on the trunk, lower extremities, and sites of pressure, and a gunmetal-gray color often develops within them. Their size varies from a few millimeters to several centimeters, and the organisms can be cultured from the lesions. Associated findings include a preceding upper respiratory tract infection; fever; meningitis; DIC; and, in some patients, a deficiency of the terminal components of complement. In disseminated gonococcal infection (arthritis-dermatitis syndrome), a small number of inflammatory papules and vesicopustules, often with central purpura or hemorrhagic necrosis, are found on the distal extremities. Additional symptoms include arthralgias, tenosynovitis, and fever. To establish the diagnosis, a Gram stain of these lesions should be performed. Rocky Mountain spotted fever is a tick-borne disease that is caused by Rickettsia rickettsii. A several-day history of fever, chills, severe headache, and photophobia precedes the onset of the cutaneous eruption. 
The initial lesions are erythematous macules and papules on the wrists, ankles, palms, and soles. With time, the lesions spread centripetally and become purpuric. Lesions of ecthyma gangrenosum begin as edematous, erythematous papules or plaques and then develop central purpura and necrosis. Bullae formation also occurs in these lesions, and they are frequently found in the girdle region. The organism that is classically associated with ecthyma gangrenosum is Pseudomonas aeruginosa, but other gram-negative rods such as Klebsiella, Escherichia coli, and Serratia can produce similar lesions. In immunocompromised hosts, the list of potential pathogens is expanded to include Candida and other opportunistic fungi (e.g., Aspergillus, Fusarium). The approach to the patient with a cutaneous ulcer is outlined in Table 72-17. Peripheral vascular diseases of the extremities are reviewed in Chap. 302, as is Raynaud’s phenomenon. Livedoid vasculopathy (livedoid vasculitis; atrophie blanche) represents a combination of a vasculopathy plus intravascular thrombosis. Purpuric lesions and livedo reticularis are found in association with painful ulcerations of the lower extremities. These ulcers are often slow to heal, but when they do, irregularly shaped white scars form. The majority of cases are secondary to venous hypertension, but possible underlying illnesses include cryofibrinogenemia and disorders of hypercoagulability, e.g., the antiphospholipid syndrome (Chaps. 142 and 379). In pyoderma gangrenosum, the border of untreated active ulcers has a characteristic appearance consisting of an undermined necrotic violaceous edge and a peripheral erythematous halo. The ulcers often begin as pustules that then expand rather rapidly to a size as large as 20 cm. Although these lesions are most commonly found on the lower extremities, they can arise anywhere on the surface of the body, including sites of trauma (pathergy). 
An estimated 30–50% of cases are idiopathic, and the most common associated disorders are ulcerative colitis and Crohn's disease. Less commonly, pyoderma gangrenosum is associated with seropositive rheumatoid arthritis, acute and chronic myelogenous leukemia, hairy cell leukemia, myelofibrosis, or a monoclonal gammopathy, usually IgA.

TABLE 72-17 CAUSES OF MUCOCUTANEOUS ULCERS
I. Primary cutaneous disorders
 A. Peripheral vascular disease (Chap. 302)
 B. Livedoid vasculopathy in the setting of venous hypertension^b
 C. Squamous cell carcinoma (e.g., within scars), basal cell carcinomas
 D. Infections, e.g., ecthyma caused by Streptococcus (Chap. 173)
 E. Physical, e.g., trauma, pressure
 F. Drugs, e.g., hydroxyurea
II. Systemic diseases
 A. Legs
  Hemoglobinopathies (Chap. 127)
  Cryoglobulinemia,^c cryofibrinogenemia
  Antiphospholipid syndrome (Chap. 141)
  Neuropathic^e (Chap. 417)
  Kaposi's sarcoma, acral angiodermatitis
 B. Hands and feet
  Raynaud's phenomenon (Chap. 302)
 C. Generalized
  Pyoderma gangrenosum, but most commonly legs
  Calciphylaxis (Chap. 424)
  Infections, e.g., dimorphic fungi, leishmaniasis
 D. Face, especially perioral, and anogenital
  Chronic herpes simplex^f
III. Mucosal
 A. Behçet's syndrome (Chap. 387)
 B. Erythema multiforme major, Stevens-Johnson syndrome, TEN
 C. Primary blistering disorders (Chap. 73)
 D. Lupus erythematosus, lichen planus
 G. Reactive arthritis (formerly known as Reiter's syndrome)
^aUnderlying atherosclerosis. ^bAlso associated with underlying disorders that lead to hypercoagulability, e.g., factor V Leiden, protein C dysfunction/deficiency, antiphospholipid antibodies. ^cReviewed in section on Purpura. ^dReviewed in section on Papulonodular Skin Lesions. ^eFavors plantar surface of the foot. ^fSign of immunosuppression. Abbreviation: TEN, toxic epidermal necrolysis.

Because the histology of pyoderma gangrenosum may be nonspecific 
(dermal infiltrate of neutrophils when in an untreated state), the diagnosis requires clinicopathologic correlation, in particular the exclusion of similar-appearing ulcers such as necrotizing vasculitis, Meleney's ulcer (synergistic infection at a site of trauma or surgery), dimorphic fungi, cutaneous amebiasis, spider bites, and factitial ulceration. In the myeloproliferative disorders, the ulcers may be more superficial, with a pustulobullous border, and these lesions provide a connection between classic pyoderma gangrenosum and acute febrile neutrophilic dermatosis (Sweet syndrome). 

The major considerations in a patient with a fever and a rash are inflammatory diseases versus infectious diseases. In the hospital setting, the most common scenario is a patient who has a drug rash plus a fever secondary to an underlying infection. However, it should be emphasized that a drug reaction can lead to both a cutaneous eruption and a fever ("drug fever"), especially in the setting of DRESS, AGEP, or a serum sickness–like reaction. Additional inflammatory diseases that are often associated with a fever include pustular psoriasis, erythroderma, and Sweet syndrome. Lyme disease, secondary syphilis, and viral and bacterial exanthems (see "Exanthems," above) are examples of infectious diseases that produce a rash and a fever. Lastly, it is important to determine whether or not the cutaneous lesions represent septic emboli (see "Purpura," above). Such lesions usually have evidence of ischemia in the form of purpura, necrosis, or impending necrosis (gunmetal-gray color). In the patient with thrombocytopenia, however, purpura can be seen in inflammatory reactions such as morbilliform drug eruptions and infectious lesions. 

Chapter 73 Immunologically Mediated Skin Diseases
Kim B. Yancey, Thomas J. Lawley

A number of immunologically mediated skin diseases and immunologically mediated systemic disorders with cutaneous manifestations are now recognized as distinct entities with consistent clinical, histologic, and immunopathologic findings. Clinically, these disorders are characterized by morbidity (pain, pruritus, disfigurement) and, in some instances, result in death (largely due to loss of epidermal barrier function and/or secondary infection). The major features of the more common immunologically mediated skin diseases are summarized in this chapter (Table 73-1), as are the autoimmune systemic disorders with cutaneous manifestations. 

TABLE 73-1 footnotes: ^aAutoantigens bound by these patients' autoantibodies are defined as follows: Dsg1, desmoglein 1; Dsg3, desmoglein 3; BPAG1, bullous pemphigoid antigen 1; BPAG2, bullous pemphigoid antigen 2. Abbreviation: BMZ, basement membrane zone.

Pemphigus refers to a group of autoantibody-mediated intraepidermal blistering diseases characterized by loss of cohesion between epidermal cells (a process termed acantholysis). Manual pressure to the skin of these patients may elicit the separation of the epidermis (Nikolsky's sign). This finding, while characteristic of pemphigus, is not specific to this group of disorders and is also seen in toxic epidermal necrolysis, Stevens-Johnson syndrome, and a few other skin diseases. Pemphigus vulgaris (PV) is a mucocutaneous blistering disease that predominantly occurs in patients >40 years of age. PV typically begins on mucosal surfaces and often progresses to involve the skin. This disease is characterized by fragile, flaccid blisters that rupture to produce extensive denudation of mucous membranes and skin (Fig. 73-1). The mouth, scalp, face, neck, axilla, groin, and trunk are typically involved. PV may be associated with severe skin pain; some patients experience pruritus as well. 
Lesions usually heal without scarring except at sites complicated by secondary infection or mechanically induced dermal wounds. Postinflammatory hyperpigmentation is usually present for some time at sites of healed lesions. Biopsies of early lesions demonstrate intraepidermal vesicle formation secondary to loss of cohesion between epidermal cells (i.e., acantholytic blisters). Blister cavities contain acantholytic epidermal cells, which appear as round homogeneous cells containing hyperchromatic nuclei. Basal keratinocytes remain attached to the epidermal basement membrane; hence, blister formation takes place within the suprabasal portion of the epidermis. Lesional skin may contain focal collections of intraepidermal eosinophils within blister cavities; dermal alterations are slight, often limited to an eosinophil-predominant leukocytic infiltrate. Direct immunofluorescence microscopy of lesional or intact patient skin shows deposits of IgG on the surface of keratinocytes; deposits of complement components are typically found in lesional but not in uninvolved skin. Deposits of IgG on keratinocytes are derived from circulating autoantibodies to cell-surface autoantigens. Such circulating autoantibodies can be demonstrated in 80–90% of PV patients by indirect immunofluorescence microscopy; monkey esophagus is the optimal substrate for these studies. Patients with PV have IgG autoantibodies to desmogleins (Dsgs), transmembrane desmosomal glycoproteins that belong to the cadherin family of calcium-dependent adhesion molecules. Such autoantibodies can be precisely quantitated by enzyme-linked immunosorbent assay (ELISA). 

FIGURE 73-1 Pemphigus vulgaris. A. Flaccid bullae are easily ruptured, resulting in multiple erosions and crusted plaques. B. Involvement of the oral mucosa, which is almost invariable, may present with erosions on the gingiva, buccal mucosa, palate, posterior pharynx, or tongue. (B, Courtesy of Robert Swerlick, MD; with permission.) 
Patients with early PV (i.e., mucosal disease) have IgG autoantibodies to Dsg3; patients with advanced PV (i.e., mucocutaneous disease) have IgG autoantibodies to both Dsg3 and Dsg1. Experimental studies have shown that autoantibodies from patients with PV are pathogenic (i.e., responsible for blister formation) and that their titer correlates with disease activity. Recent studies have shown that the anti-Dsg autoantibody profile in these patients' sera as well as the tissue distribution of Dsg3 and Dsg1 determine the site of blister formation in patients with PV. Coexpression of Dsg3 and Dsg1 by epidermal cells protects against pathogenic IgG antibodies to either of these cadherins but not against pathogenic autoantibodies to both. PV can be life-threatening. Prior to the availability of glucocorticoids, mortality rates ranged from 60% to 90%; the current figure is ~5%. Common causes of morbidity and death are infection and complications of treatment with glucocorticoids. Bad prognostic factors include advanced age, widespread involvement, and the requirement for high doses of glucocorticoids (with or without other immunosuppressive agents) for control of disease. The course of PV in individual patients is variable and difficult to predict. Some patients experience remission, while others may require long-term treatment or succumb to complications of their disease or its treatment. The mainstay of treatment is systemic glucocorticoids. Patients with moderate to severe PV are usually started on prednisone at 1 mg/kg per day. If new lesions continue to appear after 1–2 weeks of treatment, the dose may need to be increased and/or prednisone may need to be combined with other immunosuppressive agents such as azathioprine (2–2.5 mg/kg per day), mycophenolate mofetil (20–35 mg/kg per day), or cyclophosphamide (1–2 mg/kg per day). 
Patients with severe, treatment-resistant disease may derive benefit from plasmapheresis (six high-volume exchanges [i.e., 2–3 L per exchange] over ~2 weeks), IV immunoglobulin (IVIg) (2 g/kg over 3–5 days every 6–8 weeks), or rituximab (375 mg/m2 per week × 4, or 1000 mg on days 1 and 15). It is important to bring severe or progressive disease under control quickly in order to lessen the severity and/or duration of this disorder. Accordingly, some have suggested that rituximab and daily glucocorticoids should be used early in PV patients to avert the development of treatment-resistant disease. Pemphigus foliaceus (PF) is distinguished from PV by several features. In PF, acantholytic blisters are located high within the epidermis, usually just beneath the stratum corneum. Hence, PF is a more superficial blistering disease than PV. The distribution of lesions in the two disorders is much the same, except that in PF mucous membranes are almost always spared. Patients with PF rarely have intact blisters but rather exhibit shallow erosions associated with erythema, scale, and crust formation. Mild cases of PF resemble severe seborrheic dermatitis; severe PF may cause extensive exfoliation. Sun exposure (ultraviolet irradiation) may be an aggravating factor. PF has immunopathologic features in common with PV. Specifically, direct immunofluorescence microscopy of perilesional skin demonstrates IgG on the surface of keratinocytes. Similarly, patients with PF have circulating IgG autoantibodies directed against the surface of keratinocytes. In PF, autoantibodies are directed against Dsg1, a 160-kDa desmosomal cadherin. These autoantibodies can be quantitated by ELISA. As noted for PV, the autoantibody profile in patients with PF (i.e., anti-Dsg1 IgG) and the tissue distribution of this autoantigen (i.e., expression in oral mucosa that is compensated by coexpression of Dsg3) are thought to account for the distribution of lesions in this disease. 
Endemic forms of PF are found in south-central rural Brazil, where the disease is known as fogo selvagem (FS), as well as in selected sites in Latin America and Tunisia. Endemic PF, like other forms of this disease, is mediated by IgG autoantibodies to Dsg1. Clusters of FS overlap with those of leishmaniasis, a disease transmitted by bites of the sand fly Lutzomyia longipalpis. Recent studies have shown that sand-fly salivary antigens (specifically, the LJM11 salivary protein) are recognized by IgG autoantibodies from FS patients (as well as by monoclonal antibodies to Dsg1 derived from these patients). Moreover, mice immunized with LJM11 produce antibodies to Dsg1. Thus, these findings suggest that insect bites may deliver salivary antigens that initiate a cross-reactive humoral immune response, which may lead to FS in genetically susceptible individuals. Although pemphigus has been associated with several autoimmune diseases, its association with thymoma and/or myasthenia gravis is particularly notable. To date, >30 cases of thymoma and/or myasthenia gravis have been reported in association with pemphigus, usually with PF. Patients may also develop pemphigus as a consequence of drug exposure; drug-induced pemphigus usually resembles PF rather than PV. Drugs containing a thiol group in their chemical structure (e.g., penicillamine, captopril, enalapril) are most commonly associated with drug-induced pemphigus. Nonthiol drugs linked to pemphigus include penicillins, cephalosporins, and piroxicam. It has been suggested that thiol-containing and non-thiol-containing drugs induce pemphigus via biochemical and immunologic mechanisms, respectively; hence the better prognosis upon drug withdrawal in cases of pemphigus induced by thiol-containing medications. Some cases of drug-induced pemphigus are durable and require treatment with systemic glucocorticoids and/or immunosuppressive agents. PF is generally a less severe disease than PV and carries a better prognosis. 
Localized disease can sometimes be treated with topical or intralesional glucocorticoids; more active cases can usually be controlled with systemic glucocorticoids. Patients with severe, treatment-resistant disease may require more aggressive interventions, as described above for patients with PV. Paraneoplastic pemphigus (PNP) is an autoimmune acantholytic mucocutaneous disease associated with an occult or confirmed neoplasm. Patients with PNP typically have painful mucosal erosive lesions in association with papulosquamous and/or lichenoid eruptions that often progress to blisters. Involvement of the palms and soles is common in these patients and raises the possibility that prior reports of neoplasia-associated erythema multiforme actually may have represented unrecognized cases of PNP. Biopsies of lesional skin from these patients show varying combinations of acantholysis, keratinocyte necrosis, and vacuolar-interface dermatitis. Direct immunofluorescence microscopy of a patient's skin shows deposits of IgG and complement on the surface of keratinocytes and (variably) similar immunoreactants in the epidermal basement membrane zone. Patients with PNP have IgG autoantibodies to cytoplasmic proteins that are members of the plakin family (e.g., desmoplakins I and II, bullous pemphigoid antigen [BPAG]1, envoplakin, periplakin, and plectin) and to cell-surface proteins that are members of the cadherin family (e.g., Dsg1 and Dsg3). Passive transfer studies have shown that autoantibodies from patients with PNP are pathogenic in animal models. The predominant neoplasms associated with PNP are non-Hodgkin's lymphoma, chronic lymphocytic leukemia, thymoma, spindle cell tumors, Waldenström's macroglobulinemia, and Castleman's disease; the last-mentioned neoplasm is particularly common among children with PNP. Rare cases of seronegative PNP have been reported in patients with B cell malignancies previously treated with rituximab. 
In addition to severe skin lesions, many patients with PNP develop life-threatening bronchiolitis obliterans. PNP is generally resistant to conventional therapies (i.e., those used to treat PV); rarely, a patient’s disease may ameliorate or even remit following ablation or removal of underlying neoplasms. Bullous pemphigoid (BP) is a polymorphic autoimmune subepidermal blistering disease usually seen in the elderly. Initial lesions may consist of urticarial plaques; most patients eventually display tense blisters on either normal-appearing or erythematous skin (Fig. 73-2). The lesions are usually distributed over the lower abdomen, groin, and flexor surface of the extremities; oral mucosal lesions are found in some patients. Pruritus may be nonexistent or severe. As lesions evolve, tense blisters tend to rupture and be replaced by erosions with or without surmounting crust. Nontraumatized blisters heal without scarring. The major histocompatibility complex class II allele HLA-DQβ1*0301 is prevalent in patients with BP. Despite isolated reports, several studies have shown that patients with BP do not have a higher incidence of malignancy than appropriately age- and gender-matched controls. Biopsies of early lesional skin demonstrate subepidermal blisters and histologic features that roughly correlate with the clinical character of the particular lesion under study. Lesions on normal-appearing skin generally contain a sparse perivascular leukocytic infiltrate with some eosinophils; conversely, biopsies of inflammatory lesions typically show an eosinophil-rich infiltrate at sites of vesicle formation and in perivascular areas. In addition to eosinophils, cell-rich lesions also contain mononuclear cells and neutrophils. It is not possible to distinguish BP from other subepidermal blistering diseases by routine histologic studies alone. 
PART 2 Cardinal Manifestations and Presentation of Diseases
Figure 73-2 Bullous pemphigoid with tense vesicles and bullae on erythematous, urticarial bases. (Courtesy of the Yale Resident’s Slide Collection; with permission.)
Direct immunofluorescence microscopy of normal-appearing perilesional skin from patients with BP shows linear deposits of IgG and/or C3 in the epidermal basement membrane. The sera of ~70% of these patients contain circulating IgG autoantibodies that bind the epidermal basement membrane of normal human skin in indirect immunofluorescence microscopy. IgG from an even higher percentage of patients reacts with the epidermal side of 1 M NaCl split skin (an alternative immunofluorescence microscopy test substrate used to distinguish circulating IgG autoantibodies to the basement membrane in patients with BP from those in patients with similar, yet different, subepidermal blistering diseases; see below). In BP, circulating autoantibodies recognize 230- and 180-kDa hemidesmosome-associated proteins in basal keratinocytes (i.e., BPAG1 and BPAG2, respectively). Autoantibodies to BPAG2 are thought to deposit in situ, activate complement, produce dermal mast-cell degranulation, and generate granulocyte-rich infiltrates that cause tissue damage and blister formation. BP may persist for months to years, with exacerbations or remissions. Extensive involvement may result in widespread erosions and compromise cutaneous integrity; elderly and/or debilitated patients may die. The mainstay of treatment is systemic glucocorticoids. Local or minimal disease can sometimes be controlled with topical glucocorticoids alone; more extensive lesions generally respond to systemic glucocorticoids either alone or in combination with immunosuppressive agents. Patients usually respond to prednisone (0.75–1 mg/kg per day). 
In some instances, azathioprine (2–2.5 mg/kg per day), mycophenolate mofetil (20–35 mg/kg per day), or cyclophosphamide (1–2 mg/kg per day) is a necessary adjunct. Pemphigoid gestationis (PG), also known as herpes gestationis, is a rare, nonviral, subepidermal blistering disease of pregnancy and the puerperium. PG may begin during any trimester of pregnancy or present shortly after delivery. Lesions are usually distributed over the abdomen, trunk, and extremities; mucous membrane lesions are rare. Skin lesions in these patients may be quite polymorphic and consist of erythematous urticarial papules and plaques, vesiculopapules, and/or frank bullae. Lesions are almost always extremely pruritic. Severe exacerbations of PG frequently follow delivery, typically within 24–48 h. PG tends to recur in subsequent pregnancies, often beginning earlier during such gestations. Brief flare-ups of disease may occur with resumption of menses and may develop in patients later exposed to oral contraceptives. Occasionally, infants of affected mothers have transient skin lesions. Biopsies of early lesional skin show teardrop-shaped subepidermal vesicles forming in dermal papillae in association with an eosinophil-rich leukocytic infiltrate. Differentiation of PG from other subepidermal bullous diseases by light microscopy is difficult. However, direct immunofluorescence microscopy of perilesional skin from PG patients reveals the immunopathologic hallmark of this disorder: linear deposits of C3 in the epidermal basement membrane. These deposits develop as a consequence of complement activation produced by low-titer IgG anti–basement membrane autoantibodies directed against BPAG2, the same hemidesmosome-associated protein that is targeted by autoantibodies in patients with BP—a subepidermal bullous disease that resembles PG clinically, histologically, and immunopathologically. 
The goals of therapy in patients with PG are to prevent the development of new lesions, relieve intense pruritus, and care for erosions at sites of blister formation. Many patients require treatment with moderate doses of daily glucocorticoids (i.e., 20–40 mg of prednisone) at some point in their course. Mild cases (or brief flare-ups) may be controlled by vigorous use of potent topical glucocorticoids. Infants born of mothers with PG appear to be at increased risk of being born slightly premature or “small for dates.” Current evidence suggests that there is no difference in the incidence of uncomplicated live births between PG patients treated with systemic glucocorticoids and those managed more conservatively. If systemic glucocorticoids are administered, newborns are at risk for development of reversible adrenal insufficiency. Dermatitis herpetiformis (DH) is an intensely pruritic, papulovesicular skin disease characterized by lesions symmetrically distributed over extensor surfaces (i.e., elbows, knees, buttocks, back, scalp, and posterior neck) (see Fig. 70-8). Primary lesions in this disorder consist of papules, papulovesicles, or urticarial plaques. Because pruritus is prominent, patients may present with excoriations and crusted papules but no observable primary lesions. Patients sometimes report that their pruritus has a distinctive burning or stinging component; the onset of such local symptoms reliably heralds the development of distinct clinical lesions 12–24 h later. Almost all DH patients have associated, usually subclinical, gluten-sensitive enteropathy (Chap. 349), and >90% express the HLA-B8/DRw3 and HLA-DQw2 haplotypes. DH may present at any age, including in childhood; onset in the second to fourth decades is most common. The disease is typically chronic. Biopsy of early lesional skin reveals neutrophil-rich infiltrates within dermal papillae. 
Neutrophils, fibrin, edema, and microvesicle formation at these sites are characteristic of early disease. Older lesions may demonstrate nonspecific features of a subepidermal bulla or an excoriated papule. Because the clinical and histologic features of this disease can be variable and resemble those of other subepidermal blistering disorders, the diagnosis is confirmed by direct immunofluorescence microscopy of normal-appearing perilesional skin. Such studies demonstrate granular deposits of IgA (with or without complement components) in the papillary dermis and along the epidermal basement membrane zone. IgA deposits in the skin are unaffected by control of disease with medication; however, these immunoreactants diminish in intensity or disappear in patients maintained for long periods on a strict gluten-free diet (see below). Patients with DH have granular deposits of IgA in their epidermal basement membrane zone and should be distinguished from individuals with linear IgA deposits at this site (see below). Although most DH patients do not report overt gastrointestinal symptoms or have laboratory evidence of malabsorption, biopsies of the small bowel usually reveal blunting of intestinal villi and a lymphocytic infiltrate in the lamina propria. As is true for patients with celiac disease, this gastrointestinal abnormality can be reversed by a gluten-free diet. Moreover, if maintained, this diet alone may control the skin disease and eventuate in clearance of IgA deposits from these patients’ epidermal basement membrane zones. Subsequent gluten exposure in such patients alters the morphology of their small bowel, elicits a flare-up of their skin disease, and is associated with the reappearance of IgA in their epidermal basement membrane zones. As in patients with celiac disease, dietary gluten sensitivity in patients with DH is associated with IgA endomysial autoantibodies that target tissue transglutaminase. 
Studies indicate that patients with DH also have high-avidity IgA autoantibodies to epidermal transglutaminase 3 and that the latter is co-localized with granular deposits of IgA in the papillary dermis of DH patients. Patients with DH also have an increased incidence of thyroid abnormalities, achlorhydria, atrophic gastritis, and autoantibodies to gastric parietal cells. These associations likely relate to the high frequency of the HLA-B8/DRw3 haplotype in these patients, because this marker is commonly linked to autoimmune disorders. The mainstay of treatment of DH is dapsone, a sulfone. Patients respond rapidly (24–48 h) to dapsone (50–200 mg/d), but require careful pretreatment evaluation and close follow-up to ensure that complications are avoided or controlled. All patients taking dapsone at >100 mg/d will have some hemolysis and methemoglobinemia, which are expected pharmacologic side effects of this agent. Gluten restriction can control DH and lessen dapsone requirements; this diet must rigidly exclude gluten to be of maximal benefit. Many months of dietary restriction may be necessary before a beneficial result is achieved. Good dietary counseling by a trained dietitian is essential. Linear IgA disease, once considered a variant form of DH, is actually a separate and distinct entity. Clinically, patients with linear IgA disease may resemble individuals with DH, BP, or other subepidermal blistering diseases. Lesions typically consist of papulovesicles, bullae, and/or urticarial plaques that develop predominantly on central or flexural sites. Oral mucosal involvement occurs in some patients. Severe pruritus resembles that seen in patients with DH. Patients with linear IgA disease do not have an increased frequency of the HLA-B8/DRw3 haplotype or an associated enteropathy and therefore are not candidates for treatment with a gluten-free diet. Histologic alterations in early lesions may be virtually indistinguishable from those in DH. 
However, direct immunofluorescence microscopy of normal-appearing perilesional skin reveals a linear band of IgA (and often C3) in the epidermal basement membrane zone. Most patients with linear IgA disease have circulating IgA basement membrane autoantibodies directed against neoepitopes in the proteolytically processed extracellular domain of BPAG2. These patients generally respond to treatment with dapsone (50–200 mg/d). Epidermolysis bullosa acquisita (EBA) is a rare, noninherited, polymorphic, chronic, subepidermal blistering disease. (The inherited form is discussed in Chap. 427.) Patients with classic or noninflammatory EBA have blisters on noninflamed skin, atrophic scars, milia, nail dystrophy, and oral lesions. Because lesions generally occur at sites exposed to minor trauma, classic EBA is considered a mechanobullous disease. Other patients with EBA have widespread inflammatory scarring and bullous lesions that resemble severe BP. Inflammatory EBA may evolve into the classic, noninflammatory form of this disease. Rarely, patients present with lesions that predominate on mucous membranes. The HLA-DR2 haplotype is found with increased frequency in EBA patients. Studies suggest that EBA is sometimes associated with inflammatory bowel disease (especially Crohn’s disease). The histology of lesional skin varies with the character of the lesion being studied. Noninflammatory bullae are subepidermal, feature a sparse leukocytic infiltrate, and resemble the lesions in patients with porphyria cutanea tarda. Inflammatory lesions consist of neutrophil-rich subepidermal blisters. EBA patients have continuous deposits of IgG (and frequently C3) in a linear pattern within the epidermal basement membrane zone. Ultrastructurally, these immunoreactants are found in the sublamina densa region in association with anchoring fibrils. 
Approximately 50% of EBA patients have demonstrable circulating IgG basement membrane autoantibodies directed against type VII collagen—the collagen species that makes up anchoring fibrils. Such IgG autoantibodies bind the dermal side of 1 M NaCl split skin (in contrast to IgG autoantibodies in patients with BP). Studies have shown that passive transfer of experimental or clinical IgG against type VII collagen can produce lesions in mice that clinically, histologically, and immunopathologically resemble those in patients with inflammatory EBA. Treatment of EBA is generally unsatisfactory. Some patients with inflammatory EBA may respond to systemic glucocorticoids, either alone or in combination with immunosuppressive agents. Other patients (especially those with neutrophil-rich inflammatory lesions) may respond to dapsone. The chronic, noninflammatory form of EBA is largely resistant to treatment, although some patients may respond to cyclosporine, azathioprine, or IVIg. Mucous membrane pemphigoid (MMP) is a rare, acquired, subepithelial immunobullous disease characterized by erosive lesions of mucous membranes and skin that result in scarring of at least some sites of involvement. Common sites include the oral mucosa (especially the gingiva) and conjunctiva; other sites that may be affected include the nasopharyngeal, laryngeal, esophageal, and anogenital mucosa. Skin lesions (present in about one-third of patients) tend to predominate on the scalp, face, and upper trunk and generally consist of a few scattered erosions or tense blisters on an erythematous or urticarial base. MMP is typically a chronic and progressive disorder. Serious complications may arise as a consequence of ocular, laryngeal, esophageal, or anogenital lesions. Erosive conjunctivitis may result in shortened fornices, symblepharon, ankyloblepharon, entropion, corneal opacities, and (in severe cases) blindness. 
Similarly, erosive lesions of the larynx may cause hoarseness, pain, and tissue loss that, if unrecognized and untreated, may eventuate in complete destruction of the airway. Esophageal lesions may result in stenosis and/or strictures that could place patients at risk for aspiration. Strictures may also complicate anogenital involvement. Biopsies of lesional tissue generally show subepithelial vesiculobullae and a mononuclear leukocytic infiltrate. Neutrophils and eosinophils may be seen in biopsies of early lesions; older lesions may demonstrate a scant leukocytic infiltrate and fibrosis. Direct immunofluorescence microscopy of perilesional tissue typically reveals deposits of IgG, IgA, and/or C3 in the epidermal basement membrane. Because many patients with MMP exhibit no evidence of circulating basement membrane autoantibodies, testing of perilesional skin is important diagnostically. Although MMP was once thought to be a single nosologic entity, it is now largely regarded as a disease phenotype that may develop as a consequence of an autoimmune reaction to a variety of molecules in the epidermal basement membrane (e.g., BPAG2, laminin-332, type VII collagen, and other antigens yet to be completely defined). Studies suggest that MMP patients with autoantibodies to laminin-332 have an increased relative risk for cancer. Treatment of MMP is largely dependent upon the sites of involvement. Due to potentially severe complications, patients with ocular, laryngeal, esophageal, and/or anogenital involvement require aggressive systemic treatment with dapsone, prednisone, or the latter in combination with another immunosuppressive agent (e.g., azathioprine, mycophenolate mofetil, cyclophosphamide, or rituximab) or IVIg. Less threatening forms of the disease may be managed with topical or intralesional glucocorticoids. The cutaneous manifestations of dermatomyositis (Chap. 
388) are often distinctive but at times may resemble those of systemic lupus erythematosus (SLE) (Chap. 378), scleroderma (Chap. 382), or other overlapping connective tissue diseases (Chap. 382). The extent and severity of cutaneous disease may or may not correlate with the extent and severity of the myositis. The cutaneous manifestations of dermatomyositis are similar, whether the disease appears in children or in the elderly, except that calcification of subcutaneous tissue is a common late sequela in childhood dermatomyositis. The cutaneous signs of dermatomyositis may precede or follow the development of myositis by weeks to years. Cases lacking muscle involvement (i.e., dermatomyositis sine myositis) have also been reported.
Figure 73-3 Dermatomyositis. Periorbital violaceous erythema characterizes the classic heliotrope rash. (Courtesy of James Krell, MD; with permission.)
The most common manifestation is a purple-red discoloration of the upper eyelids, sometimes associated with scaling (“heliotrope” erythema; Fig. 73-3) and periorbital edema. Erythema on the cheeks and nose in a “butterfly” distribution may resemble the malar eruption of SLE. Erythematous or violaceous scaling patches are common on the upper anterior chest, posterior neck, scalp, and the extensor surfaces of the arms, legs, and hands. Erythema and scaling may be particularly prominent over the elbows, knees, and dorsal interphalangeal joints. Approximately one-third of patients have violaceous, flat-topped papules over the dorsal interphalangeal joints that are pathognomonic of dermatomyositis (Gottron’s papules). Thin violaceous papules and plaques on the elbows and knees of patients with dermatomyositis are referred to as Gottron’s sign (Fig. 73-4). These lesions can be contrasted with the erythema and scaling on the dorsum of the fingers that spares the skin over the interphalangeal joints of some SLE patients. Periungual telangiectasia may be prominent in patients with dermatomyositis. 
Lacy or reticulated erythema may be associated with fine scaling on the extensor and lateral surfaces of the thighs and upper arms. Other patients, particularly those with long-standing disease, develop areas of hypopigmentation, hyperpigmentation, mild atrophy, and telangiectasia known as poikiloderma. Poikiloderma is rare in both SLE and scleroderma and thus can serve as a clinical sign that distinguishes dermatomyositis from these two diseases. Cutaneous changes may be similar in dermatomyositis and various overlap syndromes where thickening and binding down of the skin of the hands (sclerodactyly) as well as Raynaud’s phenomenon can be seen. However, the presence of severe muscle disease, Gottron’s papules, heliotrope erythema, and poikiloderma serves to distinguish patients with dermatomyositis. Skin biopsy of the erythematous, scaling lesions of dermatomyositis may reveal only mild nonspecific inflammation but sometimes may show changes indistinguishable from those found in SLE, including epidermal atrophy, hydropic degeneration of basal keratinocytes, edema of the upper dermis, and a mild mononuclear cell infiltrate. Direct immunofluorescence microscopy of lesional skin is usually negative, although granular deposits of immunoglobulin(s) and complement in the epidermal basement membrane zone have been described in some patients. Treatment should be directed at the systemic disease. Topical glucocorticoids are sometimes useful; patients should avoid exposure to ultraviolet irradiation and aggressively use photoprotective measures, including broad-spectrum sunscreens.
Figure 73-4 Gottron’s papules. Dermatomyositis often involves the hands as erythematous flat-topped papules over the knuckles. Periungual telangiectases are also evident.
The cutaneous manifestations of lupus erythematosus (LE) (Chap. 378) can be divided into acute, subacute, and chronic or discoid types. 
Acute cutaneous LE is characterized by erythema of the nose and malar eminences in a “butterfly” distribution (Fig. 73-5A). The erythema is often sudden in onset, accompanied by edema and fine scale, and correlated with systemic involvement.
Figure 73-5 Acute cutaneous lupus erythematosus (LE). A. Acute cutaneous LE on the face, showing prominent, scaly, malar erythema. Involvement of other sun-exposed sites is also common. B. Acute cutaneous LE on the upper chest, demonstrating brightly erythematous and slightly edematous papules and plaques. (B, Courtesy of Robert Swerlick, MD; with permission.)
Patients may have widespread involvement of the face as well as erythema and scaling of the extensor surfaces of the extremities and upper chest (Fig. 73-5B). These acute lesions, while sometimes evanescent, usually last for days and are often associated with exacerbations of systemic disease. Skin biopsy of acute lesions may show only a sparse dermal infiltrate of mononuclear cells and dermal edema. In some instances, cellular infiltrates around blood vessels and hair follicles are notable, as is hydropic degeneration of basal cells of the epidermis. Direct immunofluorescence microscopy of lesional skin frequently reveals deposits of immunoglobulin(s) and complement in the epidermal basement membrane zone. Treatment is aimed at control of systemic disease. Photoprotection is very important in this as well as in other forms of LE. Subacute cutaneous lupus erythematosus (SCLE) is characterized by a widespread photosensitive, nonscarring eruption. In most patients, renal and central nervous system involvement is mild or absent. SCLE may present as a papulosquamous eruption that resembles psoriasis or as annular polycyclic lesions that resemble those seen in erythema multiforme. 
In the papulosquamous form, discrete erythematous papules arise on the back, chest, shoulders, extensor surfaces of the arms, and dorsum of the hands; lesions are uncommon on the central face and the flexor surfaces of the arms as well as below the waist. These slightly scaling papules tend to merge into large plaques, some with a reticulate appearance. The annular form involves the same areas and presents with erythematous papules that evolve into oval, circular, or polycyclic lesions. The lesions of SCLE are more widespread but have less tendency for scarring than lesions of discoid LE. Skin biopsy reveals a dense mononuclear cell infiltrate around hair follicles and blood vessels in the superficial dermis, combined with hydropic degeneration of basal cells in the epidermis. Direct immunofluorescence microscopy of lesional skin reveals deposits of immunoglobulin(s) in the epidermal basement membrane zone in about one-half of these cases. A particulate pattern of IgG deposition throughout the epidermis has been associated with SCLE. Most SCLE patients have anti-Ro autoantibodies. Local therapy alone is usually unsuccessful. Most patients require treatment with aminoquinoline antimalarial drugs. Low-dose therapy with oral glucocorticoids is sometimes necessary. Photoprotective measures against both ultraviolet B and ultraviolet A wavelengths are very important. Discoid lupus erythematosus (DLE, also called chronic cutaneous LE) is characterized by discrete lesions, most often found on the face, scalp, and/or external ears. The lesions are erythematous papules or plaques with a thick, adherent scale that occludes hair follicles (follicular plugging). When the scale is removed, its underside shows small excrescences that correlate with the openings of hair follicles (so-called “carpet tacking”), a finding relatively specific for DLE. 
Long-standing lesions develop central atrophy, scarring, and hypopigmentation but frequently have erythematous, sometimes raised borders (Fig. 73-6). These lesions persist for years and tend to expand slowly. Up to 15% of patients with DLE eventually meet the American College of Rheumatology criteria for SLE. However, typical discoid lesions are frequently seen in patients with SLE. Biopsy of DLE lesions shows hyperkeratosis, follicular plugging, atrophy of the epidermis, hydropic degeneration of basal keratinocytes, and a mononuclear cell infiltrate adjacent to epidermal, adnexal, and microvascular basement membranes. Direct immunofluorescence microscopy demonstrates immunoglobulin(s) and complement deposits at the basement membrane zone in ~90% of cases. Treatment is focused on control of local cutaneous disease and consists mainly of photoprotection and topical or intralesional glucocorticoids. If local therapy is ineffective, use of aminoquinoline antimalarial agents may be indicated.
Figure 73-6 Discoid (chronic cutaneous) lupus erythematosus. Violaceous, hyperpigmented, atrophic plaques, often with evidence of follicular plugging that may result in scarring, are typical.
The skin changes of scleroderma (Chap. 382) usually begin on the hands, feet, and face, with episodes of recurrent nonpitting edema. Sclerosis of the skin commences distally on the fingers (sclerodactyly) and spreads proximally, usually accompanied by resorption of bone of the fingertips, which may have punched-out ulcers, stellate scars, or areas of hemorrhage (Fig. 73-7). The fingers may actually shrink and become sausage-shaped, and, because the fingernails are usually unaffected, they may curve over the end of the fingertips. Periungual telangiectases are usually present, but periungual erythema is rare. In advanced cases, the extremities show contractures and calcinosis cutis. 
Facial involvement includes a smooth, unwrinkled brow, taut skin over the nose, shrinkage of tissue around the mouth, and perioral radial furrowing (Fig. 73-8). Matlike telangiectases are often present, particularly on the face and hands. Involved skin feels indurated, smooth, and bound to underlying structures; hyper- and hypopigmentation are common as well. Raynaud’s phenomenon (i.e., cold-induced blanching, cyanosis, and reactive hyperemia) is documented in almost all patients and can precede development of scleroderma by many years. Linear scleroderma is a limited form of disease that presents in a linear, bandlike distribution and tends to involve deep as well as superficial layers of skin. The combination of calcinosis cutis, Raynaud’s phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia has been termed the CREST syndrome. Anticentromere antibodies have been reported in a very high percentage of patients with the CREST syndrome but in only a small minority of patients with scleroderma. Skin biopsy reveals thickening of the dermis and homogenization of collagen bundles. Direct immunofluorescence microscopy of lesional skin is usually negative.
Figure 73-7 Scleroderma showing acral sclerosis and focal digital ulcers.
Figure 73-8 Scleroderma often eventuates in development of an expressionless, masklike facies.
Morphea is characterized by localized thickening and sclerosis of skin, most commonly on the trunk. This disorder may affect children or adults. Morphea begins as erythematous or flesh-colored plaques that become sclerotic, develop central hypopigmentation, and have an erythematous border. In most cases, patients have one or a few lesions, and the disease is termed localized morphea. In some patients, widespread cutaneous lesions may occur without systemic involvement (generalized morphea). Many adults with generalized morphea have concomitant rheumatic or other autoimmune disorders. 
Skin biopsy of morphea is indistinguishable from that of scleroderma. Scleroderma and morphea are usually quite resistant to therapy. For this reason, physical therapy to prevent joint contractures and to maintain function is employed and is often helpful. Treatment options for early, rapidly progressive disease include phototherapy (UVA1 or PUVA) or methotrexate (15–20 mg/week) alone or in combination with daily glucocorticoids. Diffuse fasciitis with eosinophilia is a clinical entity that can sometimes be confused with scleroderma. There is usually a sudden onset of swelling, induration, and erythema of the extremities, frequently following significant physical exertion. The proximal portions of the extremities (upper arms, forearms, thighs, calves) are more often involved than are the hands and feet. While the skin is indurated, it usually displays a woody, dimpled, or “pseudocellulite” appearance rather than being bound down as in scleroderma; contractures may occur early secondary to fascial involvement. The latter may also cause muscle groups to be separated and veins to appear depressed (i.e., the “groove sign”). These skin findings are accompanied by peripheral-blood eosinophilia, increased erythrocyte sedimentation rate, and sometimes hypergammaglobulinemia. Deep biopsy of affected areas of skin reveals inflammation and thickening of the deep fascia overlying muscle. An inflammatory infiltrate composed of eosinophils and mononuclear cells is usually found. Patients with eosinophilic fasciitis appear to be at increased risk for developing bone marrow failure or other hematologic abnormalities. While the ultimate course of eosinophilic fasciitis is uncertain, many patients respond favorably to treatment with prednisone in doses of 40–60 mg/d. 
The eosinophilia-myalgia syndrome, a disorder with epidemic numbers of cases reported in 1989 and linked to ingestion of l-tryptophan manufactured by a single company in Japan, is a multisystem disorder characterized by debilitating myalgias and absolute eosinophilia in association with varying combinations of arthralgias, pulmonary symptoms, and peripheral edema. In a later phase (3–6 months after initial symptoms), these patients often develop localized sclerodermatous skin changes, weight loss, and/or neuropathy (Chap. 382). The precise cause of this syndrome, which may resemble other sclerotic skin conditions, is unknown. However, the implicated lots of l-tryptophan contained the contaminant 1,1-ethylidene bis[tryptophan]. This contaminant may be pathogenic or may be a marker for another substance that provokes the disorder. Cutaneous Drug Reactions Evidence suggests an immunologic basis for most acute drug eruptions. Drug reactions may result from immediate release of preformed Kanade Shinkai, Robert S. Stern, Bruce U. Wintroub mediators (e.g., urticaria, anaphylaxis), antibody-mediated reactions, Cutaneous reactions are among the most frequent adverse reactions to drugs. Most are benign, but a few can be life threatening. Prompt recognition of severe reactions, drug withdrawal, and appropriate therapeutic interventions can minimize toxicity. This chapter focuses on adverse cutaneous reactions to systemic medications; it covers their incidence, patterns, and pathogenesis and provides some practical guidelines on treatment, assessment of causality, and future use of drugs. In the United States, more than 3 billion prescriptions for over 60,000 drug products, which include more than 2000 different active agents, are dispensed annually. Hospital inpatients alone annually receive about 120 million courses of drug therapy, and half of adult Americans receive prescription drugs on a regular outpatient basis. 
Many patients use over-the-counter medicines that may cause adverse cutaneous reactions. Several large cohort studies established that acute cutaneous reactions to drugs affect about 3% of hospital inpatients. Reactions usually occur a few days to 4 weeks after initiation of therapy. Many commonly used drugs are associated with a 1–2% rate of rashes during premarketing clinical trials. The risk is often higher when medications are used in general, unselected populations; the rate may reach 3–7% for amoxicillin, sulfamethoxazole, many anticonvulsants, and anti-HIV agents. In addition to acute eruptions, a variety of skin diseases can be induced or exacerbated by prolonged use of drugs (e.g., pruritus, pigmentation, nail or hair disorders, psoriasis, bullous pemphigoid, photosensitivity, and even cutaneous neoplasms). These drug reactions are not frequent, but neither their incidence nor their impact on public health has been evaluated. In a series of 48,005 inpatients over a 20-year period, morbilliform rash (91%) and urticaria (6%) were the most frequent skin reactions. Severe reactions are too rare to be detected in such cohorts. Although rare, severe cutaneous reactions to drugs have an important impact on health because of significant sequelae, including mortality. Adverse drug rashes can prompt hospitalization, prolong hospital stays, and may be life threatening. Some populations are at increased risk of drug reactions, including patients with collagen vascular diseases, bone marrow graft recipients, and those with acute Epstein-Barr virus infection. The pathophysiology underlying this association is unknown but may be related to immunocompromise or immune dysregulation. 
Risk of drug allergy, including severe hypersensitivity reactions, is increased with HIV infection; individuals with advanced HIV disease (e.g., CD4 T lymphocyte count <200 cells/μL) have a forty- to fiftyfold increased risk of adverse reactions to sulfamethoxazole (Chap. 226). Adverse cutaneous responses to drugs can arise as a result of immunologic or nonimmunologic mechanisms. Examples of responses that arise from nonimmunologic mechanisms are pigmentary changes related to dermal accumulation of medications or their metabolites; alteration of hair follicles by antimetabolites and signaling inhibitors; and lipodystrophy associated with metabolic effects of anti-HIV medications. These side effects are mostly toxic, predictable, and sometimes can be avoided in part by simple preventive measures. Evidence suggests an immunologic basis for most acute drug eruptions. Drug reactions may result from immediate release of preformed mediators (e.g., urticaria, anaphylaxis), antibody-mediated reactions, immune complex deposition, and antigen-specific responses. Drug-specific T cell clones can be derived from the blood or from skin lesions of patients with a variety of drug allergies, strongly suggesting that these T cells play a role in drug allergy in an antigen-specific manner. Specific clones have been obtained with penicillin G, amoxicillin, cephalosporins, sulfamethoxazole, phenobarbital, carbamazepine, and lamotrigine (medications that are frequently a cause of drug eruptions). Both CD4 and CD8 clones have been obtained; however, their specific roles in the manifestations of allergy have not been elucidated. Drug presentation to T cells has been shown to be major histocompatibility complex (MHC)-restricted and may involve drug-peptide complex recognition by specific T cell receptors (TCRs). Once a drug has induced an immune response, the final phenotype of the reaction probably depends on the nature of the effectors: cytotoxic (CD8+) T cells in blistering and certain hypersensitivity reactions, chemokines for reactions mediated by neutrophils or eosinophils, and collaboration with B cells for production of specific antibodies for urticarial reactions. 
Immunologic reactions have recently been classified into further subtypes that provide a useful framework for designating adverse drug reactions based on involvement of specific immune pathways (Table 74-1). Immediate Reactions Immediate reactions depend on the release of mediators of inflammation by tissue mast cells or circulating basophils. These mediators include histamine, leukotrienes, prostaglandins, bradykinins, platelet-activating factor, enzymes, and proteoglycans. Drugs can trigger mediator release either directly ("anaphylactoid" reaction) or through specific IgE antibodies. These reactions usually manifest in the skin and the gastrointestinal, respiratory, and cardiovascular systems (Chap. 376). Primary symptoms and signs include pruritus, urticaria, nausea, vomiting, abdominal cramps, bronchospasm, laryngeal edema, and, occasionally, anaphylactic shock with hypotension and death. They occur within minutes of drug exposure. Nonsteroidal anti-inflammatory drugs (NSAIDs), including aspirin, and radiocontrast media are frequent causes of direct mast cell degranulation or anaphylactoid reactions, which can occur on first exposure. Penicillins and muscle relaxants used in general anesthesia are the most frequent causes of IgE-dependent reactions to drugs, which require prior sensitization. Release of mediators is triggered when polyvalent drug–protein conjugates cross-link IgE molecules fixed to sensitized cells. Certain routes of administration favor different clinical patterns (e.g., gastrointestinal effects from the oral route, circulatory effects from the intravenous route). Immune Complex–Dependent Reactions Serum sickness is produced by tissue deposition of circulating immune complexes with consumption of complement. It is characterized by fever, arthritis, nephritis, neuritis, edema, and a urticarial, papular, or purpuric rash (Chap. 385). 
First described following administration of nonhuman sera, serum sickness now occurs most often in the setting of monoclonal antibodies and similar medications. In classic serum sickness, symptoms develop 6 days or more after exposure to a drug, the latent period representing the time needed to synthesize antibody. Cutaneous or systemic vasculitis, a relatively rare complication of drugs, may also be a result of immune complex deposition (Chap. 385). Cephalosporins and other medications, including monoclonal antibodies such as infliximab, rituximab, and omalizumab, may be associated with clinically similar "serum sickness–like" reactions. The mechanism of this reaction is unknown but is unrelated to complement activation and immune complex formation. Delayed Hypersensitivity While not completely understood, delayed hypersensitivity directed by drug-specific T cells is an important mechanism underlying the most common drug eruptions, i.e., morbilliform eruptions, and also rare and severe forms such as drug-induced hypersensitivity syndrome (DIHS) (also known as drug rash with eosinophilia and systemic symptoms [DRESS]), acute generalized exanthematous pustulosis (AGEP), Stevens-Johnson syndrome (SJS), and toxic epidermal necrolysis (TEN) (Table 74-1). Drug-specific T cells have been detected in these types of drug eruptions. For example, drug-specific cytotoxic T cells have been detected in the skin lesions of fixed drug eruptions and of TEN. In TEN, skin lesions contain T lymphocytes reactive to autologous lymphocytes and keratinocytes in a drug-specific, HLA-restricted, and perforin/granzyme-mediated pathway. The mechanism(s) by which medications result in T cell activation is unknown. 
Two hypotheses prevail. First, the antigens driving these reactions may be the native drug itself or components of the drug covalently complexed with endogenous proteins, presented in association with HLA molecules to T cells through the classical antigen presentation pathway. Alternatively, the drug or its metabolite may interact directly with the T cell receptor or the peptide-loaded HLA molecule (the pharmacologic interaction of drugs with immune receptors, or p-i, hypothesis). Recent x-ray crystallography data characterizing binding between specific HLA molecules and particular drugs known to cause hypersensitivity reactions demonstrate unique alterations to the MHC peptide-binding groove, suggesting a molecular basis for T cell activation and the development of hypersensitivity reactions. Genetic determinants may predispose individuals to severe responses to drugs. Polymorphisms in cytochrome P450 enzymes, drug acetylation, methylation (such as thiopurine methyltransferase activity and azathioprine), and other forms of metabolism (such as glucose-6-phosphate dehydrogenase) may increase susceptibility to drug toxicity or underdosing, highlighting a role for differential pharmacokinetic or pharmacodynamic effects. Associations between drug hypersensitivities and HLA haplotypes also suggest a key role for immune mechanisms. Hypersensitivity to the anti-HIV medication abacavir is strongly associated with HLA-B*57:01 (Chap. 226). In Taiwan, within a homogeneous Han Chinese population, a 100% association was observed between SJS/TEN (but not DIHS) related to carbamazepine and HLA-B*15:02. In the same population, another 100% association was found between SJS, TEN, or DIHS related to allopurinol and HLA-B*58:01. These associations are drug and phenotype specific; that is, HLA-specific T cell stimulation by medications leads to distinct reactions and may explain why the reaction patterns are so clinically diverse. 
However, the strong associations found in Taiwan have not been observed in other countries with more heterogeneous populations. Recognition of the HLA associations with drug hypersensitivity has led to recommendations to screen high-risk populations. Genetic screening for HLA-B*57:01 to prevent abacavir hypersensitivity, which carries a 100% negative predictive value when patch test confirmed and a 55% positive predictive value generalizable across races, is becoming the clinical standard of care worldwide (number needed to treat = 13). The U.S. Food and Drug Administration recently mandated new labeling of carbamazepine recommending HLA-B*15:02 screening of Asian individuals prior to receiving a new prescription of the medication. The American College of Rheumatology has recommended HLA-B*58:01 screening of Han Chinese patients prescribed allopurinol. To date, screening for a single HLA (but not multiple HLA haplotypes) in specific populations has been determined to be cost-effective. Several investigators have proposed that specific HLA haplotypes associated with drug hypersensitivity indeed play a pathogenic role; stimulation of carbamazepine-specific cytotoxic T lymphocytes (CTLs) in the context of HLA-B*15:02 results in production of a putative mediator of keratinocyte necrosis in TEN. Other studies have identified CTLs reactive to carbamazepine that use highly restricted V-alpha and V-beta TCR repertoires in patients with carbamazepine hypersensitivity but that are not found in carbamazepine-tolerant individuals. Although such testing is not yet clinically available, some investigators have suggested combining genetic testing for specific HLA haplotypes with functional screening of the TCR repertoire to best identify patients at risk. NONIMMUNE CUTANEOUS REACTIONS Exacerbation or Induction of Dermatologic Diseases A variety of drugs can exacerbate preexisting diseases or sometimes induce a disease that may or may not disappear after withdrawal of the inducing medication. 
For example, NSAIDs, lithium, beta blockers, tumor necrosis factor (TNF) α antagonists, interferon (IFN) α, and angiotensin-converting enzyme (ACE) inhibitors can exacerbate plaque psoriasis, whereas antimalarials and withdrawal of systemic glucocorticoids can worsen pustular psoriasis. The situation with TNF-α inhibitors is unusual: this class of medications is used to treat psoriasis, yet in some cases it may induce psoriasis (especially palmar-plantar) in patients being treated for other conditions. Acne may be induced by glucocorticoids, androgens, lithium, and antidepressants. Follicular papular or pustular eruptions of the face and trunk, sometimes mimicking acne, frequently occur with epidermal growth factor (EGF) receptor antagonists; in this setting, the severity of the eruption correlates with a better anticancer effect. The eruption may become secondarily impetiginized and often spares areas of prior or active radiation. Tetracycline antibiotics, topical corticosteroids, and topical anti-acne treatments (such as benzoyl peroxide and clindamycin lotion) are helpful. Several medications induce or exacerbate autoimmune disease. Interleukin (IL) 2, IFN-α, and anti-TNF-α agents are associated with new-onset systemic lupus erythematosus (SLE). Drug-induced lupus is classically marked by antinuclear and antihistone antibodies and, in some cases, anti-double-stranded DNA (D-penicillamine, anti-TNF-α) or p-ANCA (minocycline) antibodies. Minocycline and thiazide diuretics may exacerbate subacute SLE; pemphigus can be induced by D-penicillamine and ACE inhibitors. Furosemide is associated with drug-induced bullous pemphigoid, and vancomycin with linear IgA bullous dermatosis, a transient blistering disorder. Other medications may cause highly selective cutaneous reactions. 
Gadolinium contrast has been associated with nephrogenic systemic fibrosis, a sclerosing skin condition with rare internal organ involvement; advanced renal compromise may be an important risk factor. Granulocyte colony-stimulating factor may induce various neutrophilic dermatoses, including Sweet syndrome and pyoderma gangrenosum. Both systemic and topical glucocorticoids cause a variety of skin changes, including atrophy and striae, and, in sufficiently high doses, can impede wound healing. The hypothesis that a drug may be responsible should always be considered, especially in cases with an atypical clinical presentation. Resolution of the cutaneous reaction may be delayed after discontinuation of the medication (e.g., lichenoid drug eruptions may take years to resolve). Photosensitivity Eruptions Photosensitivity eruptions are usually most marked in sun-exposed areas but may extend to sun-protected areas. The mechanism is almost always phototoxicity. Phototoxic reactions resemble sunburn and can occur with first exposure to a drug. Blistering may occur in drug-related pseudoporphyria, most commonly with NSAIDs (Fig. 74-1). The severity of the reaction depends on the tissue level of the drug, its efficiency as a photosensitizer, and the extent of exposure to the activating wavelengths of ultraviolet (UV) light (Chap. 75). Common orally administered photosensitizing drugs include fluoroquinolone and tetracycline antibiotics. Other, less frequently encountered culprits are chlorpromazine, thiazides, and NSAIDs. Voriconazole may result in severe photosensitivity, accelerated photo-induced aging, and cutaneous carcinogenesis in organ transplant recipients. Because UV-A and visible light, which trigger these reactions, are not easily absorbed by nonopaque sunscreens and are transmitted through window glass, photosensitivity reactions may be difficult to block. 
Photosensitivity reactions abate with removal of the drug or UV radiation, use of sunscreens that block UV-A light, and treating the reaction as one would a sunburn. Rarely, individuals develop persistent reactivity to light, necessitating long-term avoidance of sun exposure. Pigmentation Changes Drugs, either systemic or topical, may cause a variety of pigmentary changes in the skin. Oral contraceptives may induce melasma. Long-term minocycline, pefloxacin, and amiodarone may cause blue-gray pigmentation. Phenothiazines, gold, and bismuth result in gray-brown pigmentation of sun-exposed areas. Numerous cancer chemotherapeutic agents may be associated with characteristic patterns of pigmentation (e.g., bleomycin, busulfan, daunorubicin, cyclophosphamide, hydroxyurea, and methotrexate). Clofazimine causes a drug-induced lipofuscinosis with characteristic red-brown coloration. Hyperpigmentation of the face, mucous membranes, and pretibial and subungual areas occurs with antimalarials. Quinacrine causes generalized cutaneous yellow discoloration. Pigmentation changes may also occur in mucous membranes (busulfan, bismuth), conjunctiva (chlorpromazine, thioridazine, imipramine, clomipramine), nails (zidovudine, doxorubicin, cyclophosphamide, bleomycin, fluorouracil, hydroxyurea), hair, and teeth (tetracyclines). Warfarin Necrosis of Skin This rare reaction (0.01–0.1%) usually occurs between the third and tenth days of therapy with warfarin, usually in women. Common sites are the breasts, thighs, and buttocks (Fig. 74-2). Lesions are sharply demarcated, indurated, and erythematous or purpuric and may progress to form large, hemorrhagic bullae with eventual necrosis and slow-healing eschar formation. These lesions can be life threatening. Development of the syndrome is unrelated to drug dose, and the course is not altered by discontinuation of the drug after onset of the eruption. 
Warfarin anticoagulation in heterozygous protein C deficiency causes a precipitous fall in circulating levels of protein C, permitting hypercoagulability and thrombosis in the cutaneous microvasculature, with consequent areas of necrosis. Heparin-induced necrosis may have clinically similar features but is probably due to heparin-induced platelet aggregation with subsequent occlusion of blood vessels; it can affect areas adjacent to the injection site or more distant sites if infused. Warfarin-induced cutaneous necrosis is treated with vitamin K, heparin, surgical debridement, and intensive wound care. Treatment with protein C concentrates may also be helpful. Newer agents such as dabigatran etexilate may avoid warfarin necrosis in high-risk patients. 
FIGURE 74-1 Pseudoporphyria due to nonsteroidal anti-inflammatory drugs. 
FIGURE 74-2 Warfarin necrosis. 
Drug-Induced Hair Disorders • DRUG-INDUCED HAIR LOSS Medications may affect hair follicles at two different phases of their growth cycle: anagen (growth) or telogen (resting). Anagen effluvium occurs within days of drug administration, especially with antimetabolite or other chemotherapeutic drugs. In contrast, in telogen effluvium, the delay is 2 to 4 months following initiation of a new medication. Both present as diffuse nonscarring alopecia that is most often reversible after discontinuation of the responsible agent. The prevalence and severity of alopecia depend on the drug as well as on an individual’s predisposition. A considerable number of drugs have been reported to induce hair loss. These include antineoplastic agents (alkylating agents, bleomycin, vinca alkaloids, platinum compounds), anticonvulsants (carbamazepine, valproate), antihypertensive drugs (beta blockers), antidepressants, antithyroid drugs, IFNs (especially IFN-α), oral contraceptives, and cholesterol-lowering agents. DRUG-INDUCED HAIR GROWTH Medications may also cause hair growth. 
Hirsutism is excessive growth of terminal hair in a masculine pattern in a female, most often on the face and trunk, due to androgenic stimulation of hormone-sensitive hair follicles (anabolic steroids, oral contraceptives, testosterone, corticotropin). Hypertrichosis is a distinct pattern of excessive hair growth, not in a masculine pattern, typically located on the forehead and temporal regions of the face. Drugs responsible for hypertrichosis include anti-inflammatory drugs, glucocorticoids, vasodilators (diazoxide, minoxidil), diuretics (acetazolamide), anticonvulsants (phenytoin), immunosuppressive agents (cyclosporine A), psoralens, and zidovudine. Changes in hair color or structure are uncommon adverse effects of medications. Hair discoloration may occur with chloroquine, IFN-α, chemotherapeutic agents, and tyrosine kinase inhibitors. Changes in hair structure have been observed in patients given epidermal growth factor receptor (EGFR) inhibitors, tyrosine kinase inhibitors (Fig. 74-3), and acitretin. Drug-Induced Nail Disorders Drug-related nail disorders usually involve all 20 nails and need months to resolve after withdrawal of the offending agent. The pathogenesis is most often toxic. Drug-induced nail changes include Beau’s lines (transverse depressions of the nail plate), onycholysis (detachment of the distal part of the nail plate), onychomadesis (detachment of the proximal part of the nail plate), pigmentation, and paronychia (inflammation of periungual skin). ONYCHOLYSIS Onycholysis occurs with tetracyclines, fluoroquinolones, phenothiazines, and psoralens, as well as in persons taking NSAIDs, captopril, retinoids, sodium valproate, and many chemotherapeutic agents such as anthracyclines and taxanes (paclitaxel, docetaxel). The risk of onycholysis in patients receiving cytotoxic drugs, tetracyclines, quinolones, phenothiazines, and psoralens can be increased by exposure to sunlight. 
ONYCHOMADESIS Onychomadesis is caused by temporary arrest of nail matrix mitotic activity. Common drugs reported to induce onychomadesis include carbamazepine, lithium, retinoids, and chemotherapeutic agents such as cyclophosphamide and vincristine. PARONYCHIA Paronychia and multiple pyogenic granulomas (Fig. 74-4), with progressive and painful periungual abscesses of the fingers and toes, are side effects of systemic retinoids, lamivudine, indinavir, and anti-EGFR agents (cetuximab, gefitinib). 
FIGURE 74-3 Dysmorphic eyelashes in association with erlotinib. 
FIGURE 74-4 Pyogenic granuloma in association with isotretinoin. 
NAIL DISCOLORATION Some drugs—including anthracyclines, taxanes, fluorouracil, psoralens, and zidovudine—may induce nail bed hyperpigmentation through melanocyte stimulation. The change appears to be reversible and dose-dependent. Toxic Erythema of Chemotherapy and Other Chemotherapy Reactions Because many agents used in cancer chemotherapy inhibit cell division, rapidly proliferating elements of the skin, including hair, mucous membranes, and appendages, are sensitive to their effects. A broad spectrum of chemotherapy-related skin toxicities has been reported, including neutrophilic eccrine hidradenitis, sterile cellulitis, exfoliative dermatitis, and flexural erythema; although previously designated as distinct skin eruptions, recent nomenclature classifies these under the unifying diagnosis of toxic erythema of chemotherapy (TEC). Acral erythema, marked by dysesthesia and an erythematous, edematous eruption of the palms and soles, is caused by cytarabine, doxorubicin, methotrexate, hydroxyurea, and fluorouracil and may be alleviated by pyridoxine supplementation. 
The recent introduction of many new monoclonal antibodies and small-molecule signaling inhibitors for the treatment of cancer has been accompanied by numerous reports of skin and hair toxicity; only the most common of these are mentioned here. Cetuximab and other EGFR antagonists induce follicular eruptions and nail toxicity after a mean interval of 10 days in a majority of patients. Xerosis, eczematous eruptions, acneiform eruptions, and pruritus are common. Erlotinib is associated with marked hair textural changes (Fig. 74-3). Sorafenib, a tyrosine kinase inhibitor, may result in follicular eruptions and bullous palmoplantar eruptions with dysesthesia (Fig. 74-5). BRAF inhibitors are associated with photosensitivity, dyskeratotic (Grover’s-like) rash, hyperkeratotic benign cutaneous neoplasms, and keratoacanthoma-like squamous cell carcinomas. Rash, pruritus, and vitiliginous depigmentation have been reported in association with ipilimumab (anti-CTLA4) treatment. IMMUNE CUTANEOUS REACTIONS: COMMON Maculopapular Eruptions Morbilliform or maculopapular eruptions (Fig. 74-6) are the most common of all drug-induced reactions; they often start on the trunk or in intertriginous areas and consist of erythematous macules and papules that are symmetric and confluent. Involvement of mucous membranes is unusual; the eruption may be associated with moderate to severe pruritus and fever. Diagnosis is rarely assisted by laboratory testing; skin biopsy often shows nonspecific inflammatory changes. A viral exanthem is the principal differential diagnostic consideration, especially in children, and graft-versus-host disease is also a consideration in the proper clinical setting. Absence of enanthems; absence of ear, nose, throat, and upper respiratory tract symptoms; and polymorphism of the skin lesions support a drug rather than a viral eruption. 
Certain medications, including nevirapine and lamotrigine, carry very high rates of morbilliform eruption, even in the absence of hypersensitivity reactions. Lamotrigine morbilliform rash is associated with higher starting doses, rapid dose escalation, concomitant use of valproate (which increases lamotrigine levels and half-life), and use in children, especially those with seizure disorders. 
FIGURE 74-5 Sorafenib-associated hand-foot syndrome. 
Maculopapular reactions usually develop within 1 week of initiation of therapy and last less than 2 weeks. Occasionally, these eruptions resolve despite continued use of the responsible drug. Because the eruption may also worsen, the suspect drug should be discontinued unless it is essential; notably, the rash may continue to progress for several days to one week after the medication is discontinued. Oral antihistamines and emollients may help relieve pruritus, and short courses of potent topical glucocorticoids can reduce inflammation and symptoms. Systemic glucocorticoid treatment is rarely indicated. Pruritus Pruritus is associated with almost all drug eruptions and, in some cases, may represent the only symptom of the adverse cutaneous reaction. It is usually alleviated by antihistamines such as hydroxyzine or diphenhydramine. Pruritus stemming from specific medications may require distinct treatment; opiate-related pruritus, for example, may require selective opiate antagonists for relief. Pruritus is a common complication of antimalarial therapy, occurring in up to 50% of black patients receiving chloroquine, and may be severe enough to lead to discontinuation of treatment; it is much rarer in Caucasians taking chloroquine. Intense pruritus, sometimes accompanied by an eczematous rash, may occur in 20% of patients receiving IFN and ribavirin for hepatitis C; addition of the protease inhibitor telaprevir may increase this occurrence to 50% of treated patients. 
FIGURE 74-6 Morbilliform drug eruption. 
Urticaria/Angioedema/Anaphylaxis Urticaria, the second most frequent type of cutaneous reaction to drugs, is characterized by pruritic, red wheals of varying size that rarely last more than 24 h. It has been observed in association with nearly all drugs, most frequently ACE inhibitors, aspirin, NSAIDs, penicillin, and blood products; however, medications account for no more than 10–20% of acute urticaria cases. Deep edema within dermal and subcutaneous tissues is known as angioedema and may involve respiratory and gastrointestinal mucous membranes as well. Urticaria and angioedema may be part of a life-threatening anaphylactic reaction. Drug-induced urticaria may be caused by three mechanisms: an IgE-dependent mechanism, circulating immune complexes (serum sickness), and nonimmunologic activation of effector pathways. IgE-dependent urticarial reactions usually occur within 36 h of drug exposure but can occur within minutes. Immune complex–induced urticaria associated with serum sickness–like reactions usually occurs 6–12 days after first exposure; in this syndrome, the urticarial eruption (typically polycyclic plaques) may be accompanied by fever, hematuria, arthralgias, hepatic dysfunction, and neurologic symptoms. Certain drugs, such as NSAIDs, ACE inhibitors, angiotensin II antagonists, radiographic dye, and opiates, may induce urticarial reactions, angioedema, and anaphylaxis in the absence of drug-specific antibody through direct mast-cell degranulation. Radiocontrast agents are a common cause of urticaria and, in rare cases, can cause anaphylaxis. High-osmolality radiocontrast media were about five times more likely to induce urticaria (1%) or anaphylaxis than were newer low-osmolality media. About one-third of those with mild reactions to previous exposure react on reexposure. Pretreatment with prednisone and diphenhydramine reduces reaction rates. 
Persons with a reaction to a high-osmolality contrast medium may be given low-osmolality media if later contrast studies are required. The treatment of urticaria or angioedema depends on the severity of the reaction. In severe cases with respiratory or cardiovascular compromise, epinephrine is the mainstay of therapy, although its effect is reduced in patients using beta blockers; treatment with intravenous systemic glucocorticoids is also helpful. For patients with urticaria without symptoms of angioedema or anaphylaxis, drug withdrawal and oral antihistamines are usually sufficient. Future avoidance of the drug is recommended; rechallenge, especially in individuals with severe reactions, should occur only in an intensive care setting. Anaphylactoid Reactions Vancomycin is associated with red man syndrome, a histamine-related anaphylactoid reaction characterized by flushing, a diffuse maculopapular eruption, and hypotension. In rare cases, cardiac arrest may be associated with rapid IV infusion of the medication. Irritant/Allergic Contact Dermatitis Patients using topical medications may develop an irritant or allergic contact dermatitis to the medication itself or to a preservative or other component of the formulation. Reactions to chlorhexidine, neomycin sulfate, and polymyxin B are common. Allergic contact dermatitis to topical glucocorticoids may also occur and is paradoxically partially masked by the anti-inflammatory nature of the medication itself; typically, this allergy is selective for one of the four classes of glucocorticoids, as subdivided by allergenic properties. Patch testing can be useful to determine whether a patient is steroid allergic. Desoximetasone is rarely allergenic. Fixed Drug Eruptions These less common reactions are characterized by one or more sharply demarcated, dull red to brown lesions, sometimes with a central bulla (Fig. 74-7). Hyperpigmentation often results after resolution of the acute inflammation. 
With rechallenge, the lesion recurs in the same (i.e., fixed) location. Lesions often involve the lips, hands, legs, face, genitalia, and oral mucosa and cause a burning sensation. Most patients have multiple lesions. Fixed drug eruptions have been associated with pseudoephedrine (frequently a nonpigmented reaction), phenolphthalein (in laxatives), sulfonamides, tetracyclines, NSAIDs, and barbiturates. 
FIGURE 74-7 Fixed drug eruption. 
IMMUNE CUTANEOUS REACTIONS: RARE AND SEVERE Vasculitis Cutaneous small-vessel vasculitis often presents as palpable purpuric lesions that may be generalized or limited to the lower extremities or other dependent areas (Chap. 385). Pustular lesions and hemorrhagic blisters also occur. Vasculitis may involve other organs, including the liver, kidney, brain, and joints. Drugs are implicated as a cause of 10–15% of all cases of small-vessel vasculitis; infection, malignancy, and collagen vascular disease are responsible for the majority of non-drug-related cases. Propylthiouracil induces a cutaneous vasculitis that is accompanied by leukopenia and splenomegaly. Direct immunofluorescent changes in these lesions suggest immune-complex deposition. Common drugs implicated in vasculitis include allopurinol, thiazides, sulfonamides, antimicrobials, and NSAIDs. The presence of eosinophils in the perivascular infiltrate on skin biopsy suggests a drug etiology. Pustular Eruptions AGEP is a rare reaction pattern (3–5 cases per million per year) that is often associated with exposure to drugs (Fig. 74-8). Usually beginning on the face or in intertriginous areas, small nonfollicular pustules overlying erythematous and edematous skin may coalesce and lead to superficial erosion. Differentiating this eruption from TEN in its initial stages may be difficult. 
A skin biopsy is important and shows neutrophil collections and sparse necrotic keratinocytes in the upper part of the epidermis instead of the full-thickness epidermal necrosis that characterizes TEN. Fever and leukocytosis are common, and eosinophilia occurs in one-third of cases. Acute pustular psoriasis is the principal differential diagnostic consideration. DIHS with pustular features must also be considered clinically, although the timing of onset of DIHS is distinct (much later). AGEP often begins within a few days of initiating drug treatment, most notably antibiotics, but may occur as late as 7–14 days after initiation of treatment. A broad range of other causes, including drugs (e.g., anticonvulsants), mercury, radiocontrast dye, and infections (viral, Mycoplasma), is also associated with AGEP. Patch testing with the responsible drug results in a localized pustular eruption. Figure 74-8 Acute generalized exanthematous pustulosis. Drug-Induced Hypersensitivity Syndrome Drug-induced hypersensitivity syndrome (DIHS) is a multiorgan drug reaction previously known as DRESS (drug reaction with eosinophilia and systemic symptoms); since eosinophilia is not always present, the term DIHS is now preferred. Allopurinol is the most common cause. Although less frequently prescribed, abacavir has been reported to cause DIHS with an incidence as high as 4–8%. DIHS presents as a widespread erythematous eruption that may become purpuric, pustular, or lichenoid and is accompanied by many of the following features: fever, facial edema, lymphadenopathy, leukocytosis (often with atypical lymphocytes and eosinophilia), hepatitis, myositis (including myocarditis), and sometimes nephritis (with proteinuria) or pneumonitis. Distinct patterns of timing of onset and organ involvement may exist; e.g., allopurinol classically induces DIHS with renal involvement.
Cardiac and lung involvement is more common with minocycline; gastrointestinal involvement is almost exclusively seen with abacavir; and some medications typically lack eosinophilia (abacavir, dapsone, lamotrigine). The cutaneous reaction usually begins 2–8 weeks after the drug is started and lasts longer after drug cessation than do mild eruptions. Signs and symptoms may persist for several weeks, especially those associated with hepatitis. The eruption recurs with rechallenge, and cross-reactions among aromatic anticonvulsants, including phenytoin, carbamazepine, and barbiturates, are frequent. Other drugs causing this syndrome include sulfonamides and other antibiotics. Hypersensitivity to reactive drug metabolites (hydroxylamine for sulfamethoxazole, arene oxide for aromatic anticonvulsants) may be involved in the pathogenesis of DIHS. Reactivation of herpesviruses, especially herpesvirus 6 and Epstein-Barr virus (EBV), has been frequently reported in this syndrome, although the causal role of viral infection has been debated. Recent research suggests that inciting drugs may reactivate quiescent herpesviruses, resulting in expansion of virus-specific CD8+ T lymphocytes with subsequent end-organ damage. Viral reactivation may be associated with a worse clinical prognosis. Mortality rates as high as 10% have been reported; mortality is highest in association with hepatitis. Systemic glucocorticoids (prednisone, 1–2 mg/kg daily) should be started, with a slow taper over 8–12 weeks. A steroid-sparing agent, such as mycophenolate mofetil, may be indicated in cases of rapid recurrence upon steroid taper. In all cases, rapid withdrawal of the suspected drug is required. Given the severe long-term complications of myocarditis, patients should undergo cardiac evaluation if heart involvement is suggested by hypotension or arrhythmia.
Patients should be closely monitored for resolution of organ dysfunction and for development of late-onset autoimmune thyroiditis (up to 6 months). Stevens-Johnson Syndrome and Toxic Epidermal Necrolysis SJS and TEN are characterized by blisters and mucosal/epidermal detachment resulting from full-thickness epidermal necrosis in the absence of substantial dermal inflammation (Fig. 74-9). The term Stevens-Johnson syndrome describes cases in which blisters develop on target lesions or on dusky or purpuric macules, mucosal involvement is significant, and blistering with eventual detachment affects <10% of the total body surface area. The term Stevens-Johnson syndrome/toxic epidermal necrolysis overlap describes cases with 10–30% detachment, and TEN describes cases with >30% detachment. Other blistering eruptions with mucositis associated with infections may be confused with SJS/TEN. Erythema multiforme (EM) associated with herpes simplex virus is characterized by mucosal involvement and target lesions that are often more acrally distributed, with limited skin detachment. Mycoplasma infection in children causes a clinically distinct presentation with prominent mucositis and limited blistering lesions; some believe that this clinical entity is the syndrome originally described by Stevens and Johnson. Figure 74-9 Toxic epidermal necrolysis. (Photo credit: Lindy Peta Fox, MD, and Jubin Ryu, MD, PhD.) Patients with SJS, SJS/TEN, or TEN initially present with acute onset of painful skin lesions, fever >39°C (102.2°F), sore throat, and conjunctivitis resulting from mucosal lesions. Intestinal and pulmonary involvement is associated with a poor prognosis, as are a greater extent of epidermal detachment and older age. About 10% of SJS-affected and 30% of TEN-affected persons die from their disease.
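The detachment thresholds that separate SJS, SJS/TEN overlap, and TEN form a simple decision rule. The sketch below encodes only that rule; the function name is mine, and real diagnosis also requires compatible mucosal and cutaneous findings, which this does not capture.

```python
def classify_epidermal_detachment(percent_bsa_detached: float) -> str:
    """Classify along the SJS/TEN spectrum by % body surface area detached.

    Thresholds follow the text: <10% -> SJS, 10-30% -> SJS/TEN overlap,
    >30% -> TEN. Illustrative only; not a clinical tool.
    """
    if not 0 <= percent_bsa_detached <= 100:
        raise ValueError("percent BSA detached must be between 0 and 100")
    if percent_bsa_detached < 10:
        return "SJS"
    if percent_bsa_detached <= 30:
        return "SJS/TEN overlap"
    return "TEN"
```

For example, a patient with 20% detachment falls in the overlap category under this rule.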
Drugs that most commonly cause SJS or TEN are sulfonamides, nevirapine (1 in 1000 risk of SJS or TEN), allopurinol, lamotrigine, aromatic anticonvulsants, and NSAIDs of the oxicam type. Frozen-section skin biopsy may aid in rapid diagnosis. At this time, SJS and TEN have no proven effective treatment. The best results come from early diagnosis, immediate discontinuation of any suspected drug, supportive therapy, and close attention to ocular complications and infection. Systemic glucocorticoid therapy (prednisone, 1–2 mg/kg) may be useful early in the evolution of the disease, but long-term systemic glucocorticoid use has been associated with higher mortality. Cyclosporine may be a possible therapy for SJS/TEN. After initial enthusiasm for the use of intravenous immunoglobulin (IVIG) in the treatment of SJS/TEN, some recent data question whether IVIG benefits these patients. Randomized studies to assess more definitively the potential benefit of systemic glucocorticoids and IVIG are lacking and difficult to perform but are necessary. Overlap Hypersensitivity Syndromes An important emerging concept in the clinical approach to severe drug eruptions is the presence of overlap syndromes, most notably DIHS with TEN-like features, DIHS with pustular eruption (AGEP-like), and AGEP with TEN-like features. In several case series of AGEP, 50% of cases had TEN-like or DRESS-like features, and 20% of cases had mucosal involvement resembling SJS/TEN. In one study, up to 20% of all severe drug eruptions had overlap features, suggesting that AGEP, DIHS, and SJS/TEN represent a clinical spectrum with common pathophysiologic mechanisms. Designation of a single diagnosis based on cutaneous and extracutaneous involvement may not always be possible in cases of hypersensitivity. There are four main questions to answer regarding an eruption: 1. Is it a drug reaction? 2. Is it a severe eruption or the onset of a form that may become severe? 3.
Which drug(s) is (are) suspected, and which drug(s) should be withdrawn? 4. What is recommended with regard to future use of drugs? Table 74-2 Clinical and Laboratory Features Suggesting That a Cutaneous Drug Reaction May Be Serious: generalized erythema; facial edema; skin pain; palpable purpura; target lesions; skin necrosis; blisters or epidermal detachment; positive Nikolsky's sign; mucous membrane erosions; urticaria; swelling of the tongue; high fever (temperature >40°C [>104°F]); enlarged lymph nodes; arthralgias or arthritis; shortness of breath, wheezing, or hypotension; lymphocytosis with atypical lymphocytes. Source: Adapted from JC Roujeau, RS Stern: N Engl J Med 331:1272, 1994. Rapid recognition of adverse drug reactions that may become serious or life threatening is paramount. Table 74-2 lists clinical and laboratory features that, if present, suggest that the reaction may be serious. Table 74-3 provides key features of the most serious adverse cutaneous reactions. Intensity of symptoms and rapid progression of signs should raise the suspicion of a severe eruption. Any doubt should lead to prompt consultation with a dermatologist and/or referral of the patient to a specialized center. The probability of a drug etiology varies with the pattern of the reaction. Only fixed drug eruptions are always drug-induced. Morbilliform eruptions are usually viral in children and drug-induced in adults. Among severe reactions, drugs account for 10–20% of cases of anaphylaxis and vasculitis and for 70–90% of cases of AGEP, DIHS, SJS, or TEN. Skin biopsy helps in characterizing the reaction but does not indicate drug causality. Blood counts and liver and renal function tests are important for evaluating organ involvement. The association of mildly elevated liver enzymes and a high eosinophil count is frequent but not specific for a drug reaction. Blood tests that could identify an alternative cause, antihistone antibody tests (to rule out drug-induced lupus), and serology or polymerase chain reaction for infections may be of great importance in determining a cause.
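The warning features adapted from Table 74-2 amount to a checklist that can be screened mechanically. This is a minimal illustrative sketch, not a validated triage tool; the feature strings, set name, and function are my own choices.

```python
# Red flags adapted from Table 74-2 (Roujeau and Stern) suggesting a
# cutaneous drug reaction may be serious. Phrasing is illustrative.
SEVERITY_RED_FLAGS = {
    "generalized erythema", "facial edema", "skin pain", "palpable purpura",
    "target lesions", "skin necrosis", "blisters or epidermal detachment",
    "positive nikolsky's sign", "mucous membrane erosions", "urticaria",
    "swelling of tongue", "high fever", "enlarged lymph nodes",
    "arthralgias or arthritis", "shortness of breath", "wheezing",
    "hypotension", "lymphocytosis with atypical lymphocytes",
}

def flag_serious_reaction(findings):
    """Return the sorted subset of findings that are severity red flags."""
    return sorted(SEVERITY_RED_FLAGS & {f.lower() for f in findings})
```

Any nonempty result would prompt the escalation the text describes: dermatology consultation or referral to a specialized center.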
Most cases of drug eruptions occur during the first course of treatment with a new medication. A notable exception is IgE-mediated urticaria and anaphylaxis, which require presensitization and develop a few minutes to a few hours after rechallenge. Characteristic times of onset are as follows: 4–14 days for morbilliform eruptions, 2–4 days for AGEP, 5–28 days for SJS/TEN, and 14–48 days for DIHS. A drug chart, compiling information on all current and past medications/supplements and the timing of administration relative to the rash, is a key diagnostic tool for identifying the inciting drug. Medications introduced for the first time in the relevant time frame are prime suspects. Two other important elements in suspecting causality at this stage are (1) previous experience with the drug in the population and (2) alternative etiologic candidates. aOverlap of Stevens-Johnson syndrome and toxic epidermal necrolysis: cases have features of both, and detachment of 10–30% of body surface area may occur. Source: Adapted from JC Roujeau, RS Stern: N Engl J Med 331:1272, 1994. The decision to continue or discontinue any medication will depend on the severity of the reaction, the severity of the primary disease, the degree of suspicion of causality, and the feasibility of an alternative, safer treatment. In any potentially fatal drug reaction, elimination of all possible suspect drugs or unnecessary medications should be attempted. Some rashes may resolve when "treating through" a benign drug-related eruption. The decision to treat through an eruption should, however, remain the exception, and withdrawal of every suspect drug the general rule. On the other hand, drugs that are not suspected and are important for the patient (e.g., antihypertensive agents) generally should not be quickly withdrawn. This approach prevents reluctance to future use of these agents. The usefulness of laboratory tests to determine causality is still debated. Many in vitro immunologic assays have been developed, but the predictive value of these tests has not been validated in any large series of affected patients; these tests exist primarily for research, not clinical, purposes. In some cases, diagnostic rechallenge may be appropriate, even for drugs with high rates of adverse reactions. Desensitization is often successful in HIV-infected patients with morbilliform eruptions to sulfonamides but is not recommended in HIV-infected patients who manifested erythroderma or a bullous reaction in response to their earlier sulfonamide exposure. In patients with a history suggesting immediate IgE-mediated reactions to penicillin, skin-prick testing with penicillins or cephalosporins has proved useful for identifying patients at risk of anaphylactic reactions to these agents. However, skin tests themselves carry a small risk of anaphylaxis. Negative skin tests do not totally rule out IgE-mediated reactivity, but the risk of anaphylaxis in response to penicillin administration in patients with negative skin tests is about 1%. In contrast, two-thirds of patients with a positive skin test experience an allergic response upon rechallenge. For patients with delayed-type hypersensitivity, the clinical utility of skin tests is more questionable. At least one of a combination of several tests (prick, patch, and intradermal) is positive in 50–70% of patients with a reaction "definitely" attributed to a single medication. This low sensitivity corresponds to the observation that readministration of drugs with negative skin testing resulted in eruptions in 17% of cases. The aims are (1) to prevent recurrence of the drug eruption and (2) not to compromise future treatments by contraindicating otherwise useful medications. Begin with a thorough assessment of drug causality. Drug causality is evaluated based on the timing of the reaction, evaluation of other possible causes, the effect of drug withdrawal or continuation, and knowledge of medications that have been associated with the observed reaction. Combination of these criteria leads to classifying causality as definite, probable, possible, or unlikely. The RegiSCAR group has proposed a useful algorithm, the Algorithm of Drug Causality for Epidermal Necrolysis (ALDEN), to determine drug causality in SJS/TEN. A drug with "definite" or "probable" causality should be contraindicated, a warning card or medical alert tag (e.g., wristband) should be given to the patient, and the drug should be listed in the patient's medical chart as an allergy. A drug with "possible" causality may be submitted to further investigation depending on the expected need for future treatment. A drug with "unlikely" causality, or one that was continued while the reaction improved or was reintroduced without a reaction, can be administered safely. CROSS-SENSITIVITY Because of the possibility of cross-sensitivity among chemically related drugs, many physicians recommend avoidance of not only the medication that induced the reaction but also all drugs of the same pharmacologic class. There are two types of cross-sensitivity. Reactions that depend on a pharmacologic interaction may occur with all drugs that target the same pathway, whether or not they are structurally similar. This is the case with angioedema caused by NSAIDs and ACE inhibitors. In this situation, the risk of recurrence varies from drug to drug in a particular class; however, avoidance of all drugs in the class is usually recommended. Immune recognition of structurally related drugs is the second mechanism by which cross-sensitivity occurs. A classic example is hypersensitivity to aromatic antiepileptics (barbiturates, phenytoin, carbamazepine), with up to 50% of patients who react to one drug reacting to a second.
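The causality workflow discussed above combines two lookups: whether the interval from drug start to rash fits the characteristic onset window for the eruption type, and what action each causality grade implies. The onset windows and grade-based recommendations are taken from the text; the data structures and names below are illustrative and are not part of the ALDEN algorithm itself.

```python
# Characteristic onset windows (days after drug initiation), per the text.
ONSET_WINDOWS_DAYS = {
    "morbilliform": (4, 14),
    "AGEP": (2, 4),
    "SJS/TEN": (5, 28),
    "DIHS": (14, 48),
}

def onset_compatible(eruption: str, days_since_start: int) -> bool:
    """True if the drug-start-to-rash interval fits the eruption type."""
    lo, hi = ONSET_WINDOWS_DAYS[eruption]
    return lo <= days_since_start <= hi

# Action implied by each causality grade, per the text's recommendations.
RECOMMENDATION = {
    "definite": "contraindicate; give alert tag; chart as allergy",
    "probable": "contraindicate; give alert tag; chart as allergy",
    "possible": "further investigation if future treatment is expected",
    "unlikely": "may be administered safely",
}
```

A drug started 3 days before an AGEP-type eruption fits the window, whereas one started 5 days before a DIHS-type eruption does not, which is why a dated drug chart is such a useful tool.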
For other drugs, in vitro as well as in vivo data have suggested that cross-reactivity exists only between compounds with very similar chemical structures. Sulfamethoxazole-specific lymphocytes may be activated by other antibacterial sulfonamides but not by diuretics, antidiabetic drugs, or anti-COX-2 NSAIDs with a sulfonamide group. Approximately 10% of patients with penicillin allergies will also develop allergic reactions to cephalosporin-class antibiotics. Recent data suggest that although the risk of a drug eruption to another drug is increased in persons with a prior reaction, "cross-sensitivity" is probably not the explanation. As an example, persons with a history of an allergic-like reaction to penicillin were at higher risk of developing a reaction to antibacterial sulfonamides than to cephalosporins. These data suggest that the list of drugs to avoid after a drug reaction should be limited to the causative one(s) and to a few very similar medications. Because of growing evidence that some severe cutaneous reactions to drugs are associated with HLA genes, it is recommended that first-degree family members of patients with severe cutaneous reactions also avoid these causative medications. This may be most relevant to sulfonamides and antiseizure medications. Desensitization can be considered in those with a history of reaction to a medication that must be used again. Efficacy of such procedures has been demonstrated in cases of immediate reaction to penicillin with positive skin tests, anaphylactic reactions to platinum chemotherapy, and delayed reactions to sulfonamides in patients with AIDS. Various protocols are available, including oral and parenteral approaches. Oral desensitization appears to have a lower risk of serious anaphylactic reactions. However, desensitization carries the risk of anaphylaxis regardless of how it is performed and should be undertaken in monitored clinical settings such as an intensive care unit.
After desensitization, many patients experience non-life-threatening reactions during therapy with the culprit drug. Any severe reaction to drugs should be reported to a regulatory agency or to pharmaceutical companies (e.g., MedWatch, http://www.fda.gov/Safety/MedWatch/default.htm). Because severe reactions are too rare to be detected in premarketing clinical trials, spontaneous reports are of critical importance for early detection of unexpected life-threatening events. To be useful, the report should contain enough details to permit ascertainment of severity and drug causality; this enables recognition of similar cases that may be reported from several different sources. We acknowledge the contribution of Dr. Jean-Claude Roujeau to this chapter in the 17th edition. Chapter 75 Photosensitivity and Other Reactions to Light Alexander G. Marneros, David R. Bickers Sunlight is the most visible and obvious source of comfort in the environment. The sun provides the beneficial effects of warmth and vitamin D synthesis. However, acute and chronic sun exposure also has pathologic consequences. Few effects of sun exposure beyond those affecting the skin have been identified, but cutaneous exposure to sunlight is the major cause of human skin cancer and can have immunosuppressive effects as well. The sun's energy reaching the earth's surface is limited to components of the ultraviolet (UV) spectrum, the visible spectrum, and portions of the infrared spectrum. The cutoff at the short end of the UV spectrum at ~290 nm is due primarily to stratospheric ozone (formed by highly energetic ionizing radiation), which prevents penetration to the earth's surface of the shorter, more energetic, potentially more harmful wavelengths of solar radiation. Indeed, concern about destruction of the ozone layer by chlorofluorocarbons released into the atmosphere has led to international agreements to reduce production of those chemicals.
Measurements of solar flux showed a twentyfold regional variation in the amount of energy at 300 nm that reaches the earth’s surface. This variability relates to seasonal effects, the path that sunlight traverses through ozone and air, the altitude (a 4% increase for each 300 m of elevation), the latitude (increasing intensity with decreasing latitude), and the amount of cloud cover, fog, and pollution. The major components of the photobiologic action spectrum that are capable of affecting human skin include the UV and visible wavelengths between 290 and 700 nm. In addition, the wavelengths beyond 700 nm in the infrared spectrum primarily emit heat and in certain circumstances may exacerbate the pathologic effects of energy in the UV and visible spectra. The UV spectrum reaching the earth represents <10% of total incident solar energy and is arbitrarily divided into two major segments, UV-B and UV-A, which constitute the wavelengths from 290 to 400 nm. UV-B consists of wavelengths between 290 and 320 nm. This portion of the photobiologic action spectrum is the most efficient in producing redness or erythema in human skin and thus is sometimes known as the “sunburn spectrum.” UV-A includes wavelengths between 320 and 400 nm and is ~1000-fold less efficient in producing skin redness than is UV-B. The wavelengths between 400 and 700 nm are visible to the human eye. The photon energy in the visible spectrum is not capable of damaging human skin in the absence of a photosensitizing chemical. Without the absorption of energy by a molecule, there can be no photosensitivity. Thus, the absorption spectrum of a molecule is defined as the range of wavelengths it absorbs, whereas the action spectrum for an effect of incident radiation is defined as the range of wavelengths that evoke the response. 
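The altitude effect stated above (a 4% increase in 300-nm flux for each 300 m of elevation) can be applied as simple arithmetic. The text does not say whether the increase compounds or adds linearly per step, so the compounded form in this sketch is an assumption, and the function is purely illustrative.

```python
def flux_vs_sea_level(elevation_m: float, pct_per_300m: float = 4.0) -> float:
    """Relative UV flux at ~300 nm versus sea level.

    Assumes (illustratively) that the stated 4% increase per 300 m of
    elevation compounds multiplicatively with each 300-m step.
    """
    steps = elevation_m / 300.0
    return (1 + pct_per_300m / 100.0) ** steps
```

Under this assumption, a site at 1500 m sees roughly 1.04 to the fifth power, or about a 22% increase, relative to sea level; season, latitude, ozone path, and cloud cover would modify this further.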
Photosensitivity occurs when a photon-absorbing chemical (chromophore) present in the skin absorbs incident energy, becomes excited, and transfers the absorbed energy to various structures or to molecular oxygen. Human skin consists of two major compartments: the outer epidermis, which is a stratified squamous epithelium, and the underlying dermis, which is rich in matrix proteins such as collagens and elastin. Both compartments are susceptible to damage from sun exposure. The epidermis and the dermis contain several chromophores capable of absorbing incident solar energy, including nucleic acids, proteins, and lipids. The outermost epidermal layer, the stratum corneum, is a major absorber of UV-B, and <10% of incident UV-B wavelengths penetrate through the epidermis to the dermis. Approximately 3% of radiation below 300 nm, 20% of radiation below 360 nm, and 33% of short visible radiation reach the basal cell layer in untanned human skin. In contrast, UV-A readily penetrates to the dermis and is capable of altering structural and matrix proteins that contribute to photoaging of chronically sun-exposed skin, particularly in individuals of light complexion. Thus, longer wavelengths can penetrate more deeply into the skin. Molecular Targets for UVR-Induced Skin Effects Epidermal DNA, predominantly in keratinocytes and in Langerhans cells (which are dendritic antigen-presenting cells), absorbs UV-B and undergoes structural changes between adjacent pyrimidine bases (thymine or cytosine), including the formation of cyclobutane dimers and 6,4-photoproducts. These structural changes are potentially mutagenic and are found in most basal cell and squamous cell carcinomas (BCCs and SCCs, respectively). They can be repaired by cellular mechanisms that result in their recognition and excision and the restoration of normal base sequences.
The efficient repair of these structural aberrations is crucial, since individuals with defective DNA repair are at high risk for the development of cutaneous cancer. For example, patients with xeroderma pigmentosum, an autosomal recessive disorder, have a variably deficient repair of UV-induced photoproducts. The skin of these patients often shows the dry, leathery appearance of prematurely photoaged skin, and these patients have an increased frequency of skin cancer already in the first two decades of life. Studies in transgenic mice have verified the importance of functional genes that regulate these repair pathways in preventing the development of UV-induced skin cancer. DNA damage in Langerhans cells may also contribute to the known immunosuppressive effects of UV-B (see “Photoimmunology,” below). In addition to DNA, molecular oxygen is a target for incident solar UVR, leading to the generation of reactive oxygen species (ROS). These ROS can damage skin components, such as epidermal lipids— either free lipids in the stratum corneum or cell membrane lipids. UVR also can target proteins, leading to increased cross-linking and degradation of matrix proteins in the dermis and accumulation of abnormal dermal elastin leading to photoaging changes known as solar elastosis. Cutaneous Optics and Chromophores Chromophores are endogenous or exogenous chemical components that can absorb physical energy. Endogenous chromophores are of two types: (1) normal components of skin, including nucleic acids, proteins, lipids, and 7-dehydrocholesterol (the precursor of vitamin D); and (2) components that are synthesized elsewhere in the body and that circulate in the bloodstream and diffuse into the skin, such as porphyrins. Normally, only trace amounts of porphyrins are present in the skin, but, in selected diseases known as the porphyrias (Chap. 
430), porphyrins are released into the circulation in increased amounts from the bone marrow and the liver and are transported to the skin, where they absorb incident energy both in the Soret band (around 400 nm; short visible) and, to a lesser extent, in the red portion of the visible spectrum (580–660 nm). This energy absorption results in the generation of ROS that can mediate structural damage to the skin, manifested as erythema, edema, urticaria, or blister formation. It is of interest that photoexcited porphyrins are currently used in the treatment of nonmelanoma skin cancers and their precursor lesions, actinic keratoses. Known as photodynamic therapy, this modality generates ROS in the skin, leading to cell death. Topical photosensitizers used in photodynamic therapy are the porphyrin precursors 5-aminolevulinic acid and methyl aminolevulinate, which are converted to porphyrins in the skin. It is believed that photodynamic therapy targets tumor cells for destruction more selectively than it targets adjacent nonneoplastic cells. The efficacy of such therapy requires appropriate timing of the application of methyl aminolevulinate or 5-aminolevulinic acid to the affected skin followed by exposure to artificial sources of visible light. High-intensity blue light has been used successfully for the treatment of thin actinic keratoses. Red light has a longer wavelength, penetrates more deeply into the skin, and is more beneficial in the treatment of superficial BCCs. Acute Effects of Sun Exposure The acute effects of skin exposure to sunlight include sunburn and vitamin D synthesis. SUNBURN This painful skin condition is an acute inflammatory response of the skin, predominantly to UV-B. Generally, an individual's ability to tolerate sunlight is inversely proportional to that individual's degree of melanin pigmentation.
Melanin, a complex polymer of tyrosine derivatives, is synthesized in specialized epidermal dendritic cells known as melanocytes and is packaged into melanosomes that are transferred via dendritic processes into keratinocytes, thereby providing photoprotection and simultaneously darkening the skin. Sun-induced melanogenesis is a consequence of increased tyrosinase activity in melanocytes. Central to the suntan response is the melanocortin-1 receptor (MC1R), and mutations in this gene contribute to the wide variation in human skin and hair color; individuals with red hair and fair skin typically have low MC1R activity. Genetic studies have revealed additional genes that influence skin color variation in humans, such as the gene for tyrosinase (TYR) and the genes APBA2 [OCA2], SLC45A2, and SLC24A5. The human MC1R gene encodes a G protein–coupled receptor that binds α-melanocyte-stimulating hormone, which is secreted in the skin mainly by keratinocytes in response to UVR. The UV-induced expression of this hormone is controlled by the tumor suppressor p53, and absence of functional p53 attenuates the tanning response. Activation of the melanocortin receptor leads to increased intracellular cyclic adenosine 5′-monophosphate (cAMP) and protein kinase A activation, resulting in increased transcription of the microphthalmia-associated transcription factor (MITF), which stimulates melanogenesis. Since the precursor of α-melanocyte-stimulating hormone, proopiomelanocortin, is also the precursor of β-endorphins, UVR may result not only in increased pigmentation but also in increased β-endorphin production, an effect that has been hypothesized to promote sun-seeking behaviors. The Fitzpatrick classification of human skin phototypes is based on the efficiency of the epidermal-melanin unit, which usually can be ascertained by asking an individual two questions: (1) Do you burn after sun exposure? (2) Do you tan after sun exposure?
The answers to these questions permit division of the population into six skin types, varying from type I (always burn, never tan) to type VI (never burn, always tan) (Table 75-1). Sunburn erythema is due to vasodilation of dermal blood vessels. There is a lag time (usually 4–12 h) between skin exposure to sunlight and the development of visible redness. The action spectrum for sunburn erythema includes UV-B and UV-A, although UV-B is much more efficient than UV-A in evoking the response. However, UV-A may contribute to sunburn erythema at midday, when much more UV-A than UV-B is present in the solar spectrum. The erythema that accompanies the inflammatory response induced by UVR results from the orchestrated release of cytokines along with growth factors and the generation of ROS. Furthermore, UV-induced activation of nuclear factor κB–dependent gene transcription can augment release of several pro-inflammatory cytokines and vasoactive mediators. These cytokines and mediators accumulate locally in sunburned skin, providing chemotactic factors that attract neutrophils, macrophages, and T lymphocytes, which promote the inflammatory response. UVR also stimulates infiltration of inflammatory cells through induced expression of adhesion molecules such as E-selectin and intercellular adhesion molecule 1 on endothelial cells and keratinocytes. UVR also has been shown to activate phospholipase A2, resulting in increases in eicosanoids such as prostaglandin E2, which is known to be a potent inducer of sunburn erythema. The role of eicosanoids in this reaction has been verified by studies showing that nonsteroidal anti-inflammatory drugs (NSAIDs) can reduce it. Epidermal changes in sunburn include the induction of “sunburn cells,” which are keratinocytes undergoing p53-dependent apoptosis as a defense, with elimination of cells that harbor UV-B-induced structural DNA damage. 
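The two-question Fitzpatrick scheme above maps burn and tan tendencies onto the six phototypes. Only the endpoints (type I: always burn, never tan; type VI: never burn, always tan) are stated explicitly in the text (the full scheme is in Table 75-1), so the intermediate rows in this toy encoding follow common convention and should be treated as an assumption; the dictionary and function names are mine.

```python
# Fitzpatrick phototypes I-VI keyed by (burn tendency, tan tendency).
# Endpoints follow the text; intermediate rows are a common convention
# included here only for illustration.
FITZPATRICK_PHOTOTYPES = {
    ("always", "never"): "I",
    ("usually", "minimally"): "II",
    ("sometimes", "gradually"): "III",
    ("rarely", "easily"): "IV",
    ("very rarely", "darkly"): "V",
    ("never", "always"): "VI",
}

def phototype(burns: str, tans: str) -> str:
    """Look up phototype from answers to 'Do you burn?' and 'Do you tan?'."""
    return FITZPATRICK_PHOTOTYPES[(burns, tans)]
```

So a person who always burns and never tans is type I, and one who never burns and always tans is type VI, matching the endpoints given in the text.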
VITAMIN D SYNTHESIS AND PHOTOCHEMISTRY Cutaneous exposure to UV-B causes photolysis of epidermal 7-dehydrocholesterol, converting it to pre–vitamin D3, which then undergoes temperature-dependent isomerization to form the stable hormone vitamin D3. This compound diffuses to the dermal vasculature and circulates to the liver and kidney, where it is converted to the dihydroxylated functional hormone 1,25-dihydroxyvitamin D3. Vitamin D metabolites from the circulation and those produced in the skin itself can augment epidermal differentiation signaling and inhibit keratinocyte proliferation. These effects are exploited therapeutically in psoriasis with the topical application of synthetic vitamin D analogues. In addition, vitamin D is increasingly thought to have beneficial effects in several other inflammatory conditions, and some evidence suggests that—besides its classic physiologic effects on calcium metabolism and bone homeostasis—it is associated with a reduced risk of various internal malignancies. There is controversy regarding the risk-to-benefit ratio of sun exposure in vitamin D homeostasis. At present, it is important to emphasize that no clear-cut evidence suggests that the use of sunscreens substantially diminishes vitamin D levels. Since aging also substantially decreases the ability of human skin to photocatalytically produce vitamin D3, the widespread use of sunscreens that filter out UV-B has led to concerns that the elderly might be unduly susceptible to vitamin D deficiency. However, the amount of sunlight needed to produce sufficient vitamin D is small and does not justify the risks of skin cancer and other types of photodamage linked to increased sun exposure or tanning behavior. Nutritional supplementation of vitamin D is a preferable strategy for patients with vitamin D deficiency.
Chronic Effects of Sun Exposure: Nonmalignant The clinical features of photoaging (dermatoheliosis) consist of wrinkling, blotchiness, and telangiectasia as well as a roughened, irregular, "weather-beaten" leathery appearance. UVR is important in the pathogenesis of photoaging in human skin, and ROS are likely involved. The dermis and its connective tissue matrix are major targets for sun-associated chronic damage that manifests as solar elastosis, a massive increase in thickened irregular masses of abnormal-appearing elastic fibers. Collagen fibers are also abnormally clumped in the deeper dermis of sun-damaged skin. The chromophore(s), the action spectra, and the specific biochemical events orchestrating these changes are only partially understood, although more deeply penetrating UV-A seems to be primarily involved. Chronologically aged sun-protected skin and photoaged skin share important molecular features, including connective tissue damage and elevated levels of matrix metalloproteinases (MMPs). MMPs are enzymes involved in the degradation of the extracellular matrix. UV-A induces expression of some MMPs, including MMP-1 and MMP-3, leading to increased collagen breakdown. In addition, UV-A reduces type I procollagen mRNA expression. Thus, chronic UVR alters the structure and function of dermal collagen. On the basis of these observations, it is not surprising that high-dose UV-A phototherapy may have beneficial effects in some patients with localized fibrotic diseases of the skin, such as localized scleroderma. Chronic Effects of Sun Exposure: Malignant One of the major known consequences of chronic excessive skin exposure to sunlight is nonmelanoma skin cancer. The two most common types of nonmelanoma skin cancer are BCC and SCC (Chap. 105). A model for skin cancer induction involves three major steps: initiation, promotion, and progression. 
Exposure of human skin to sunlight results in initiation, a step by which structural (mutagenic) changes in DNA evoke an irreversible alteration in the target cell (keratinocyte) that begins the tumorigenic process. Exposure to a tumor initiator such as UV-B is believed to be a necessary but not a sufficient step in the malignant process, since initiated skin cells not exposed to tumor promoters generally do not develop tumors. The second stage in tumor development is promotion, a multistep process by which chronic exposure to sunlight evokes further changes that culminate in the clonal expansion of initiated cells and cause the development, over many years, of premalignant growths known as actinic keratoses, a minority of which may progress to form SCCs. As a result of extensive studies, it seems clear that UV-B is a complete carcinogen, meaning that it can act as both a tumor initiator and a tumor promoter. The third and final step in the malignant process is malignant conversion of benign precursors into malignant lesions, a process thought to require additional genetic alterations. On a molecular level, skin carcinogenesis results from the accumulation of gene mutations that cause inactivation of tumor suppressors, activation of oncogenes, or reactivation of cellular signaling pathways that normally are expressed only during epidermal embryologic development. Accumulation of mutations in the tumor-suppressor gene p53 secondary to UV-induced DNA damage occurs in both SCCs and BCCs and is important in promoting skin carcinogenesis. Indeed, the majority of both human and murine UV-induced skin cancers have characteristic p53 mutations (C → T and CC → TT transitions). Studies in mice have shown that sunscreens can substantially reduce the frequency of these signature mutations in p53 and inhibit the induction of tumors. 
BCCs also harbor inactivating mutations in the tumor-suppressor gene patched, which result in activation of the sonic hedgehog signaling pathway and increased cell proliferation. Thus, these tumors can manifest mutations in tumor suppressors (p53 and patched) or oncogenes (smoothened). New evidence links alterations in the Wnt/β-catenin signaling pathway, which is known to be critical for hair follicle development, to skin cancer as well. Thus, interactions between this pathway and the hedgehog signaling pathway appear to be involved in both skin carcinogenesis and embryologic development of the skin and hair follicles. Clonal analysis in mouse models of BCC revealed that tumor cells arise from long-term resident progenitor cells of the interfollicular epidermis and the upper infundibulum of the hair follicle. These BCC-initiating cells are reprogrammed to resemble embryonic hair follicle progenitors, whose tumor-initiating ability depends on activation of the Wnt/β-catenin signaling pathway. SCC initiation occurs both in the interfollicular epidermis and in the hair follicle bulge stem cell populations. In mouse models, the combination of mutant K-Ras and p53 is sufficient to induce invasive SCCs from these cell populations. The transcription factor Myc is important for stem cell maintenance in the skin, and oncogenic activation of Myc has been implicated in the development of BCCs and SCCs. Thus, nonmelanoma skin cancer involves mutations and alterations in multiple genes and pathways that accumulate chronically, driven by exposure to environmental factors such as UVR. Epidemiologic studies have linked excessive sun exposure to an increased risk of nonmelanoma cancers and melanoma of the skin; the evidence is far more direct for nonmelanoma skin cancers (BCCs and SCCs) than for melanoma. Approximately 80% of nonmelanoma skin cancers develop on sun-exposed body areas, including the face, neck, and hands. 
Major risk factors include male sex, childhood sun exposure, older age, fair skin, and residence at latitudes relatively close to the equator. Individuals with darker-pigmented skin have a lower risk of skin cancer than do fair-skinned individuals. More than 2 million individuals in the United States develop nonmelanoma skin cancer annually, and the lifetime risk that a fair-skinned individual will develop such a neoplasm is estimated at ~15%. The incidence of nonmelanoma skin cancer in the population is increasing at a rate of 2–3% per year. One potential explanation is the widespread use of indoor tanning. It is estimated that 30 million people tan indoors in the United States annually, including >2 million adolescents. The relationship of sun exposure to melanoma development is less direct, but strong evidence supports an association. Clear-cut risk factors include a positive family or personal history of melanoma and multiple dysplastic nevi. Melanomas can occur during adolescence; the implication is that the latent period for tumor growth is shorter than that for nonmelanoma skin cancer. For reasons that are only partially understood, melanomas are among the most rapidly increasing human malignancies (Chap. 105). Epidemiologic studies indicate that indoor tanning is a risk factor for melanoma and may contribute to its rising incidence. Furthermore, epidemiologic studies suggest that life in a sunny climate from birth or early childhood may increase the risk of melanoma development. In general, risk does not correlate with cumulative sun exposure but may be related to the duration and extent of exposure in childhood. 
However, in contrast to nonmelanoma skin cancers, melanoma frequently develops in sun-protected skin, and oncogenic mutations in melanoma may also not be UVR-signature mutations; these observations suggest that UVR-independent factors contribute to melanomagenesis. Low MC1R activity leads to production of the red/yellow pheomelanin pigment in individuals with red hair and fair skin, while high MC1R activity results in increased production of the black/brown eumelanin. Experiments in mice suggest that high pheomelanin content in skin (as in individuals with red hair and fair skin) leads to a UVR-independent increase in the risk of melanoma through a mechanism that involves oxidative damage. Thus, both UVR-dependent and UVR-independent factors are likely to contribute to melanoma formation. Photoimmunology Exposure to solar radiation causes both local immunosuppression (through inhibition of immune responses to antigens applied at the irradiated site) and systemic immunosuppression (through inhibition of immune responses to antigens applied at remote, unirradiated sites). For example, human skin exposure to modest doses of UV-B can deplete the epidermal antigen-presenting cells known as Langerhans cells, thereby reducing the degree of allergic sensitization to application of the potent contact allergen dinitrochlorobenzene at the irradiated skin site. An example of the systemic immunosuppressive effects of higher doses of UVR is the diminished immunologic response to antigens introduced either epicutaneously or intracutaneously at sites distant from the irradiated site. Various immunomodulatory factors and immune cells have been implicated in UVR-induced systemic immunosuppression, including tumor necrosis factor α, interleukin 4, interleukin 10, cis-urocanic acid, and eicosanoids. 
Experimental evidence suggests that prostaglandin E2 signaling through prostaglandin E receptor subtype 4 mediates UVR-induced systemic immunosuppression by elevating the number of regulatory T cells, and this effect can be inhibited with NSAIDs. The major chromophores in the upper epidermis that are known to initiate UV-mediated immunosuppression include DNA, trans-urocanic acid, and membrane components. The action spectrum for UV-induced immunosuppression closely mimics the absorption spectrum of DNA. Pyrimidine dimers in Langerhans cells may inhibit antigen presentation. The absorption spectrum of epidermal urocanic acid closely mimics the action spectrum for UV-B-induced immunosuppression. Urocanic acid is a metabolic product of the essential amino acid histidine and accumulates in the upper epidermis through breakdown of the histidine-rich protein filaggrin due to the absence of its catabolizing enzyme in keratinocytes. Urocanic acid is synthesized as a trans-isomer, and UV-induced trans-cis isomerization of urocanic acid in the stratum corneum drives immunosuppression. Cis-urocanic acid may exert its immunosuppressive effects through a variety of mechanisms, including inhibition of antigen presentation by Langerhans cells. One important consequence of chronic sun exposure and associated immunosuppression is an enhanced risk of skin cancer. In part, UV-B activates regulatory T cells that suppress antitumor immune responses via interleukin 10 expression, whereas, in the absence of high UV-B exposure, epidermal Langerhans cells present tumor-associated antigens and induce protective immunity, thereby inhibiting skin tumorigenesis. UV-induced DNA damage is a major molecular trigger of this immunosuppressive effect. 
Perhaps the most graphic demonstration of the role of immunosuppression in enhancing the risk of nonmelanoma skin cancer comes from studies of organ transplant recipients who require lifelong immunosuppressive/antirejection drug regimens. More than 50% of organ transplant recipients develop BCCs and SCCs, and these cancers are the most common types of malignancies arising in these patients. Rates of BCC and SCC increase with the duration and degree of immunosuppression. These patients ideally should be screened prior to organ transplantation, be monitored closely thereafter, and adhere to rigorous photoprotection measures, including the use of sunscreens and protective clothing as well as sun avoidance. Notably, immunosuppressive drugs that target the mTOR pathway, such as sirolimus and everolimus, may reduce the risk of nonmelanoma skin cancer in organ transplant recipients relative to the calcineurin inhibitors (cyclosporine and tacrolimus), which may contribute to nonmelanoma skin cancer formation not only through their immunosuppressive effects but also through suppression of p53-dependent cancer cell senescence pathways independent of host immunity. The diagnosis of photosensitivity requires elicitation of a careful history in order to define the duration of signs and symptoms, the length of time between exposure to sunlight and the development of subjective symptoms, and visible changes in the skin. The age of onset can also be a helpful diagnostic clue; for example, the acute photosensitivity of erythropoietic protoporphyria almost always begins in childhood, whereas the chronic photosensitivity of porphyria cutanea tarda (PCT) typically begins in the fourth and fifth decades of life. A patient's history of exposure to topical and systemic drugs and chemicals may provide important diagnostic clues. 
Many classes of drugs can cause photosensitivity on the basis of either phototoxicity or photoallergy. Fragrances such as musk ambrette that were previously present in numerous cosmetic products are also potent photosensitizers. Examination of the skin may offer important clues. Anatomic areas that are naturally protected from direct sunlight, such as the hairy scalp, the upper eyelids, the retroauricular areas, and the infranasal and submental regions, may be spared, whereas exposed areas show characteristic features of the pathologic process. These anatomic localization patterns are often helpful, but not infallible, in making the diagnosis. For example, airborne contact sensitizers that are blown onto the skin may produce dermatitis that can be difficult to distinguish from photosensitivity despite the fact that such material may trigger skin reactivity in areas shielded from direct sunlight. Many dermatologic conditions may be caused or aggravated by sunlight (Table 75-2). The role of light in evoking these responses may be dependent on genetic abnormalities ranging from well-described defects in DNA repair that occur in xeroderma pigmentosum to the inherited abnormalities in heme synthesis that characterize the porphyrias. The chromophore has been identified in certain photosensitivity diseases, but the energy-absorbing agent remains unknown in the majority. Polymorphous Light Eruption A common type of photosensitivity disease is polymorphous light eruption (PMLE). Many affected individuals never seek medical attention because the condition is often transient, becoming manifest in the spring with initial sun exposure but then subsiding spontaneously with continuing exposure, a phenomenon known as “hardening.” The major manifestations of PMLE include (often intensely) pruritic erythematous papules that may coalesce into plaques in a patchy distribution on exposed areas of the trunk and forearms. The face is usually less seriously involved. 
Whereas the morphologic skin findings remain similar for each patient with subsequent recurrences, significant interindividual variations in skin findings are characteristic (hence the term "polymorphous"). A skin biopsy and phototest procedures in which skin is exposed to multiple erythemal doses of UV-A and UV-B may aid in the diagnosis. The action spectrum for PMLE is usually within these portions of the solar spectrum. Whereas the treatment of an acute flare of PMLE may require topical or systemic glucocorticoids, approaches to preventing PMLE are important and include the use of high-SPF, broad-spectrum sunscreens with high UV-A protection as well as the induction of "hardening" by the cautious administration of artificial UV-B (broad-band or narrow-band) and/or UV-A radiation or the use of psoralen plus UV-A (PUVA) photochemotherapy for 2–4 weeks before initial sun exposure. Such prophylactic phototherapy or photochemotherapy at the beginning of spring may prevent the occurrence of PMLE throughout the summer. Phototoxicity and Photoallergy These photosensitivity disorders are related to the topical or systemic administration of drugs and other chemicals. Both reactions require the absorption of energy by a drug or chemical with consequent production of an excited-state photosensitizer that can transfer its absorbed energy to a bystander molecule or to molecular oxygen, thereby generating tissue-destructive chemical species, including ROS. Phototoxicity is a nonimmunologic reaction that can be caused by drugs and chemicals, a few of which are listed in Table 75-3. The usual clinical manifestations include erythema resembling a sunburn reaction that quickly desquamates, or "peels," within several days. In addition, edema, vesicles, and bullae may occur. Photoallergy is much less common and is distinct in that it is an immunopathologic process. 
The excited-state photosensitizer may create highly unstable haptenic free radicals that bind covalently to macromolecules to form a functional antigen capable of evoking a delayed-type hypersensitivity response. Some drugs and chemicals that can produce photoallergy, among them halogenated salicylanilides, hypericin (St. John's wort), musk ambrette, and piroxicam, are listed in Table 75-4. The clinical manifestations typically differ from those of phototoxicity in that an intensely pruritic eczematous dermatitis tends to predominate and evolves into lichenified, thickened, "leathery" changes in sun-exposed areas. A small subset (perhaps 5–10%) of patients with photoallergy may develop a persistent exquisite hypersensitivity to light even when the offending drug or chemical is identified and eliminated, a condition known as persistent light reaction. A very uncommon type of persistent photosensitivity is known as chronic actinic dermatitis. The affected patients are typically elderly men with a long history of preexisting allergic contact dermatitis or photosensitivity. These individuals are usually exquisitely sensitive to UV-B, UV-A, and visible wavelengths. Phototoxicity and photoallergy often can be diagnostically confirmed by phototest procedures. In patients with suspected phototoxicity, determining the minimal erythemal dose (MED) while the patient is exposed to a suspected agent and then repeating the MED after discontinuation of the agent may provide a clue to the causative drug or chemical. Photopatch testing can be performed to confirm the diagnosis of photoallergy. In this simple variant of ordinary patch testing, a series of known photoallergens is applied to the skin in duplicate, and one set is irradiated with a suberythemal dose of UV-A. The development of eczematous changes at sites exposed to sensitizer and light is a positive result. The characteristic abnormality in patients with persistent light reaction is a diminished threshold to erythema evoked by UV-B. Patients with chronic actinic dermatitis usually manifest a broad spectrum of UV hyperresponsiveness and require meticulous photoprotection, including avoidance of sun exposure, use of high-SPF (>30) sunscreens, and, in severe cases, systemic immunosuppression, such as with azathioprine. The management of drug photosensitivity involves first and foremost the elimination of exposure to the chemical agents responsible for the reaction and the minimization of sun exposure. The acute symptoms of phototoxicity may be ameliorated by cool moist compresses, topical glucocorticoids, and systemically administered NSAIDs. In severely affected individuals, a rapidly tapered course of systemic glucocorticoids may be useful. Judicious use of analgesics may be necessary. Photoallergic reactions require a similar management approach. Furthermore, patients with persistent light reaction and chronic actinic dermatitis must be meticulously protected against light exposure. In selected patients to whom chronic systemic high-dose glucocorticoids pose unacceptable risks, it may be necessary to employ an immunosuppressive drug such as azathioprine, cyclophosphamide, cyclosporine, or mycophenolate mofetil. Porphyria The porphyrias (Chap. 430) are a group of diseases that have in common inherited or acquired derangements in the synthesis of heme. Heme is an iron-chelated tetrapyrrole or porphyrin, and the non-metal-chelated porphyrins are potent photosensitizers that absorb light intensely in both the short (400–410 nm) and the long (580–650 nm) portions of the visible spectrum. Heme cannot be reutilized and must be synthesized continuously. The two body compartments with the largest capacity for its production are the bone marrow and the liver. 
Accordingly, the porphyrias originate in one or the other of these organs, with an end result of excessive endogenous production of potent photosensitizing porphyrins. The porphyrins circulate in the bloodstream and diffuse into the skin, where they absorb solar energy, become photoexcited, generate ROS, and evoke cutaneous photosensitivity. The mechanism of porphyrin photosensitization is known to be photodynamic, or oxygen-dependent, and is mediated by ROS such as singlet oxygen and superoxide anions. Porphyria cutanea tarda is the most common type of porphyria and is associated with decreased activity of the enzyme uroporphyrinogen decarboxylase. There are two basic types of PCT: (1) the sporadic or acquired type, generally seen in individuals ingesting ethanol or receiving estrogens; and (2) the inherited type, in which there is autosomal dominant transmission of deficient enzyme activity. Both forms are associated with increased hepatic iron stores. In both types of PCT, the predominant feature is chronic photosensitivity characterized by increased fragility of sun-exposed skin, particularly areas subject to repeated trauma such as the dorsa of the hands, the forearms, the face, and the ears. The predominant skin lesions are vesicles and bullae that rupture, producing moist erosions (often with a hemorrhagic base) that heal slowly, with crusting and purplish discoloration of the affected skin. Hypertrichosis, mottled pigmentary change, and scleroderma-like induration are associated features. The diagnosis can be confirmed biochemically by measurement of urinary porphyrin excretion, plasma porphyrin assay, and assay of erythrocyte and/or hepatic uroporphyrinogen decarboxylase. Multiple mutations of the uroporphyrinogen decarboxylase gene have been identified in human populations. 
Some patients with PCT have associated mutations in the HFE gene, which is linked to hemochromatosis; these mutations could contribute to the iron overload seen in PCT, although iron status as measured by serum ferritin, iron levels, and transferrin saturation is no different from that in PCT patients without HFE mutations. Prior hepatitis C virus infection appears to be an independent risk factor for PCT. Treatment of PCT consists of repeated phlebotomies to diminish the excessive hepatic iron stores and/or intermittent low doses of chloroquine or hydroxychloroquine. Long-term remission of the disease can be achieved if the patient eliminates exposure to porphyrinogenic agents and prolonged exposure to sunlight. Erythropoietic protoporphyria originates in the bone marrow and is due to a decrease in the mitochondrial enzyme ferrochelatase secondary to numerous gene mutations. The major clinical features include acute photosensitivity characterized by subjective burning and stinging of exposed skin that often develops during or just after sun exposure. There may be associated skin swelling and, after repeated episodes, a waxlike scarring. The diagnosis is confirmed by demonstration of elevated levels of free erythrocyte protoporphyrin. Detection of increased plasma protoporphyrin helps distinguish erythropoietic protoporphyria from lead poisoning and iron-deficiency anemia, in both of which erythrocyte protoporphyrin levels are elevated in the absence of cutaneous photosensitivity and elevated plasma protoporphyrin levels. Treatment includes reduction of sun exposure and oral administration of the carotenoid β-carotene, which is an effective scavenger of free radicals. This drug increases tolerance to sun exposure in some affected individuals, although it has no effect on deficient ferrochelatase. An algorithm for managing patients with photosensitivity is presented in Fig. 75-1. 
FIGURE 75-1 Algorithm for the diagnosis of a patient with photosensitivity. On the basis of history (exposure to a photosensitizing drug, association of the rash with sun exposure), laboratory screening (plasma porphyrin, ANA, Ro/La), phototesting with UV-B, UV-A, and visible light (MED read at 30 min for immediate and 24 h for delayed reactions), and photopatch testing, the algorithm distinguishes drug photosensitivity, photoallergic contact dermatitis, solar urticaria, porphyria, lupus erythematosus and dermatomyositis, polymorphous light eruption, atopic dermatitis with photoaggravation, and chronic actinic dermatitis. ANA, antinuclear antibody; MED, minimal erythemal dose; UV-A and UV-B, ultraviolet spectrum segments including wavelengths of 320–400 nm and 290–320 nm, respectively.

Since photosensitivity of the skin results from exposure to sunlight, it follows that absolute avoidance of sunlight will eliminate these disorders. However, contemporary lifestyles make this approach impractical for most individuals. Thus better approaches to photoprotection have been sought. Natural photoprotection is provided by structural proteins in the epidermis, particularly keratins and melanin. The amount of melanin and its distribution in cells are genetically regulated, and individuals of darker complexion (skin types IV–VI) are at decreased risk for the development of acute sunburn and cutaneous malignancy. Other forms of photoprotection include clothing and sunscreens. Clothing constructed of tightly woven sun-protective fabrics, irrespective of color, affords substantial protection. Wide-brimmed hats, long sleeves, and trousers all reduce direct exposure. Sunscreens are now considered over-the-counter drugs, and a monograph from the U.S. Food and Drug Administration (FDA) has recognized category I ingredients as safe and effective. 
Those ingredients are listed in Table 75-5. Sunscreens are rated for their photoprotective effect by their sun protection factor (SPF). The SPF is simply a ratio of the time required to produce sunburn erythema with and without sunscreen application. The SPF of most sunscreens reflects protection from UV-B but not from UV-A. The FDA monograph stipulates that sunscreens must be rated on a scale ranging from minimal (SPF ≥2 and <12) to moderate (SPF ≥12 and <30) to high (SPF ≥30, labeled as 30+). Broad-spectrum sunscreens contain both UV-B-absorbing and UV-A-absorbing chemicals, the latter including avobenzone and ecamsule (terephthalylidene dicamphor sulfonic acid). These chemicals absorb UVR and transfer the absorbed energy to surrounding cells. In contrast, physical UV blockers (zinc oxide and titanium dioxide) scatter or reflect UVR. In addition to light absorption, a critical determinant of the sustained photoprotective effect of sunscreens is their water resistance. The FDA monograph has defined strict testing criteria for sunscreens that claim to possess a high degree of water resistance. Some degree of photoprotection can be achieved by limiting the time of sun exposure during the day. Since a large part of an individual's total lifetime sun exposure may occur by age 18, it is important to educate parents and young children about the hazards of sunlight. Simply eliminating exposure at midday will substantially reduce lifetime UVR exposure. UVR can be used therapeutically. The administration of UV-B alone or in combination with topically applied agents can induce remissions of many dermatologic diseases, including psoriasis and atopic dermatitis. In particular, narrow-band UV-B treatments (with fluorescent bulbs emitting radiation at ~311 nm) have enhanced efficacy over that obtained with broad-band UV-B in the treatment of psoriasis. 
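The SPF ratio and the FDA rating bands described earlier translate directly into arithmetic. The sketch below is illustrative (the function names are ours, not from any standard) and assumes the two burn times are measured under identical exposure conditions.

```python
def sun_protection_factor(time_to_erythema_with: float,
                          time_to_erythema_without: float) -> float:
    """SPF = time to sunburn erythema with sunscreen applied,
    divided by the time without sunscreen (same exposure conditions)."""
    return time_to_erythema_with / time_to_erythema_without

def fda_rating(spf: float) -> str:
    """Rating bands from the FDA monograph cited in the text:
    minimal (>=2 and <12), moderate (>=12 and <30), high (>=30, '30+')."""
    if spf >= 30:
        return "high (labeled 30+)"
    if spf >= 12:
        return "moderate"
    if spf >= 2:
        return "minimal"
    return "unrated (SPF < 2)"

# For example, if unprotected skin burns in 10 min and protected skin
# in 150 min, the SPF is 15.0, which the monograph rates as "moderate".
```

Note that this captures only the UV-B-dominated SPF rating; as the text emphasizes, SPF says little about UV-A protection or water resistance.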
Photochemotherapy in which topically applied or systemically administered psoralens are combined with UV-A (PUVA) is effective in treating psoriasis and the early stages of cutaneous T cell lymphoma and vitiligo. Psoralens are tricyclic furocoumarins that, when intercalated into DNA and exposed to UV-A, form adducts with pyrimidine bases and eventually form DNA cross-links. These structural changes are thought to decrease DNA synthesis and to be related to the amelioration of psoriasis. Why PUVA photochemotherapy is effective in cutaneous T cell lymphoma is only partially understood, but it has been shown to induce apoptosis of atypical T lymphocyte populations in the skin. Consequently, direct treatment of circulating atypical lymphocytes by extracorporeal photochemotherapy (photopheresis) has been used in Sézary syndrome as well as in other severe systemic diseases with circulating atypical lymphocytes, such as graft-versus-host disease. In addition to its effects on DNA, PUVA photochemotherapy stimulates epidermal thickening and melanin synthesis; the latter property, together with its anti-inflammatory effects, provides the rationale for use of PUVA in the depigmenting disease vitiligo. Oral 8-methoxypsoralen and UV-A appear to be most effective in this regard, but as many as 100 treatments extending over 12–18 months may be required for satisfactory repigmentation. Not surprisingly, the major side effects of long-term UV-B phototherapy and PUVA photochemotherapy mimic those seen in individuals with chronic sun exposure and include skin dryness, actinic keratoses, and an increased risk of skin cancer. Despite these risks, the therapeutic index of these modalities continues to be excellent. It is important to choose the most appropriate phototherapeutic approach for a specific dermatologic disease. 
For example, narrow-band UV-B has been reported in several studies to be as effective as PUVA photochemotherapy in the treatment of psoriasis but to pose a lower risk of skin cancer development than PUVA.

Chapter 76e Atlas of Skin Manifestations of Internal Disease
Thomas J. Lawley, Calvin McCall, Robert A. Swerlick

In the practice of medicine, virtually every clinician encounters patients with skin disease. Physicians of all specialties face the daily task of determining the nature and clinical implication of dermatologic disease. In patients with skin disease, the physician must confront the question of whether the cutaneous process is confined to the skin, representing a purely dermatologic event, or whether it is a manifestation of internal disease related to the patient's overall medical condition. Evaluation and accurate diagnosis of skin lesions are particularly critical given the marked rise in both melanoma and nonmelanoma skin cancer. Dermatologic conditions can be classified and categorized in many ways. In this atlas, a selected group of inflammatory skin eruptions and neoplastic conditions is grouped in the following manner: (1) common skin diseases and lesions, (2) nonmelanoma skin cancer, (3) melanoma and benign pigmented lesions, (4) infectious disease and the skin, (5) immunologically mediated skin disease, and (6) skin manifestations of internal disease. (Figs. 76e-1 to 76e-19) While most of these common inflammatory skin diseases and benign neoplastic and reactive lesions usually present as a predominantly dermatologic process, underlying systemic associations may be found in some settings. Atopic dermatitis is often present in patients with an atopic diathesis, including asthma or sinusitis. Psoriasis ranges from limited patches on the elbows and knees to severe erythrodermic and pustular involvement and associated psoriatic arthritis. 
Some patients with alopecia areata may have an underlying thyroid abnormality requiring screening. Finally, even acne vulgaris, one of the most common inflammatory dermatoses, can be associated with a systemic process such as polycystic ovarian syndrome.

(Figs. 76e-20 to 76e-27) In fair-skinned ethnic populations, rates of nonmelanoma skin cancer are increasing at an alarming rate. Basal cell carcinoma is the most common cancer in humans and is strongly linked to ultraviolet radiation. Squamous cell carcinoma, including keratoacanthoma, is the second most common skin cancer in most ethnic groups and is also most commonly linked to ultraviolet radiation. Less common cutaneous malignancies include cutaneous T cell lymphoma (mycosis fungoides) and carcinoma and lymphoma metastatic to skin.

PART 2 Cardinal Manifestations and Presentation of Diseases

FIGURE 76e-1 Acne vulgaris, with inflammatory papules, pustules, and comedones. (Courtesy of Kalman Watsky, MD; with permission.)
FIGURE 76e-2 Acne rosacea, with prominent facial erythema, telangiectasias, scattered papules, and small pustules. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-3 Psoriasis. A. Typical psoriasis is characterized by small and large erythematous plaques with adherent silvery scale. B. Acute inflammatory variants of psoriasis may present with widespread superficial pustules.
FIGURE 76e-4 Atopic dermatitis, with hyperpigmentation, lichenification, and scaling in the antecubital fossae. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-5 Dyshidrotic eczema, characterized by deep-seated vesicles and scaling on palms and lateral fingers, is often associated with an atopic diathesis.
FIGURE 76e-6 Seborrheic dermatitis, with erythema and scale in the nasolabial fold. (Courtesy of Robert A. Swerlick, MD; with permission.)
FIGURE 76e-7 Stasis dermatitis, with erythematous, scaly, and oozing patches over the lower leg. Several stasis ulcers are also seen in this patient.
FIGURE 76e-8 Allergic contact dermatitis. A. Acute phase, with sharply demarcated, weeping, eczematous plaques in a perioral distribution. B. Allergic contact reaction to nickel, chronic phase, with an erythematous, lichenified, weeping plaque on skin chronically exposed to a metal snap. (B: Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-9 Lichen planus, with multiple flat-topped, violaceous papules and plaques. Nail dystrophy, as seen in this patient's thumbnail, may also be a feature. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-10 Seborrheic keratoses are "stuck on," waxy, verrucous papules and plaques with a variety of colors ranging from light tan to black.
FIGURE 76e-11 Vitiligo in a typical acral distribution, with striking cutaneous depigmentation as a result of melanocyte loss.
FIGURE 76e-12 Alopecia areata, characterized by a sharply demarcated circular patch of scalp completely devoid of hairs. Preservation of follicular orifices is indicative of nonscarring alopecia. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-13 Pityriasis rosea. Multiple round or oval erythematous patches with fine central scale are distributed along the skin tension lines on the trunk.
FIGURE 76e-14 A. Urticaria, with characteristic discrete and confluent, edematous, erythematous papules and plaques. B. Dermatographism. Erythema and whealing developed after firm stroking of the skin. (B: Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-15 Epidermoid cysts. Several inflamed and noninflamed firm cystic nodules are seen in this patient. Often a patulous follicular punctum is observed on the overlying epidermal surface.
FIGURE 76e-16 Keloids resulting from ear piercing, with firm exophytic flesh-colored to erythematous nodules of scar tissue.
FIGURE 76e-17 Cherry hemangiomas—multiple erythematous to dark-purple papules, usually located on the trunk—are very common and arise in middle-aged to older adults.
FIGURE 76e-18 Frostbite of the hand, with vesiculation surrounded by edema and erythema. (Courtesy of Daniel F. Danzl, MD; with permission.)
FIGURE 76e-19 Frostbite of the foot, with vesiculation surrounded by edema and erythema. (Courtesy of Daniel F. Danzl, MD; with permission.)
FIGURE 76e-20 Kaposi's sarcoma in a patient with AIDS. Patch, plaque, and tumor stages are shown.
FIGURE 76e-21 Non-Hodgkin's lymphoma involving the skin, with typical violaceous, "plum-colored" nodules. (Courtesy of Jean Bolognia, MD; with permission.)
FIGURE 76e-22 Basal cell carcinoma, with central ulceration and a pearly, rolled, telangiectatic tumor border.
FIGURE 76e-23 Mycosis fungoides is a cutaneous T cell lymphoma. Plaque-stage lesions are seen in this patient.

(Figs. 76e-28 to 76e-33) As the prognosis of melanoma is related primarily to the microscopic depth of invasion, and as early detection with surgical treatment can be curative in a high percentage of patients, it is essential that all clinicians acquire some facility in evaluating pigmented lesions. Three clinicopathologic subtypes of melanoma—superficial spreading, lentigo maligna, and acral lentiginous melanoma—typically display features noted in the "ABCD rule": asymmetry (one half of the lesion varies from the other half); border irregularity (the circumferential border exhibits an irregular, sometimes jagged appearance); color (there is uneven coloration and tone to the pigmented lesion, with various shades of brown, black, red, and white in different areas); and diameter (the diameter is typically >6 mm).
The more uncommon subtype, nodular melanoma, may not manifest all these features but rather may present as a more symmetric, evenly pigmented, or amelanotic lesion. Dysplastic (atypical) melanocytic nevi may occur as solitary or multiple lesions as well as in the setting of familial melanoma. These nevi display some degree of asymmetry, border irregularity, and color variation. Ordinary nevi may be acquired or congenital and are quite common.

FIGURE 76e-24 Metastatic carcinoma to the skin is characterized by inflammatory, often ulcerated dermal nodules.
FIGURE 76e-25 Keratoacanthoma is a low-grade squamous cell carcinoma that presents as an exophytic nodule with central keratinous debris.
FIGURE 76e-26 Squamous cell carcinoma is seen here as a hyperkeratotic, crusted, and somewhat eroded plaque on the lower lip. Sun-exposed skin of the head, neck, hands, and arms are other typical sites of involvement.
FIGURE 76e-27 Actinic keratoses consist of hyperkeratotic erythematous papules and patches on sun-exposed skin. They arise in middle-aged to older adults and have some potential for malignant transformation. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-28 Nevi are benign proliferations of nevomelanocytes characterized by regularly shaped hyperpigmented macules or papules of a uniform color.
FIGURE 76e-29 Dysplastic nevi are irregularly pigmented and shaped nevomelanocytic lesions that may be associated with familial melanoma.
FIGURE 76e-30 Superficial spreading melanoma, the most common type of malignant melanoma, is characterized by color variegation (black, blue, brown, pink, and white) and irregular borders.
FIGURE 76e-31 Lentigo maligna melanoma occurs on sun-exposed skin as a large, hyperpigmented macule or plaque with irregular borders and variable pigmentation. (Courtesy of Alvin Solomon, MD; with permission.)
FIGURE 76e-32 Nodular melanoma most commonly manifests as a rapidly growing, often ulcerated or crusted black nodule. (Courtesy of S. Wright Caughman, MD; with permission.)
FIGURE 76e-33 Acral lentiginous melanoma is more common among blacks, Asians, and Hispanics and occurs as an enlarging hyperpigmented macule or plaque on the palms or soles. Lateral pigment diffusion is present.

(Figs. 76e-34 to 76e-58) One of the roles of the skin is to function as a barrier from the outside world. In this capacity, exposure to infectious agents occurs, and bacterial, viral, fungal, and parasitic infections may result. In addition, the skin may be secondarily involved and provides diagnostic clues to systemic infections such as meningococcemia, Rocky Mountain spotted fever, Lyme disease, and septic emboli. Most sexually transmitted bacterial and viral diseases exhibit cutaneous involvement; examples include primary and secondary syphilis, chancroid, genital herpes simplex, and condyloma acuminatum.

(Figs. 76e-59 to 76e-70) Immunologically mediated skin disease may be largely localized to skin and mucous membranes and manifest with blisters and erosions such as pemphigus, pemphigoid, and dermatitis herpetiformis. In diseases such as systemic lupus erythematosus, dermatomyositis, and vasculitis, skin manifestations are often only one element of a widespread process.

FIGURE 76e-34 Erysipelas is a streptococcal infection of the superficial dermis and consists of well-demarcated, erythematous, edematous, warm plaques.
FIGURE 76e-37 Impetigo contagiosa is a superficial streptococcal or Staphylococcus aureus infection consisting of honey-colored crusts and erythematous weeping erosions. Bullous lesions are occasionally seen.
FIGURE 76e-38 Tender vesicles and erosions in the mouth of a patient with hand-foot-and-mouth disease. (Courtesy of Stephen D. Gellis, MD; with permission.)
FIGURE 76e-35 Varicella, with numerous lesions in various stages of evolution: vesicles on an erythematous base, umbilicated vesicles, and crusts. (Courtesy of Robert Hartman, MD; with permission.)
FIGURE 76e-36 Herpes zoster is seen in this HIV-infected patient as hemorrhagic vesicles and pustules on an erythematous base in a dermatomal distribution. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-39 Lacy reticular rash of erythema infectiosum (fifth disease).
FIGURE 76e-40 Molluscum contagiosum is a cutaneous poxvirus infection characterized by multiple umbilicated flesh-colored or hypopigmented papules. (Courtesy of Yale Resident's Slide Collection; with permission.)
FIGURE 76e-41 Oral hairy leukoplakia often presents as white plaques on the lateral tongue and is associated with Epstein-Barr virus infection. (From K Wolff et al: Fitzpatrick's Color Atlas & Synopsis of Clinical Dermatology, 5th ed. New York, McGraw-Hill, 2005. www.accessmedicine.com.)
FIGURE 76e-42 Fulminant meningococcemia, with extensive angular purpuric patches. (Courtesy of Stephen D. Gellis, MD; with permission.)
FIGURE 76e-43 Rocky Mountain spotted fever, with pinpoint petechial lesions on the palm and volar aspect of the wrist. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-44 Erythema migrans, the early cutaneous manifestation of Lyme disease, is characterized by erythematous annular patches, often with a central erythematous papule at the tick-bite site. (Courtesy of Yale Resident's Slide Collection; with permission.)
FIGURE 76e-45 Primary syphilis, with a firm, nontender chancre. (Courtesy of Gregory Cox, MD; with permission.)
FIGURE 76e-46 Secondary syphilis commonly affects the palms and soles, with scaling, firm, red-brown papules. (Courtesy of Alvin Solomon, MD; with permission.)
FIGURE 76e-47 Condylomata lata are moist, somewhat verrucous intertriginous plaques seen in secondary syphilis. (Courtesy of Yale Resident's Slide Collection; with permission.)
FIGURE 76e-48 Secondary syphilis, with the characteristic papulosquamous truncal eruption.
FIGURE 76e-49 A. Tinea corporis is a superficial fungal infection, seen here as an erythematous annular scaly plaque with central clearing. B. A common presentation of chronic dermatophyte infection involves the feet (tinea pedis), hands (tinea manum), and nails (tinea unguium).
FIGURE 76e-50 Scabies, with typical scaling erythematous papules and few linear burrows.
FIGURE 76e-51 Skin lesions caused by Chironex fleckeri sting. (Courtesy of V. Pranava Murthy, MD; with permission.)
FIGURE 76e-52 Chancroid, with characteristic penile ulcers and associated left inguinal adenitis (bubo).
FIGURE 76e-53 Condylomata acuminata are lesions induced by human papillomavirus and in this patient are seen as multiple verrucous papules coalescing into plaques. (Courtesy of S. Wright Caughman, MD; with permission.)
FIGURE 76e-54 A patient with features of polar lepromatous leprosy: multiple nodular skin lesions, particularly of the forehead, and loss of eyebrows. (Courtesy of Robert Gelber, MD; with permission.)
FIGURE 76e-55 Skin lesions of neutropenic patients. A. Hemorrhagic papules on the foot of a patient undergoing treatment for multiple myeloma. Biopsy and culture demonstrated Aspergillus species. B. Eroded nodule on the hard palate of a patient undergoing chemotherapy. Biopsy and culture demonstrated Mucor species. C. Ecthyma gangrenosum in a neutropenic patient with Pseudomonas aeruginosa bacteremia.
FIGURE 76e-56 Septic emboli, with hemorrhage and infarction due to acute Staphylococcus aureus endocarditis. (Courtesy of L. Baden, MD; with permission.)
FIGURE 76e-57 Vegetations (arrows) due to viridans streptococcal endocarditis involving the mitral valve. (Courtesy of AW Karchmer, MD; with permission.)
FIGURE 76e-58 Disseminated gonococcemia in the skin is seen as hemorrhagic papules and pustules with purpuric centers in an acral distribution. (Courtesy of Daniel M. Musher, MD; with permission.)
FIGURE 76e-59 Lupus erythematosus. A. Systemic lupus erythematosus, with prominent, scaly malar erythema. Involvement of other sun-exposed sites is also common. B. Acute lupus erythematosus on the upper chest, with brightly erythematous and slightly edematous coalescence of papules and plaques. (B: Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-60 Discoid lupus erythematosus. Atrophic, depigmented plaques and patches surrounded by hyperpigmentation and erythema in association with scarring and alopecia are characteristic of this cutaneous form of lupus.
FIGURE 76e-61 Dermatomyositis. Periorbital violaceous erythema characterizes the classic heliotrope rash. (Courtesy of James Krell, MD; with permission.)
FIGURE 76e-62 Scleroderma characterized by typical expressionless, mask-like facies.
FIGURE 76e-63 Scleroderma, with acral sclerosis and focal digital ulcers.
FIGURE 76e-64 Dermatomyositis often involves the hands as erythematous flat-topped papules over the knuckles (Gottron's sign) and periungual telangiectasias.
FIGURE 76e-65 Erythema multiforme is characterized by multiple erythematous plaques with a target or iris morphology and usually represents a hypersensitivity reaction to drugs or infections (especially herpes simplex virus). (Courtesy of Yale Resident's Slide Collection; with permission.)
FIGURE 76e-66 Dermatitis herpetiformis, manifested by pruritic, grouped vesicles in a typical location. The vesicles are often excoriated and may also occur on the knees, buttocks, elbows, and posterior scalp.
FIGURE 76e-67 Pemphigus vulgaris. A. Eroded bullae on the back. B. The oral mucosa is almost invariably involved, sometimes with erosions on the gingiva, buccal mucosa, palate, posterior pharynx, or tongue. (B: Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-68 Erythema nodosum is a panniculitis characterized by tender deep-seated nodules and plaques, usually located on the lower extremities. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-69 Vasculitis. Palpable purpuric papules on the lower legs are seen in this patient with cutaneous small-vessel vasculitis. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-70 Bullous pemphigoid, with tense vesicles and bullae on an erythematous, urticarial base. (Courtesy of Yale Resident's Slide Collection; with permission.)

(Figs. 76e-71 to 76e-78) While many systemic diseases also have cutaneous manifestations, there are well-recognized dermatologic markers of internal disease, some of which are shown in this section. Many of these dermatologic markers may precede, accompany, or follow diagnosis of systemic disease. Acanthosis nigricans is a prototypical dermatologic process that often occurs in association with underlying systemic abnormalities, most commonly obesity and insulin resistance. It may also be associated with other endocrine disorders and several rare genetic syndromes. Malignant acanthosis nigricans may occur in association with several malignancies, especially adenocarcinoma of the gastrointestinal tract, lung, and breast. Other markers of internal disease in this section include pretibial myxedema, which is associated with thyroid disease, and Sweet syndrome, which may be associated with hematologic malignancies, solid tumors, infections, or inflammatory bowel disease.
The skin is also involved in many systemic inflammatory diseases such as sarcoidosis, rheumatoid arthritis, and lupus erythematosus.

FIGURE 76e-71 Acanthosis nigricans, with typical hyperpigmented plaques on a velvet-like, verrucous surface on the neck.
FIGURE 76e-72 Pretibial myxedema manifesting as waxy, infiltrated plaques in a patient with Graves' disease.
FIGURE 76e-73 Erythematous, indurated plaque of Sweet syndrome, with a pseudovesicular border. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-74 Bilateral rheumatoid nodules of the upper extremities. (Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-75 Neurofibromatosis, with numerous flesh-colored cutaneous neurofibromas.
FIGURE 76e-76 Coumarin necrosis. Shown is cutaneous and subcutaneous necrosis of a breast. Other fatty areas, such as buttocks and thighs, are also common sites of involvement. (Courtesy of Kim Yancey, MD; with permission.)
FIGURE 76e-77 Sarcoid. A. Infiltrated papules and plaques of variable color are seen in a typical paranasal and periorbital location. B. Infiltrated, hyperpigmented, and slightly erythematous coalescent papules and plaques on the upper arm. (B: Courtesy of Robert Swerlick, MD; with permission.)
FIGURE 76e-78 Pyoderma gangrenosum on the dorsal aspect of both hands. Multiple necrotic ulcers are surrounded by a violaceous and undermined border. (Courtesy of Robert Swerlick, MD; with permission.)

SECTION 10 HEMATOLOGIC ALTERATIONS

CHAPTER 77 Anemia and Polycythemia
John W. Adamson, Dan L. Longo

HEMATOPOIESIS AND THE PHYSIOLOGIC BASIS OF RED CELL PRODUCTION

Hematopoiesis is the process by which the formed elements of blood are produced. The process is regulated through a series of steps beginning with the hematopoietic stem cell. Stem cells are capable of producing red cells, all classes of granulocytes, monocytes, platelets, and the cells of the immune system. The precise molecular mechanism—either intrinsic to the stem cell itself or through the action of extrinsic factors—by which the stem cell becomes committed to a given lineage is not fully defined. However, experiments in mice suggest that erythroid cells come from a progenitor that does not develop in the absence of expression of the GATA-1 and FOG-1 (friend of GATA-1) transcription factors (Chap. 89e). Following lineage commitment, hematopoietic progenitor and precursor cells come increasingly under the regulatory influence of growth factors and hormones. For red cell production, erythropoietin (EPO) is the primary regulatory hormone. EPO is required for the maintenance of committed erythroid progenitor cells that, in the absence of the hormone, undergo programmed cell death (apoptosis). The regulated process of red cell production is erythropoiesis, and its key elements are illustrated in Fig. 77-1.

In the bone marrow, the first morphologically recognizable erythroid precursor is the pronormoblast. This cell can undergo four to five cell divisions, which result in the production of 16–32 mature red cells. With increased EPO production, or the administration of EPO as a drug, early progenitor cell numbers are amplified and, in turn, give rise to increased numbers of erythrocytes. The regulation of EPO production itself is linked to tissue oxygenation.

In mammals, O2 is transported to tissues bound to the hemoglobin contained within circulating red cells. The mature red cell is 8 μm in diameter, anucleate, discoid in shape, and extremely pliable in order to traverse the microcirculation successfully; its membrane integrity is maintained by the intracellular generation of ATP. Normal red cell production results in the daily replacement of 0.8–1% of all circulating red cells in the body, since the average red cell lives 100–120 days. The organ responsible for red cell production is called the erythron. The erythron is a dynamic organ made up of a rapidly proliferating pool of marrow erythroid precursor cells and a large mass of mature circulating red blood cells. The size of the red cell mass reflects the balance of red cell production and destruction. The physiologic basis of red cell production and destruction provides an understanding of the mechanisms that can lead to anemia.

The physiologic regulator of red cell production, the glycoprotein hormone EPO, is produced and released by peritubular capillary lining cells within the kidney. These cells are highly specialized epithelial-like cells. A small amount of EPO is produced by hepatocytes. The fundamental stimulus for EPO production is the availability of O2 for tissue metabolic needs. Key to EPO gene regulation is hypoxia-inducible factor (HIF)-1α. In the presence of O2, HIF-1α is hydroxylated at a key proline, allowing HIF-1α to be ubiquitinated and degraded via the proteasome pathway. If O2 becomes limiting, this critical hydroxylation step does not occur, allowing HIF-1α to partner with other proteins, translocate to the nucleus, and upregulate the expression of the EPO gene, among others. Impaired O2 delivery to the kidney can result from a decreased red cell mass (anemia), impaired O2 loading of the hemoglobin molecule or a high O2 affinity mutant hemoglobin (hypoxemia), or, rarely, impaired blood flow to the kidney (renal artery stenosis). EPO governs the day-to-day production of red cells, and ambient levels of the hormone can be measured in the plasma by sensitive immunoassays—the normal level being 10–25 U/L. When the hemoglobin concentration falls below 100–120 g/L (10–12 g/dL), plasma EPO levels increase in proportion to the severity of the anemia (Fig. 77-2). In circulation, EPO has a half-clearance time of 6–9 h. EPO acts by binding to specific receptors on the surface of marrow erythroid precursors, inducing them to proliferate and to mature. With EPO stimulation, red cell production can increase four- to fivefold within a 1- to 2-week period, but only in the presence of adequate nutrients, especially iron. The functional capacity of the erythron, therefore, requires normal renal production of EPO, a functioning erythroid marrow, and an adequate supply of substrates for hemoglobin synthesis. A defect in any of these key components can lead to anemia.

FIGURE 77-1 The physiologic regulation of red cell production by tissue oxygen tension. Hb, hemoglobin.
FIGURE 77-2 Erythropoietin (EPO) levels in response to anemia. When the hemoglobin level falls to 120 g/L (12 g/dL), plasma EPO levels increase logarithmically. In the presence of chronic kidney disease or chronic inflammation, EPO levels are typically lower than expected for the degree of anemia. As individuals age, the level of EPO needed to sustain normal hemoglobin levels appears to increase. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

Generally, anemia is recognized in the laboratory when a patient's hemoglobin level or hematocrit is reduced below an expected value (the normal range). The likelihood and severity of anemia are defined based on the deviation of the patient's hemoglobin/hematocrit from values expected for age- and sex-matched normal subjects. The hemoglobin concentration in adults has a Gaussian distribution. The mean hematocrit value for adult males is 47% (standard deviation, ±7%) and that for adult females is 42% (±5%). Any single hematocrit or hemoglobin value carries with it a likelihood of associated anemia. Thus, a hematocrit of <39% in an adult male or <35% in an adult female has only about a 25% chance of being normal.
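The Gaussian hematocrit figures above lend themselves to a quick worked example. The following sketch (the function name and interface are illustrative, not a clinical tool) converts a measured hematocrit into a standard score against the sex-matched mean quoted in the text (47 ± 7% for adult males, 42 ± 5% for adult females):

```python
def hematocrit_z(hct_pct: float, sex: str) -> float:
    """Standard score of a hematocrit (in %) against the sex-matched
    Gaussian parameters quoted in the text:
    adult males 47 +/- 7%, adult females 42 +/- 5%."""
    mean, sd = (47.0, 7.0) if sex == "male" else (42.0, 5.0)
    return (hct_pct - mean) / sd

# A hematocrit of 39% in an adult male sits (39 - 47) / 7, i.e. about
# 1.14 SD below the mean, consistent with the text's point that such
# a value is unlikely to be normal.
```

The 25% likelihood quoted in the text is the book's own estimate for the chance a given value is normal; the z-score here simply quantifies the deviation from the population mean.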
Hematocrit levels are less useful than hemoglobin levels in assessing anemia because they are calculated rather than measured directly. Suspected low hemoglobin or hematocrit values are more easily interpreted if previous values for the same patient are known for comparison. The World Health Organization (WHO) defines anemia as a hemoglobin level <130 g/L (13 g/dL) in men and <120 g/L (12 g/dL) in women. The critical elements of erythropoiesis—EPO production, iron availability, the proliferative capacity of the bone marrow, and effective maturation of red cell precursors—are used for the initial classification of anemia (see below).

CLINICAL PRESENTATION OF ANEMIA

Signs and Symptoms Anemia is most often recognized by abnormal screening laboratory tests. Patients less commonly present with advanced anemia and its attendant signs and symptoms. Acute anemia is due to blood loss or hemolysis. If blood loss is mild, enhanced O2 delivery is achieved through changes in the O2–hemoglobin dissociation curve mediated by a decreased pH or increased CO2 (Bohr effect). With acute blood loss, hypovolemia dominates the clinical picture, and the hematocrit and hemoglobin levels do not reflect the volume of blood lost. Signs of vascular instability appear with acute losses of 10–15% of the total blood volume. In such patients, the issue is not anemia but hypotension and decreased organ perfusion. When >30% of the blood volume is lost suddenly, patients are unable to compensate with the usual mechanisms of vascular contraction and changes in regional blood flow. The patient prefers to remain supine and will show postural hypotension and tachycardia. If the volume of blood lost is >40% (i.e., >2 L in the average-sized adult), signs of hypovolemic shock including confusion, dyspnea, diaphoresis, hypotension, and tachycardia appear (Chap. 129). Such patients have significant deficits in vital organ perfusion and require immediate volume replacement.
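The acute blood-loss thresholds in this passage (10–15% of total blood volume, >30%, >40%) amount to a small decision table. The sketch below maps those cutoffs to the clinical pictures described; the category labels are paraphrases of the text, and the function is illustrative, not a triage standard:

```python
def acute_blood_loss_picture(percent_volume_lost: float) -> str:
    """Map acute blood-loss (% of total blood volume) to the clinical
    picture described in the text. Thresholds come from the passage;
    the wording of the labels is a paraphrase."""
    if percent_volume_lost > 40:
        # >40% (>2 L in an average-sized adult): hypovolemic shock
        return "hypovolemic shock: confusion, dyspnea, diaphoresis, hypotension, tachycardia"
    if percent_volume_lost > 30:
        # >30%: usual compensatory mechanisms fail
        return "decompensated: postural hypotension and tachycardia"
    if percent_volume_lost >= 10:
        # 10-15%: first signs of vascular instability
        return "signs of vascular instability"
    return "usually compensated"
```

Note that, as the text stresses, hematocrit and hemoglobin do not reflect the volume lost acutely, so such staging rests on estimated volume loss and clinical signs, not on the CBC.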
With acute hemolysis, the signs and symptoms depend on the mechanism that leads to red cell destruction. Intravascular hemolysis with release of free hemoglobin may be associated with acute back pain, free hemoglobin in the plasma and urine, and renal failure. Symptoms associated with more chronic or progressive anemia depend on the age of the patient and the adequacy of blood supply to critical organs. Symptoms associated with moderate anemia include fatigue, loss of stamina, breathlessness, and tachycardia (particularly with physical exertion). However, because of the intrinsic compensatory mechanisms that govern the O2–hemoglobin dissociation curve, the gradual onset of anemia—particularly in young patients—may not be associated with signs or symptoms until the anemia is severe (hemoglobin <70–80 g/L [7–8 g/dL]). When anemia develops over a period of days or weeks, the total blood volume is normal to slightly increased, and changes in cardiac output and regional blood flow help compensate for the overall loss in O2-carrying capacity. Changes in the position of the O2–hemoglobin dissociation curve account for some of the compensatory response to anemia. With chronic anemia, intracellular levels of 2,3-bisphosphoglycerate rise, shifting the dissociation curve to the right and facilitating O2 unloading. This compensatory mechanism can only maintain normal tissue O2 delivery in the face of a 20–30 g/L (2–3 g/dL) deficit in hemoglobin concentration. Finally, further protection of O2 delivery to vital organs is achieved by the shunting of blood away from organs that are relatively rich in blood supply, particularly the kidney, gut, and skin. Certain disorders are commonly associated with anemia. Chronic inflammatory states (e.g., infection, rheumatoid arthritis, cancer) are associated with mild to moderate anemia, whereas lymphoproliferative disorders, such as chronic lymphocytic leukemia and certain other B cell neoplasms, may be associated with autoimmune hemolysis. 
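The WHO cutoffs quoted earlier (<130 g/L in men, <120 g/L in women), together with this passage's observation that gradual-onset anemia may remain asymptomatic until roughly 70–80 g/L, can be combined into a simple screen. This is a sketch under those assumptions; the 80 g/L boundary for the "severe" label is one reading of the text's range, not a formal grading scheme:

```python
def who_anemia(hb_g_per_l: float, sex: str) -> str:
    """Screen a hemoglobin level (g/L) against the WHO definition of
    anemia (<130 g/L men, <120 g/L women). The 'severe' cutoff of
    80 g/L is an assumption drawn from the text's 70-80 g/L range."""
    threshold = 130 if sex == "male" else 120
    if hb_g_per_l >= threshold:
        return "no anemia"
    if hb_g_per_l < 80:
        return "anemia, likely symptomatic (severe)"
    return "anemia"
```

For example, a hemoglobin of 125 g/L is anemic for a man but within the WHO range for a woman, which is why the sex-specific threshold matters.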
APPROACH TO THE PATIENT:

The evaluation of the patient with anemia requires a careful history and physical examination. Nutritional history related to drugs or alcohol intake and family history of anemia should always be assessed. Certain geographic backgrounds and ethnic origins are associated with an increased likelihood of an inherited disorder of the hemoglobin molecule or intermediary metabolism. Glucose-6-phosphate dehydrogenase (G6PD) deficiency and certain hemoglobinopathies are seen more commonly in those of Middle Eastern or African origin, including African Americans, who have a high frequency of G6PD deficiency. Other information that may be useful includes exposure to certain toxic agents or drugs and symptoms related to other disorders commonly associated with anemia. These include symptoms and signs such as bleeding, fatigue, malaise, fever, weight loss, night sweats, and other systemic symptoms. Clues to the mechanisms of anemia may be provided on physical examination by findings of infection, blood in the stool, lymphadenopathy, splenomegaly, or petechiae. Splenomegaly and lymphadenopathy suggest an underlying lymphoproliferative disease, whereas petechiae suggest platelet dysfunction. Past laboratory measurements are helpful to determine a time of onset.

In the anemic patient, physical examination may demonstrate a forceful heartbeat, strong peripheral pulses, and a systolic "flow" murmur. The skin and mucous membranes may be pale if the hemoglobin is <80–100 g/L (8–10 g/dL). This part of the physical examination should focus on areas where vessels are close to the surface such as the mucous membranes, nail beds, and palmar creases. If the palmar creases are lighter in color than the surrounding skin when the hand is hyperextended, the hemoglobin level is usually <80 g/L (8 g/dL). Table 77-1 lists the tests used in the initial workup of anemia.
A routine complete blood count (CBC) is required as part of the evaluation and includes the hemoglobin, hematocrit, and red cell indices: the mean cell volume (MCV) in femtoliters, mean cell hemoglobin (MCH) in picograms per cell, and mean concentration of hemoglobin per volume of red cells (MCHC) in grams per liter (non-SI: grams per deciliter). The red cell indices are calculated as shown in Table 77-2, and the normal variations in the hemoglobin and hematocrit with age are shown in Table 77-3. A number of physiologic factors affect the CBC, including age, sex, pregnancy, smoking, and altitude. High-normal hemoglobin values may be seen in men and women who live at altitude or smoke heavily. Hemoglobin elevations due to smoking reflect normal compensation due to the displacement of O2 by CO in hemoglobin binding. Other important information is provided by the reticulocyte count and measurements of iron supply including serum iron, total iron-binding capacity (TIBC; an indirect measure of serum transferrin), and serum ferritin. Marked alterations in the red cell indices usually reflect disorders of maturation or iron deficiency. A careful evaluation of the peripheral blood smear is important, and clinical laboratories often provide a description of both the red and white cells, a white cell differential count, and the platelet count. In patients with severe anemia and abnormalities in red blood cell morphology and/or low reticulocyte counts, a bone marrow aspirate or biopsy can assist in the diagnosis. Other tests of value in the diagnosis of specific anemias are discussed in chapters on specific disease states. The components of the CBC also help in the classification of anemia. Microcytosis is reflected by a lower than normal MCV (<80), whereas high values (>100) reflect macrocytosis. The MCH and MCHC reflect defects in hemoglobin synthesis (hypochromia). Automated cell counters describe the red cell volume distribution width (RDW). 
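The red cell indices named here are simple arithmetic on the CBC primitives. The sketch below follows the usual conventions (hematocrit in %, hemoglobin in g/dL, red cell count in millions per microliter) and applies the MCV cutoffs given in the text (<80 fL microcytic, >100 fL macrocytic); the function names are illustrative:

```python
def red_cell_indices(hb_g_dl: float, hct_pct: float, rbc_millions_ul: float):
    """Compute MCV (fL), MCH (pg), and MCHC (g/dL) from the CBC.
    Normal values are approximately 90 +/- 8 fL, 30 +/- 3 pg,
    and 33 +/- 2 g/dL respectively."""
    mcv = hct_pct * 10 / rbc_millions_ul   # mean cell volume, fL
    mch = hb_g_dl * 10 / rbc_millions_ul   # mean cell hemoglobin, pg
    mchc = hb_g_dl * 100 / hct_pct         # mean cell Hb concentration, g/dL
                                           # (equivalently proportional to MCH/MCV)
    return mcv, mch, mchc

def size_class(mcv: float) -> str:
    """Classify red cell size by the MCV cutoffs given in the text."""
    if mcv < 80:
        return "microcytic"
    if mcv > 100:
        return "macrocytic"
    return "normocytic"
```

With a typical normal CBC (Hb 15 g/dL, Hct 45%, RBC 5.0 million/µL), this yields MCV 90 fL, MCH 30 pg, and MCHC about 33 g/dL, matching the normal values cited in Table 77-2.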
The MCV (representing the peak of the distribution curve) is insensitive to the appearance of small populations of macrocytes or microcytes. An experienced laboratory technician will be able to identify minor populations of large or small cells or hypochromic cells before the red cell indices change.

TABLE 77-1 Tests Used in the Initial Workup of Anemia (outline): I. Complete blood count (CBC): A. Red blood cell count; B. Red blood cell indices; C. White blood cell count, including nuclear segmentation of neutrophils; D. Platelet count; E. Cell morphology; III. B. Biopsy, including morphology. M/E ratio, ratio of myeloid to erythroid precursors.

TABLE 77-2 Red Blood Cell Indices (fragment): Mean cell hemoglobin concentration = (hemoglobin × 10)/hematocrit, or MCH/MCV; normal value 33 ± 2%.

PART 2 Cardinal Manifestations and Presentation of Diseases

TABLE 77-3 Changes in Normal Hemoglobin/Hematocrit Values with Age, Sex, and Pregnancy. Columns: Age/Sex; Hemoglobin, g/dL; Hematocrit, %. (Source: From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-3 Normal blood smear (Wright stain). High-power field showing normal red cells, a neutrophil, and a few platelets. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

Peripheral Blood Smear The peripheral blood smear provides important information about defects in red cell production (Chap. 81e). As a complement to the red cell indices, the blood smear also reveals variations in cell size (anisocytosis) and shape (poikilocytosis). The degree of anisocytosis usually correlates with increases in the RDW or the range of cell sizes. Poikilocytosis suggests a defect in the maturation of red cell precursors in the bone marrow or fragmentation of circulating red cells. The blood smear may also reveal polychromasia—red cells that are slightly larger than normal and grayish blue in color on the Wright-Giemsa stain.
These cells are reticulocytes that have been prematurely released from the bone marrow, and their color represents residual amounts of ribosomal RNA. These cells appear in circulation in response to EPO stimulation or to architectural damage of the bone marrow (fibrosis, infiltration of the marrow by malignant cells, etc.) that results in their disordered release from the marrow. The appearance of nucleated red cells, Howell-Jolly bodies, target cells, sickle cells, and others may provide clues to specific disorders (Figs. 77-3 to 77-11).

FIGURE 77-4 Severe iron-deficiency anemia. Microcytic and hypochromic red cells smaller than the nucleus of a lymphocyte associated with marked variation in size (anisocytosis) and shape (poikilocytosis). (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-5 Macrocytosis. Red cells are larger than a small lymphocyte and well hemoglobinized. Often macrocytes are oval shaped (macro-ovalocytes).

FIGURE 77-6 Howell-Jolly bodies. In the absence of a functional spleen, nuclear remnants are not culled from the red cells and remain as small homogeneously staining blue inclusions on Wright stain. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-7 Red cell changes in myelofibrosis. The left panel shows a teardrop-shaped cell. The right panel shows a nucleated red cell. These forms can be seen in myelofibrosis.

FIGURE 77-8 Target cells. Target cells have a bull’s-eye appearance and are seen in thalassemia and in liver disease. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-9 Red cell fragmentation. Red cells may become fragmented in the presence of foreign bodies in the circulation, such as mechanical heart valves, or in the setting of thermal injury. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-10 Uremia. The red cells in uremia may acquire numerous regularly spaced, small, spiny projections. Such cells, called burr cells or echinocytes, are readily distinguishable from irregularly spiculated acanthocytes shown in Fig. 77-11.

FIGURE 77-11 Spur cells. Spur cells are recognized as distorted red cells containing several irregularly distributed thornlike projections. Cells with this morphologic abnormality are also called acanthocytes. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

Reticulocyte Count An accurate reticulocyte count is key to the initial classification of anemia. Reticulocytes are red cells that have been recently released from the bone marrow. They are identified by staining with a supravital dye that precipitates the ribosomal RNA (Fig. 77-12). These precipitates appear as blue or black punctate spots and can be counted manually or, currently, by fluorescent emission of dyes that bind to RNA. This residual RNA is metabolized over the first 24–36 h of the reticulocyte’s life span in circulation. Normally, the reticulocyte count ranges from 1 to 2% and reflects the daily replacement of 0.8–1.0% of the circulating red cell population. A corrected reticulocyte count provides a reliable measure of effective red cell production. In the initial classification of anemia, the patient’s reticulocyte count is compared with the expected reticulocyte response. In general, if the EPO and erythroid marrow responses to moderate anemia [hemoglobin <100 g/L (10 g/dL)] are intact, the red cell production rate increases to two to three times normal within 10 days following the onset of anemia. In the face of established anemia, a reticulocyte response less than two to three times normal indicates an inadequate marrow response. To use the reticulocyte count to estimate marrow response, two corrections are necessary.
The first correction adjusts the reticulocyte count based on the reduced number of circulating red cells. With anemia, the percentage of reticulocytes may be increased while the absolute number is unchanged. To correct for this effect, the reticulocyte percentage is multiplied by the ratio of the patient’s hemoglobin or hematocrit to the expected hemoglobin/hematocrit for the age and sex of the patient (Table 77-4). This provides an estimate of the reticulocyte count corrected for anemia.

FIGURE 77-12 Reticulocytes. Methylene blue stain demonstrates residual RNA in newly made red cells. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

Correction #1 for anemia (produces the corrected reticulocyte count): in a person whose reticulocyte count is 9%, hemoglobin 7.5 g/dL, and hematocrit 23%, the corrected reticulocyte count = 9 × (7.5/15) [or × (23/45)] = 4.5%. Note: this correction is not done if the reticulocyte count is reported in absolute numbers (e.g., 50,000/μL of blood).

Correction #2 for the longer life of prematurely released reticulocytes in the blood (produces the reticulocyte production index): in the same person, the reticulocyte production index = 9 × (7.5/15) (hemoglobin correction) ÷ 2 (maturation time correction) = 2.25.

To convert the corrected reticulocyte count to an index of marrow production, a further correction is required, depending on whether some of the reticulocytes in circulation have been released from the marrow prematurely. For this second correction, the peripheral blood smear is examined to see if there are polychromatophilic macrocytes present. These cells, representing prematurely released reticulocytes, are referred to as “shift” cells, and the relationship between the degree of shift and the necessary shift correction factor is shown in Fig. 77-13.
The correction is necessary because these prematurely released cells survive as reticulocytes in circulation for >1 day, thereby providing a falsely high estimate of daily red cell production. If polychromasia is increased, the reticulocyte count, already corrected for anemia, should be divided again by 2 to account for the prolonged reticulocyte maturation time. The second correction factor varies from 1 to 3 depending on the severity of anemia. In general, a correction of 2 is simply used. An appropriate correction is shown in Table 77-4. If polychromatophilic cells are not seen on the blood smear, the second correction is not required.

FIGURE 77-13 Correction of the reticulocyte count. To use the reticulocyte count as an indicator of effective red cell production, the reticulocyte percentage must be corrected based on the level of anemia and the circulating life span of the reticulocytes. Erythroid cells take ∼4.5 days to mature. At a normal hemoglobin, reticulocytes are released to the circulation with ∼1 day left as reticulocytes. However, with different levels of anemia, reticulocytes (and even earlier erythroid cells) may be released from the marrow prematurely. Most patients come to clinical attention with hematocrits in the mid-20s, and thus a correction factor of 2 is commonly used because the observed reticulocytes will live for 2 days in the circulation before losing their RNA.

The now doubly corrected reticulocyte count is the reticulocyte production index, and it provides an estimate of marrow production relative to normal. In many hospital laboratories, the reticulocyte count is reported not only as a percentage but also in absolute numbers. If so, no correction for dilution is required. A summary of the appropriate marrow response to varying degrees of anemia is shown in Table 77-5. Premature release of reticulocytes is normally due to increased EPO stimulation.
However, if the integrity of the bone marrow release process is lost through tumor infiltration, fibrosis, or other disorders, the appearance of nucleated red cells or polychromatophilic macrocytes should still invoke the second reticulocyte correction. The shift correction should always be applied to a patient with anemia and a very high reticulocyte count to provide a true index of effective red cell production. Patients with severe chronic hemolytic anemia may increase red cell production as much as six- to sevenfold. This measure alone confirms that the patient has an appropriate EPO response, a normally functioning bone marrow, and sufficient iron available to meet the demands for new red cell formation. If the reticulocyte production index is <2 in the face of established anemia, a defect in erythroid marrow proliferation or maturation must be present. Tests of Iron Supply and Storage The laboratory measurements that reflect the availability of iron for hemoglobin synthesis include the serum iron, the TIBC, and the percent transferrin saturation. The percent transferrin saturation is derived by dividing the serum iron level (× 100) by the TIBC. The normal serum iron ranges from 9 to 27 μmol/L (50–150 μg/dL), whereas the normal TIBC is 54–64 μmol/L (300–360 μg/dL); the normal transferrin saturation ranges from 25 to 50%. A diurnal variation in the serum iron leads to a variation in the percent transferrin saturation. The serum ferritin is used to evaluate total body iron stores. Adult males have serum ferritin levels that average ∼100 μg/L, corresponding to iron stores of ∼1 g. Adult females have lower serum ferritin levels averaging 30 μg/L, reflecting lower iron stores (∼300 mg). A serum ferritin level of 10–15 μg/L indicates depletion of body iron stores. However, ferritin is also an acute-phase reactant and, in the presence of acute or chronic inflammation, may rise several-fold above baseline levels.
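The saturation formula above can be written out directly; the sample values in the comment are hypothetical, and the normal ranges are those quoted in the text:

```python
def transferrin_saturation_pct(serum_iron_ug_dl, tibc_ug_dl):
    """Percent transferrin saturation = serum iron x 100 / TIBC."""
    return serum_iron_ug_dl * 100.0 / tibc_ug_dl

# Hypothetical sample: serum iron 100 ug/dL (normal 50-150),
# TIBC 330 ug/dL (normal 300-360)
sat = transferrin_saturation_pct(100.0, 330.0)
# ~30%, within the normal 25-50% range
```

Because serum iron varies diurnally, a single saturation value should be interpreted with that variation in mind, as the text notes.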
As a rule, a serum ferritin >200 μg/L means there is at least some iron in tissue stores. Bone Marrow Examination A bone marrow aspirate and smear or a needle biopsy can be useful in the evaluation of some patients with anemia. In patients with hypoproliferative anemia and normal iron status, a bone marrow examination is indicated. Marrow examination can diagnose primary marrow disorders such as myelofibrosis, a red cell maturation defect, or an infiltrative disease (Figs. 77-14 to 77-16). The increase or decrease of one cell lineage (myeloid vs erythroid) compared to another is assessed by a differential count of nucleated cells in a bone marrow smear (the myeloid/erythroid [M/E] ratio). A patient with a hypoproliferative anemia (see below) and a reticulocyte production index <2 will demonstrate an M/E ratio of 2 or 3:1. In contrast, patients with hemolytic disease and a production index >3 will have an M/E ratio of at least 1:1. Maturation disorders are identified from the discrepancy between the M/E ratio and the reticulocyte production index (see below). Either the marrow smear or biopsy can be stained for the presence of iron stores or iron in developing red cells. The storage iron is in the form of ferritin or hemosiderin. On carefully prepared bone marrow smears, small ferritin granules can normally be seen under oil immersion in 20–40% of developing erythroblasts. Such cells are called sideroblasts.

FIGURE 77-14 Normal bone marrow. This is a low-power view of a section of a normal bone marrow biopsy stained with hematoxylin and eosin (H&E). Note that the nucleated cellular elements account for ∼40–50% and the fat (clear areas) accounts for ∼50–60% of the area. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-15 Erythroid hyperplasia. This marrow shows an increase in the fraction of cells in the erythroid lineage as might be seen when a normal marrow compensates for acute blood loss or hemolysis.
The myeloid/erythroid (M/E) ratio is about 1:1. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

FIGURE 77-16 Myeloid hyperplasia. This marrow shows an increase in the fraction of cells in the myeloid or granulocytic lineage as might be seen in a normal marrow responding to infection. The myeloid/erythroid (M/E) ratio is >3:1. (From RS Hillman et al: Hematology in Clinical Practice, 5th ed. New York, McGraw-Hill, 2010.)

Additional laboratory tests may be of value in confirming specific diagnoses. For details of these tests and how they are applied in individual disorders, see Chaps. 126 to 130.

DEFINITION AND CLASSIFICATION OF ANEMIA

Initial Classification of Anemia The functional classification of anemia has three major categories: (1) marrow production defects (hypoproliferation), (2) red cell maturation defects (ineffective erythropoiesis), and (3) decreased red cell survival (blood loss/hemolysis). The classification is shown in Fig. 77-17. A hypoproliferative anemia is typically seen with a low reticulocyte production index together with little or no change in red cell morphology (a normocytic, normochromic anemia) (Chap. 126). Maturation disorders typically have a slight to moderately elevated reticulocyte production index that is accompanied by either macrocytic (Chap. 128) or microcytic (Chaps. 126, 127) red cell indices. Increased red blood cell destruction secondary to hemolysis results in an increase in the reticulocyte production index to at least three times normal (Chap. 129), provided sufficient iron is available. Hemorrhagic anemia does not typically result in production indices of more than 2.0–2.5 times normal because of the limitations placed on expansion of the erythroid marrow by iron availability.
In the first branch point of the classification of anemia, a reticulocyte production index >2.5 indicates that hemolysis is most likely. A reticulocyte production index <2 indicates either a hypoproliferative anemia or a maturation disorder. The latter two possibilities can often be distinguished by the red cell indices, by examination of the peripheral blood smear, or by a marrow examination. If the red cell indices are normal, the anemia is almost certainly hypoproliferative in nature. Maturation disorders are characterized by ineffective red cell production and a low reticulocyte production index. Bizarre red cell shapes—macrocytes or hypochromic microcytes—are seen on the peripheral blood smear. With a hypoproliferative anemia, no erythroid hyperplasia is noted in the marrow, whereas patients with ineffective red cell production have erythroid hyperplasia and an M/E ratio <1:1.

FIGURE 77-17 The physiologic classification of anemia. CBC, complete blood count. [Anemia → CBC, reticulocyte count → index <2.5 or index ≥2.5. An index ≥2.5 points to hemolysis/hemorrhage: blood loss, intravascular hemolysis, metabolic defect, membrane abnormality, hemoglobinopathy, immune destruction, fragmentation. An index <2.5 is sorted by red cell morphology: normocytic normochromic → hypoproliferative (marrow damage: infiltration/fibrosis, aplasia; iron deficiency; decreased stimulation: inflammation); micro- or macrocytic → maturation disorder (cytoplasmic defects: iron deficiency, thalassemia, sideroblastic anemia; nuclear defects).]

Hypoproliferative Anemias At least 75% of all cases of anemia are hypoproliferative in nature. A hypoproliferative anemia reflects absolute or relative marrow failure in which the erythroid marrow has not proliferated appropriately for the degree of anemia. The majority of hypoproliferative anemias are due to mild to moderate iron deficiency or inflammation. A hypoproliferative anemia can result from marrow damage, iron deficiency, or inadequate EPO stimulation.
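The first branch point of the classification can be expressed as a small decision function. A sketch, assuming the figure's index cutoff of 2.5 and the MCV cutoffs of 80 and 100 fL from the earlier discussion of micro- and macrocytosis (the function name and return strings are illustrative):

```python
def classify_anemia(retic_production_index, mcv_fl):
    """First-branch functional classification of anemia (per Fig. 77-17).

    retic_production_index: doubly corrected reticulocyte count.
    mcv_fl: mean cell volume in fL.
    """
    if retic_production_index >= 2.5:
        return "hemolysis/hemorrhage"
    if 80 <= mcv_fl <= 100:
        return "hypoproliferative (normocytic, normochromic)"
    return "maturation disorder (micro- or macrocytic)"

# e.g., an index of 1.0 with MCV 90 fL suggests a hypoproliferative anemia
```

Smear findings and, when needed, marrow examination then refine the category, as the text describes.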
The last may reflect impaired renal function, suppression of EPO production by inflammatory cytokines such as interleukin 1, or reduced tissue needs for O2 from metabolic disease such as hypothyroidism. Only occasionally is the marrow unable to produce red cells at a normal rate, and this is most prevalent in patients with renal failure. With diabetes mellitus or myeloma, the EPO deficiency may be more marked than would be predicted by the degree of renal insufficiency. In general, hypoproliferative anemias are characterized by normocytic, normochromic red cells, although microcytic, hypochromic cells may be observed with mild iron deficiency or long-standing chronic inflammatory disease. The key laboratory tests in distinguishing between the various forms of hypoproliferative anemia include the serum iron and iron-binding capacity, evaluation of renal and thyroid function, a marrow biopsy or aspirate to detect marrow damage or infiltrative disease, and serum ferritin to assess iron stores. An iron stain of the marrow will determine the pattern of iron distribution. Patients with the anemia of acute or chronic inflammation show a distinctive pattern of serum iron (low), TIBC (normal or low), percent transferrin saturation (low), and serum ferritin (normal or high). These changes in iron values are brought about by hepcidin, the iron regulatory hormone that is produced by the liver and is increased in inflammation (Chap. 126). A distinct pattern of results is noted in mild to moderate iron deficiency (low serum iron, high TIBC, low percent transferrin saturation, low serum ferritin) (Chap. 126). Marrow damage by drugs, infiltrative disease such as leukemia or lymphoma, or marrow aplasia is diagnosed from the peripheral blood and bone marrow morphology. With infiltrative disease or fibrosis, a marrow biopsy is required. 
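The two serum iron patterns contrasted above can be sketched as a simple lookup. The numeric cutoffs below (normal ranges from the text; ferritin cutoffs of 15 and 30 μg/L) are illustrative assumptions for the sketch, not validated decision thresholds:

```python
def iron_study_pattern(serum_iron_ug_dl, tibc_ug_dl, ferritin_ug_l):
    """Distinguish inflammation from iron deficiency (illustrative cutoffs)."""
    sat = serum_iron_ug_dl * 100.0 / tibc_ug_dl
    low_iron = serum_iron_ug_dl < 50   # below the normal 50-150 ug/dL
    low_sat = sat < 25                 # below the normal 25-50%
    if low_iron and low_sat:
        if tibc_ug_dl > 360 and ferritin_ug_l < 15:
            return "iron deficiency (low iron, high TIBC, low ferritin)"
        if tibc_ug_dl <= 360 and ferritin_ug_l >= 30:
            return "inflammation (low iron, normal/low TIBC, normal/high ferritin)"
    return "indeterminate; correlate clinically"
```

A marrow iron stain remains the arbiter when serum studies are equivocal, as the text notes.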
Maturation Disorders The presence of anemia with an inappropriately low reticulocyte production index, macro- or microcytosis on smear, and abnormal red cell indices suggests a maturation disorder. Maturation disorders are divided into two categories: nuclear maturation defects, associated with macrocytosis, and cytoplasmic maturation defects, associated with microcytosis and hypochromia, usually from defects in hemoglobin synthesis. The inappropriately low reticulocyte production index is a reflection of the ineffective erythropoiesis that results from the destruction within the marrow of developing erythroblasts. Bone marrow examination shows erythroid hyperplasia. Nuclear maturation defects result from vitamin B12 or folic acid deficiency, drug damage, or myelodysplasia. Drugs that interfere with cellular DNA synthesis, such as methotrexate or alkylating agents, can produce a nuclear maturation defect. Alcohol alone is also capable of producing macrocytosis and a variable degree of anemia, but this is usually associated with folic acid deficiency. Measurements of folic acid and vitamin B12 are critical not only in identifying the specific vitamin deficiency but also because they reflect different pathogenetic mechanisms (Chap. 128). Cytoplasmic maturation defects result from severe iron deficiency or abnormalities in globin or heme synthesis. Iron deficiency occupies an unusual position in the classification of anemia. If the iron-deficiency anemia is mild to moderate, erythroid marrow proliferation is blunted and the anemia is classified as hypoproliferative. However, if the anemia is severe and prolonged, the erythroid marrow will become hyperplastic despite the inadequate iron supply, and the anemia will be classified as ineffective erythropoiesis with a cytoplasmic maturation defect.
In either case, an inappropriately low reticulocyte production index, microcytosis, and a classic pattern of iron values make the diagnosis clear and easily distinguish iron deficiency from other cytoplasmic maturation defects such as the thalassemias. Defects in heme synthesis, in contrast to globin synthesis, are less common and may be acquired or inherited (Chap. 430). Acquired abnormalities are usually associated with myelodysplasia, may lead to either a macro- or microcytic anemia, and are frequently associated with mitochondrial iron loading. In these cases, iron is taken up by the mitochondria of the developing erythroid cell but not incorporated into heme. The iron-encrusted mitochondria surround the nucleus of the erythroid cell, forming a ring. Based on the distinctive finding of so-called ringed sideroblasts on the marrow iron stain, patients are diagnosed as having a sideroblastic anemia—almost always reflecting myelodysplasia. Again, studies of iron parameters are helpful in the differential diagnosis of these patients. Blood Loss/Hemolytic Anemia In contrast to anemias associated with an inappropriately low reticulocyte production index, hemolysis is associated with red cell production indices ≥2.5 times normal. The stimulated erythropoiesis is reflected in the blood smear by the appearance of increased numbers of polychromatophilic macrocytes. A marrow examination is rarely indicated if the reticulocyte production index is increased appropriately. The red cell indices are typically normocytic or slightly macrocytic, reflecting the increased number of reticulocytes. Acute blood loss is not associated with an increased reticulocyte production index because of the time required to increase EPO production and, subsequently, marrow proliferation. Subacute blood loss may be associated with modest reticulocytosis. Anemia from chronic blood loss presents more often as iron deficiency than with the picture of increased red cell production.
The evaluation of blood loss anemia is usually not difficult. Most problems arise when a patient presents with an increased red cell production index from an episode of acute blood loss that went unrecognized. The cause of the anemia and increased red cell production may not be obvious. The confirmation of a recovering state may require observations over a period of 2–3 weeks, during which the hemoglobin concentration will rise and the reticulocyte production index fall (Chap. 129). Hemolytic disease, while dramatic, is among the least common forms of anemia. The ability to sustain a high reticulocyte production index reflects the ability of the erythroid marrow to compensate for hemolysis and, in the case of extravascular hemolysis, the efficient recycling of iron from the destroyed red cells to support red cell production. With intravascular hemolysis, such as paroxysmal nocturnal hemoglobinuria, the loss of iron may limit the marrow response. The level of response depends on the severity of the anemia and the nature of the underlying disease process. Hemoglobinopathies, such as sickle cell disease and the thalassemias, present a mixed picture. The reticulocyte index may be high but is inappropriately low for the degree of marrow erythroid hyperplasia (Chap. 127). Hemolytic anemias present in different ways. Some appear suddenly as an acute, self-limited episode of intravascular or extravascular hemolysis, a presentation pattern often seen in patients with autoimmune hemolysis or with inherited defects of the Embden-Meyerhof pathway or the glutathione reductase pathway. Patients with inherited disorders of the hemoglobin molecule or red cell membrane generally have a lifelong clinical history typical of the disease process. 
Those with chronic hemolytic disease, such as hereditary spherocytosis, may actually present not with anemia but with a complication stemming from the prolonged increase in red cell destruction such as symptomatic bilirubin gallstones or splenomegaly. Patients with chronic hemolysis are also susceptible to aplastic crises if an infectious process interrupts red cell production. The differential diagnosis of an acute or chronic hemolytic event requires the careful integration of family history, the pattern of clinical presentation, and—whether the disease is congenital or acquired—careful examination of the peripheral blood smear. Precise diagnosis may require more specialized laboratory tests, such as hemoglobin electrophoresis or a screen for red cell enzymes. Acquired defects in red cell survival are often immunologically mediated and require a direct or indirect antiglobulin test or a cold agglutinin titer to detect the presence of hemolytic antibodies or complement-mediated red cell destruction (Chap. 129). An overriding principle is to initiate treatment of mild to moderate anemia only when a specific diagnosis is made. Rarely, in the acute setting, anemia may be so severe that red cell transfusions are required before a specific diagnosis is available. Whether the anemia is of acute or gradual onset, the selection of the appropriate treatment is determined by the documented cause(s) of the anemia. Often, the cause of the anemia is multifactorial. For example, a patient with severe rheumatoid arthritis who has been taking anti-inflammatory drugs may have a hypoproliferative anemia associated with chronic inflammation as well as chronic blood loss associated with intermittent gastrointestinal bleeding. In every circumstance, it is important to evaluate the patient’s iron status fully before and during the treatment of any anemia. Transfusion is discussed in Chap. 138e; iron therapy is discussed in Chap.
126; treatment of megaloblastic anemia is discussed in Chap. 128; treatment of other entities is discussed in their respective chapters (sickle cell anemia, Chap. 127; hemolytic anemias, Chap. 129; aplastic anemia and myelodysplasia, Chap. 130). Therapeutic options for the treatment of anemias have expanded dramatically during the past 30 years. Blood component therapy is available and safe. Recombinant EPO as an adjunct to anemia management has transformed the lives of patients with chronic renal failure on dialysis and reduced transfusion needs of anemic cancer patients receiving chemotherapy. Eventually, patients with inherited disorders of globin synthesis or mutations in the globin gene, such as sickle cell disease, may benefit from the successful introduction of targeted genetic therapy (Chap. 91e).

POLYCYTHEMIA

Polycythemia is defined as an increase in the hemoglobin above normal. This increase may be real or only apparent because of a decrease in plasma volume (spurious or relative polycythemia). The term erythrocytosis may be used interchangeably with polycythemia, but some draw a distinction between them: erythrocytosis implies documentation of increased red cell mass, whereas polycythemia refers to any increase in red cells. Often patients with polycythemia are detected through an incidental finding of elevated hemoglobin or hematocrit levels. Concern that the hemoglobin level may be abnormally high is usually triggered at 170 g/L (17 g/dL) for men and 150 g/L (15 g/dL) for women. Hematocrit levels >50% in men or >45% in women may be abnormal. Hematocrits >60% in men and >55% in women are almost invariably associated with an increased red cell mass. Given that the machine that quantitates red cell parameters actually measures hemoglobin concentrations and calculates hematocrits, hemoglobin levels may be a better index.
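The sex-specific screening thresholds quoted above can be collected into one check. A sketch using the cutoffs from the text (hemoglobin 17/15 g/dL; hematocrit 50/45% and 60/55% for men/women); the function name and dictionary keys are illustrative:

```python
def polycythemia_screen(sex, hgb_g_dl, hct_pct):
    """Apply the screening thresholds from the text. sex: 'M' or 'F'."""
    male = sex == "M"
    return {
        "hgb_concern": hgb_g_dl > (17.0 if male else 15.0),
        "hct_possibly_abnormal": hct_pct > (50.0 if male else 45.0),
        "increased_red_cell_mass_likely": hct_pct > (60.0 if male else 55.0),
    }

# e.g., a man with hgb 18 g/dL and hct 62% trips all three flags
```

As the text notes, hemoglobin is the directly measured quantity, so the hemoglobin flag may be the more reliable of the three.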
Features of the clinical history that are useful in the differential diagnosis include smoking history; current residence at high altitude; or a history of congenital heart disease, sleep apnea, or chronic lung disease. Patients with polycythemia may be asymptomatic or experience symptoms related to the increased red cell mass or the underlying disease process that leads to the increased red cell mass. The dominant symptoms from an increased red cell mass are related to hyperviscosity and thrombosis (both venous and arterial), because the blood viscosity increases logarithmically at hematocrits >55%. Manifestations range from digital ischemia to Budd-Chiari syndrome with hepatic vein thrombosis. Abdominal vessel thromboses are particularly common. Neurologic symptoms such as vertigo, tinnitus, headache, and visual disturbances may occur. Hypertension is often present. Patients with polycythemia vera may have aquagenic pruritus and symptoms related to hepatosplenomegaly. Patients may have easy bruising, epistaxis, or bleeding from the gastrointestinal tract. Peptic ulcer disease is common. Patients with hypoxemia may develop cyanosis on minimal exertion or have headache, impaired mental acuity, and fatigue. The physical examination usually reveals a ruddy complexion. Splenomegaly favors polycythemia vera as the diagnosis (Chap. 131). The presence of cyanosis or evidence of a right-to-left shunt suggests congenital heart disease presenting in the adult, particularly tetralogy of Fallot or Eisenmenger’s syndrome (Chap. 236). Increased blood viscosity raises pulmonary artery pressure; hypoxemia can lead to increased pulmonary vascular resistance. Together, these factors can produce cor pulmonale. Polycythemia can be spurious (related to a decrease in plasma volume; Gaisbock’s syndrome), primary, or secondary in origin.
The secondary causes are all associated with increases in EPO levels: either a physiologically appropriate elevation driven by tissue hypoxia (lung disease, high altitude, CO poisoning, high-affinity hemoglobinopathy) or an abnormal overproduction (renal cysts, renal artery stenosis, tumors with ectopic EPO production). A rare familial form of polycythemia is associated with normal EPO levels but hyperresponsive EPO receptors due to mutations.

PART 2 Cardinal Manifestations and Presentation of Diseases

APPROACH TO THE PATIENT:

As shown in Fig. 77-18, the first step is to document the presence of an increased red cell mass using the principle of isotope dilution by administering 51Cr-labeled autologous red blood cells to the patient and sampling blood radioactivity over a 2-h period. If the red cell mass is normal (<36 mL/kg in men, <32 mL/kg in women), the patient has spurious or relative polycythemia. If the red cell mass is increased (>36 mL/kg in men, >32 mL/kg in women), serum EPO levels should be measured. If EPO levels are low or unmeasurable, the patient most likely has polycythemia vera. A mutation in JAK2 (Val617Phe), a key member of the cytokine intracellular signaling pathway, can be found in 90–95% of patients with polycythemia vera; many of those without this particular JAK2 mutation have mutations in exon 12. As a practical matter, few centers assess red cell mass in the setting of an increased hematocrit. The short workup is to measure EPO levels, check for the JAK2 mutation, and perform an abdominal ultrasound to assess spleen size. Tests that support the diagnosis of polycythemia vera include an elevated white blood cell count, an increased absolute basophil count, and thrombocytosis.

FIGURE 77-18 An approach to the differential diagnosis of patients with an elevated hemoglobin (possible polycythemia). AV, atrioventricular; COPD, chronic obstructive pulmonary disease; CT, computed tomography; EPO, erythropoietin; hct, hematocrit; hgb, hemoglobin; IVP, intravenous pyelogram; RBC, red blood cell.

If serum EPO levels are elevated, one needs to distinguish whether the elevation is a physiologic response to hypoxia or related to autonomous EPO production. Patients with low arterial O2 saturation (<92%) should be further evaluated for the presence of heart or lung disease if they are not living at high altitude. Patients with normal O2 saturation who are smokers may have elevated EPO levels because of CO displacement of O2. If carboxyhemoglobin (COHb) levels are high, the diagnosis is "smoker's polycythemia." Such patients should be urged to stop smoking; those who cannot stop smoking require phlebotomy to control their polycythemia. Patients with normal O2 saturation who do not smoke either have an abnormal hemoglobin that does not deliver O2 to the tissues (evaluated by finding elevated O2–hemoglobin affinity) or have a source of EPO production that is not responding to the normal feedback inhibition.
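The stepwise workup above (Fig. 77-18) can be sketched as a small decision function. This is an illustrative sketch only, not clinical guidance: the function name, argument names, and returned strings are invented for this example, and the thresholds are the ones quoted in the text.

```python
def classify_polycythemia(rbc_mass_ml_per_kg, male, epo_low, o2_sat, smoker,
                          high_cohb=False, high_o2_affinity=False):
    """Illustrative sketch of the Fig. 77-18 decision tree (not clinical advice).

    Thresholds from the text: red cell mass is normal if <36 mL/kg (men) or
    <32 mL/kg (women); arterial O2 saturation is low if <92%.
    """
    normal_mass = 36 if male else 32
    if rbc_mass_ml_per_kg < normal_mass:
        return "relative (spurious) erythrocytosis"
    # Increased red cell mass: low EPO points to polycythemia vera
    if epo_low:
        return "probable polycythemia vera -> confirm JAK2 mutation"
    # Elevated EPO: physiologic (hypoxic) vs. autonomous production
    if o2_sat < 92:
        return "evaluate for heart or lung disease (e.g., COPD, shunt)"
    if smoker and high_cohb:
        return "smoker's polycythemia"
    if high_o2_affinity:
        return "high-O2-affinity hemoglobinopathy"
    return "search for autonomous EPO source (renal, hepatic, CNS, uterine tumors)"
```

The branch order mirrors the figure: red cell mass first, then EPO level, then O2 saturation, COHb, and hemoglobin O2 affinity.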
Further workup is dictated by the differential diagnosis of EPO-producing neoplasms. Hepatoma, uterine leiomyoma, and renal cancer or cysts are all detectable with abdominopelvic computed tomography scans. Cerebellar hemangiomas may produce EPO, but they present with localizing neurologic signs and symptoms rather than with polycythemia-related symptoms.

Barbara A. Konkle

The human hemostatic system provides a natural balance between procoagulant and anticoagulant forces. The procoagulant forces include platelet adhesion and aggregation and fibrin clot formation; the anticoagulant forces include the natural inhibitors of coagulation and fibrinolysis. Under normal circumstances, hemostasis is regulated to promote blood flow; however, it is also prepared to clot blood rapidly to arrest blood flow and prevent exsanguination. After bleeding is successfully halted, the system remodels the damaged vessel to restore normal blood flow. The major components of the hemostatic system, which function in concert, are (1) platelets and other formed elements of blood, such as monocytes and red cells; (2) plasma proteins (the coagulation and fibrinolytic factors and inhibitors); and (3) the vessel wall. On vascular injury, platelets adhere to the site of injury, usually the denuded vascular intimal surface. Platelet adhesion is mediated primarily by von Willebrand factor (VWF), a large multimeric protein present in both plasma and the extracellular matrix of the subendothelial vessel wall, which serves as the primary "molecular glue," providing sufficient strength for adherent platelets to withstand the high levels of shear stress that would tend to detach them with the flow of blood. Platelet adhesion is also facilitated by direct binding to subendothelial collagen through specific platelet membrane collagen receptors. Platelet adhesion results in subsequent platelet activation and aggregation.
This process is enhanced and amplified by humoral mediators in plasma (e.g., epinephrine, thrombin); mediators released from activated platelets (e.g., adenosine diphosphate, serotonin); and vessel wall extracellular matrix constituents that come in contact with adherent platelets (e.g., collagen, VWF). Activated platelets undergo the release reaction, during which they secrete contents that further promote aggregation and inhibit the naturally anticoagulant endothelial cell factors. During platelet aggregation (platelet-platelet interaction), additional platelets are recruited from the circulation to the site of vascular injury, leading to the formation of an occlusive platelet thrombus. The platelet plug is anchored and stabilized by the developing fibrin mesh. The platelet glycoprotein (Gp) IIb/IIIa (αIIbβ3) complex is the most abundant receptor on the platelet surface. Platelet activation converts the normally inactive Gp IIb/IIIa receptor into an active receptor, enabling binding to fibrinogen and VWF. Because the surface of each platelet has about 50,000 Gp IIb/IIIa–binding sites, numerous activated platelets recruited to the site of vascular injury can rapidly form an occlusive aggregate by means of a dense network of intercellular fibrinogen bridges. Because this receptor is the key mediator of platelet aggregation, it has become an effective target for antiplatelet therapy. Plasma coagulation proteins (clotting factors) normally circulate in plasma in their inactive forms. The sequence of coagulation protein reactions that culminate in the formation of fibrin was originally described as a waterfall or a cascade. Two pathways of blood coagulation have been described in the past: the so-called extrinsic, or tissue factor, pathway and the so-called intrinsic, or contact activation, pathway. 
We now know that coagulation is normally initiated through tissue factor (TF) exposure and activation through the classic extrinsic pathway but with critically important amplification through elements of the classic intrinsic pathway, as illustrated in Fig. 78-1. These reactions take place on phospholipid surfaces, usually the activated platelet surface. Coagulation testing in the laboratory can reflect other influences due to the artificial nature of the in vitro systems used (see below). The immediate trigger for coagulation is vascular damage that exposes blood to TF, which is constitutively expressed on the surfaces of subendothelial cellular components of the vessel wall, such as smooth muscle cells and fibroblasts. TF is also present in circulating microparticles, presumably shed from cells including monocytes and platelets. TF binds the serine protease factor VIIa; the complex activates factor X to factor Xa. Alternatively, the complex can indirectly activate factor X by initially converting factor IX to factor IXa, which then activates factor X. The participation of factor XI in hemostasis is not dependent on its activation by factor XIIa but rather on its positive feedback activation by thrombin. Thus, factor XIa functions in the propagation and amplification, rather than in the initiation, of the coagulation cascade.

FIGURE 78-2 Fibrin formation and dissolution. (A) Fibrinogen is a trinodular structure consisting of two D domains and one E domain. Thrombin activation results in an ordered lateral assembly of protofibrils (B) with noncovalent associations. Factor XIIIa cross-links the D domains on adjacent molecules (C). Fibrin and fibrinogen (not shown) lysis by plasmin occurs at discrete sites and results in intermediary fibrin(ogen) degradation products (not shown). D-Dimers are the product of complete lysis of fibrin (D), maintaining the cross-linked D domains.
Factor Xa can be formed through the actions of either the TF/factor VIIa complex or factor IXa (with factor VIIIa as a cofactor) and converts prothrombin to thrombin, the pivotal protease of the coagulation system. The essential cofactor for this reaction is factor Va. Like the homologous factor VIIIa, factor Va is produced by thrombin-induced limited proteolysis of factor V. Thrombin is a multifunctional enzyme that converts soluble plasma fibrinogen to an insoluble fibrin matrix. Fibrin polymerization involves an orderly process of intermolecular associations (Fig. 78-2). Thrombin also activates factor XIII (fibrin-stabilizing factor) to factor XIIIa, which covalently cross-links and thereby stabilizes the fibrin clot. The assembly of the clotting factors on activated cell membrane surfaces greatly accelerates their reaction rates and also serves to localize blood clotting to sites of vascular injury. The critical cell membrane components, acidic phospholipids, are not normally exposed on resting cell membrane surfaces. However, when platelets, monocytes, and endothelial cells are activated by vascular injury or inflammatory stimuli, the procoagulant head groups of the membrane anionic phospholipids become translocated to the surfaces of these cells or released as part of microparticles, making them available to support and promote the plasma coagulation reactions.

FIGURE 78-1 Coagulation is initiated by tissue factor (TF) exposure, which, with factor (F) VIIa, activates FIX and FX, which in turn, with FVIII and FV as cofactors, respectively, results in thrombin formation and subsequent conversion of fibrinogen to fibrin. Thrombin activates FXI, FVIII, and FV, amplifying the coagulation signal. Once the TF/FVIIa/FXa complex is formed, tissue factor pathway inhibitor (TFPI) inhibits the TF/FVIIa pathway, making coagulation dependent on the amplification loop through FIX/FVIII. Coagulation requires calcium (not shown) and takes place on phospholipid surfaces, usually the activated platelet membrane.

Several physiologic antithrombotic mechanisms act in concert to prevent clotting under normal circumstances. These mechanisms operate to preserve blood fluidity and to limit blood clotting to specific focal sites of vascular injury. Endothelial cells have many antithrombotic effects. They produce prostacyclin, nitric oxide, and ecto-ADPase/CD39, which act to inhibit platelet binding, secretion, and aggregation. Endothelial cells produce anticoagulant factors including heparan proteoglycans, antithrombin, TF pathway inhibitor, and thrombomodulin. They also activate fibrinolytic mechanisms through the production of tissue plasminogen activator, urokinase, plasminogen activator inhibitor, and annexin-2. The sites of action of the major physiologic antithrombotic pathways are shown in Fig. 78-3. Antithrombin (or antithrombin III) is the major plasma protease inhibitor of thrombin and the other clotting factors in coagulation. Antithrombin neutralizes thrombin and other activated coagulation factors by forming a complex between the active site of the enzyme and the reactive center of antithrombin. The rate of formation of these inactivating complexes increases by a factor of several thousand in the presence of heparin. Antithrombin inactivation of thrombin and other activated clotting factors occurs physiologically on vascular surfaces, where glycosaminoglycans, including heparan sulfates, are present to catalyze these reactions. Inherited quantitative or qualitative deficiencies of antithrombin lead to a lifelong predisposition to venous thromboembolism. Protein C is a plasma glycoprotein that becomes an anticoagulant when it is activated by thrombin.
The thrombin-induced activation of protein C occurs physiologically on thrombomodulin, a transmembrane proteoglycan-binding site for thrombin on endothelial cell surfaces. The binding of protein C to its receptor on endothelial cells places it in proximity to the thrombin-thrombomodulin complex, thereby enhancing its activation efficiency. Activated protein C acts as an anticoagulant by cleaving and inactivating activated factors V and VIII. This reaction is accelerated by a cofactor, protein S, which, like protein C, is a glycoprotein that undergoes vitamin K–dependent posttranslational modification. Quantitative or qualitative deficiencies of protein C or protein S, or resistance to the action of activated protein C by a specific mutation at its target cleavage site in factor Va (factor V Leiden), lead to hypercoagulable states.

FIGURE 78-3 Sites of action of the four major physiologic antithrombotic pathways: antithrombin (AT); protein C/S (PC/PS); tissue factor pathway inhibitor (TFPI); and the fibrinolytic system, consisting of plasminogen, plasminogen activator (PA), and plasmin. PT, prothrombin; Th, thrombin; FDP, fibrin(ogen) degradation products. (Modified from BA Konkle, AI Schafer, in DP Zipes et al [eds]: Braunwald's Heart Disease, 7th ed. Philadelphia, Saunders, 2005.)

Tissue factor pathway inhibitor (TFPI) is a plasma protease inhibitor that regulates the TF-induced extrinsic pathway of coagulation. TFPI inhibits the TF/factor VIIa/factor Xa complex, essentially turning off the TF/factor VIIa initiation of coagulation, which then becomes dependent on the "amplification loop" via factor XI and factor VIII activation by thrombin. TFPI is bound to lipoprotein and can also be released by heparin from endothelial cells, where it is bound to glycosaminoglycans, and from platelets.
The heparin-mediated release of TFPI may play a role in the anticoagulant effects of unfractionated and low-molecular-weight heparins. Any thrombin that escapes the inhibitory effects of the physiologic anticoagulant systems is available to convert fibrinogen to fibrin. In response, the endogenous fibrinolytic system is then activated to dispose of intravascular fibrin and thereby maintain or reestablish the patency of the circulation. Just as thrombin is the key protease enzyme of the coagulation system, plasmin is the major protease enzyme of the fibrinolytic system, acting to digest fibrin to fibrin degradation products. The general scheme of fibrinolysis and its control is shown in Fig. 78-4. The plasminogen activators, tissue-type plasminogen activator (tPA) and urokinase-type plasminogen activator (uPA), cleave the Arg560-Val561 bond of plasminogen to generate the active enzyme plasmin. The lysine-binding sites of plasmin (and plasminogen) permit it to bind to fibrin, so that physiologic fibrinolysis is "fibrin specific." Both plasminogen (through its lysine-binding sites) and tPA possess specific affinity for fibrin and thereby bind selectively to clots. The assembly of a ternary complex, consisting of fibrin, plasminogen, and tPA, promotes the localized interaction between plasminogen and tPA and greatly accelerates the rate of plasminogen activation to plasmin. Moreover, partial degradation of fibrin by plasmin exposes new plasminogen- and tPA-binding sites in carboxy-terminus lysine residues of fibrin fragments to enhance these reactions further.

FIGURE 78-4 A schematic diagram of the fibrinolytic system. Tissue plasminogen activator (tPA) is released from endothelial cells, binds the fibrin clot, and activates plasminogen to plasmin. Excess fibrin is degraded by plasmin to distinct degradation products (FDPs). Any free plasmin is complexed with α2-antiplasmin (α2PI). PAI, plasminogen activator inhibitor; uPA, urokinase-type plasminogen activator.
This creates a highly efficient mechanism to generate plasmin focally on the fibrin clot, which then becomes plasmin's substrate for digestion to fibrin degradation products. Plasmin cleaves fibrin at distinct sites of the fibrin molecule, leading to the generation of characteristic fibrin fragments during the process of fibrinolysis (Fig. 78-2). The sites of plasmin cleavage of fibrin are the same as those in fibrinogen. However, when plasmin acts on covalently cross-linked fibrin, d-dimers are released; hence, d-dimers can be measured in plasma as a relatively specific test of fibrin (rather than fibrinogen) degradation. d-Dimer assays can be used as sensitive markers of blood clot formation and have been validated for clinical use to exclude the diagnosis of deep venous thrombosis (DVT) and pulmonary embolism in selected populations. In addition, d-dimer measurement can be used to stratify patients, particularly women, for risk of recurrent venous thromboembolism (VTE) when measured 1 month after discontinuation of anticoagulation given for treatment of an initial idiopathic event. d-Dimer levels may be elevated in the absence of VTE in elderly people. Physiologic regulation of fibrinolysis occurs primarily at three levels: (1) plasminogen activator inhibitors (PAIs), specifically PAI-1 and PAI-2, inhibit the physiologic plasminogen activators; (2) the thrombin-activatable fibrinolysis inhibitor (TAFI) limits fibrinolysis; and (3) α2-antiplasmin inhibits plasmin. PAI-1 is the primary inhibitor of tPA and uPA in plasma. TAFI cleaves the carboxy-terminal lysine residues of fibrin that aid in localization of plasmin activity. α2-Antiplasmin is the main inhibitor of plasmin in human plasma, inactivating any plasmin not associated with a fibrin clot.

APPROACH TO THE PATIENT:

Disorders of hemostasis may be either inherited or acquired.
A detailed personal and family history is key in determining the chronicity of symptoms and the likelihood of the disorder being inherited, as well as in providing clues to underlying conditions that have contributed to the bleeding or thrombotic state. In addition, the history can give clues to the etiology by determining (1) the site of bleeding (mucosal and/or joint) or thrombosis (arterial and/or venous) and (2) whether an underlying bleeding or clotting tendency was enhanced by another medical condition or by the introduction of medications or dietary supplements. History of Bleeding A history of bleeding is the most important predictor of bleeding risk. In evaluating a patient for a bleeding disorder, a history of at-risk situations, including the response to past surgeries, should be assessed. Does the patient have a history of spontaneous or trauma/surgery-induced bleeding? Spontaneous hemarthroses are a hallmark of moderate and severe factor VIII and IX deficiency and, in rare circumstances, of other clotting factor deficiencies. Mucosal bleeding symptoms are more suggestive of underlying platelet disorders or von Willebrand disease (VWD), termed disorders of primary hemostasis or platelet plug formation. Disorders affecting primary hemostasis are shown in Table 78-1. A bleeding score has been validated as a tool to predict patients more likely to have type 1 VWD (International Society on Thrombosis and Haemostasis Bleeding Assessment Tool [www.isth.org/resource/resmgr/ssc/isth-ssc_bleeding_assessment.pdf]). This tool is most useful for excluding the diagnosis of a bleeding disorder and thus avoiding unnecessary testing. One study found that a low bleeding score (≤3) and a normal activated partial thromboplastin time (aPTT) had a 99.6% negative predictive value for the diagnosis of VWD.
Bleeding symptoms that appear to be more common in patients with bleeding disorders include prolonged bleeding with surgery, dental procedures and extractions, and/or trauma; menorrhagia or postpartum hemorrhage; and large bruises (often described with lumps).

TABLE 78-1 Disorders of Primary Hemostasis
Defects of Platelet Aggregation
Glanzmann's thrombasthenia (absence or dysfunction of platelet glycoprotein [Gp] IIb/IIIa)
Defects of Platelet Secretion
Drug-induced (aspirin, nonsteroidal anti-inflammatory agents, thienopyridines)
Inherited
Nonspecific inherited secretory defects
Nonspecific drug effects
Uremia
Platelet coating (e.g., paraprotein, penicillin)
Defect of Platelet Coagulant Activity

Easy bruising and menorrhagia are common complaints in patients with and without bleeding disorders. Easy bruising can also be a sign of medical conditions in which there is no identifiable coagulopathy; instead, the conditions are caused by an abnormality of blood vessels or their supporting tissues. In Ehlers-Danlos syndrome, there may be posttraumatic bleeding and a history of joint hyperextensibility. Cushing's syndrome, chronic steroid use, and aging result in changes in skin and subcutaneous tissue, and subcutaneous bleeding occurs in response to minor trauma; the latter has been termed senile purpura. Epistaxis is a common symptom, particularly in children and in dry climates, and may not reflect an underlying bleeding disorder. However, it is the most common symptom in hereditary hemorrhagic telangiectasia and in boys with VWD. Clues that epistaxis is a symptom of an underlying bleeding disorder include lack of seasonal variation and bleeding that requires medical evaluation or treatment, including cauterization. Bleeding with eruption of primary teeth is seen in children with more severe bleeding disorders, such as moderate and severe hemophilia. It is uncommon in children with mild bleeding disorders.
Patients with disorders of primary hemostasis (platelet adhesion) may have increased bleeding after dental cleanings and other procedures that involve gum manipulation. Menorrhagia is defined quantitatively as a loss of >80 mL of blood per cycle, based on the quantity of blood loss required to produce iron-deficiency anemia. A complaint of heavy menses is subjective and correlates poorly with excessive blood loss. Predictors of menorrhagia include bleeding resulting in iron-deficiency anemia or a need for blood transfusion, passage of clots >1 inch in diameter, and changing a pad or tampon more than hourly. Menorrhagia is a common symptom in women with underlying bleeding disorders and is reported in the majority of women with VWD, women with factor XI deficiency, and symptomatic carriers of hemophilia. Women with underlying bleeding disorders are more likely to have other bleeding symptoms, including bleeding after dental extractions, postoperative bleeding, and postpartum bleeding, and are much more likely to have menorrhagia beginning at menarche than women with menorrhagia from other causes. Postpartum hemorrhage (PPH) is a common symptom in women with underlying bleeding disorders. In women with type 1 VWD and in symptomatic carriers of hemophilia A, in whom levels of VWF and factor VIII usually normalize during pregnancy, PPH may be delayed. Women with a history of PPH have a high risk of recurrence with subsequent pregnancies. Rupture of ovarian cysts with intraabdominal hemorrhage has also been reported in women with underlying bleeding disorders. Tonsillectomy is a major hemostatic challenge, because intact hemostatic mechanisms are essential to prevent excessive bleeding from the tonsillar bed. Bleeding may occur early after surgery or approximately 7 days postoperatively, with loss of the eschar at the operative site. Similar delayed bleeding is seen after colonic polyp resection.
Gastrointestinal (GI) bleeding and hematuria are usually due to underlying pathology, and procedures to identify and treat the bleeding site should be undertaken, even in patients with known bleeding disorders. VWD, particularly types 2 and 3, has been associated with angiodysplasia of the bowel and GI bleeding. Hemarthroses and spontaneous muscle hematomas are characteristic of moderate or severe congenital factor VIII or IX deficiency. They can also be seen in moderate and severe deficiencies of fibrinogen, prothrombin, and factors V, VII, and X. Spontaneous hemarthroses occur rarely in other bleeding disorders except for severe VWD with associated factor VIII levels <5%. Muscle and soft tissue bleeds are also common in acquired factor VIII deficiency. Bleeding into a joint results in severe pain and swelling, as well as loss of function, but is rarely associated with discoloration from bruising around the joint. Life-threatening sites of bleeding include the oropharynx, where bleeding can obstruct the airway; the central nervous system; and the retroperitoneum. Central nervous system bleeding is the major cause of bleeding-related deaths in patients with severe congenital factor deficiencies. Prohemorrhagic Effects of Medications and Dietary Supplements Aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs) that inhibit cyclooxygenase 1 impair primary hemostasis and may exacerbate bleeding from another cause or even unmask a previously occult mild bleeding disorder such as VWD. All NSAIDs, however, can precipitate GI bleeding, which may be more severe in patients with underlying bleeding disorders. The aspirin effect on platelet function as assessed by aggregometry can persist for up to 7 days, although it has frequently returned to normal by 3 days after the last dose. The effect of other NSAIDs is shorter, as the inhibitory effect is reversed once the drug is cleared.
Thienopyridines (clopidogrel and prasugrel) inhibit ADP-mediated platelet aggregation and, like NSAIDs, can precipitate or exacerbate bleeding symptoms. Many herbal supplements can impair hemostatic function (Table 78-2). Some are more convincingly associated with a bleeding risk than others. Fish oil or concentrated omega-3 fatty acid supplements impair platelet function. They alter platelet biochemistry to produce more PGI3, a more potent platelet inhibitor than prostacyclin (PGI2), and more thromboxane A3, a less potent platelet activator than thromboxane A2. In fact, diets naturally rich in omega-3 fatty acids can result in a prolonged bleeding time and abnormal platelet aggregation studies, but the actual associated bleeding risk is unclear. Vitamin E appears to inhibit protein kinase C–mediated platelet aggregation and nitric oxide production. In patients with unexplained bruising or bleeding, it is prudent to review any new medications or supplements and discontinue those that may be associated with bleeding.

TABLE 78-2 Herbs with Potential Antiplatelet Activity
Ginkgo (Ginkgo biloba L.)
Garlic (Allium sativum)
Bilberry (Vaccinium myrtillus)
Ginger (Zingiber officinale)
Dong quai (Angelica sinensis)
Feverfew (Tanacetum parthenium)
Asian ginseng (Panax ginseng)
American ginseng (Panax quinquefolius)
Siberian ginseng/eleuthero (Eleutherococcus senticosus)
Turmeric (Curcuma longa)
Meadowsweet (Filipendula ulmaria)
Willow (Salix spp.)
Chamomile (Matricaria recutita, Chamaemelum nobile)

Underlying Systemic Diseases That Cause or Exacerbate a Bleeding Tendency Acquired bleeding disorders are commonly secondary to, or associated with, systemic disease. The clinical evaluation of a patient with a bleeding tendency must therefore include a thorough assessment for evidence of underlying disease. Bruising or mucosal bleeding may be the presenting complaint in liver disease, severe renal impairment, hypothyroidism, paraproteinemias or amyloidosis, and conditions causing bone marrow failure. All coagulation factors are synthesized in the liver, and hepatic failure results in combined factor deficiencies. This is often compounded by thrombocytopenia from splenomegaly due to portal hypertension. Coagulation factors II, VII, IX, and X and proteins C, S, and Z are dependent on vitamin K for posttranslational modification. Although vitamin K is required in both procoagulant and anticoagulant processes, the phenotype of vitamin K deficiency, or of the warfarin effect on coagulation, is bleeding. The normal blood platelet count is 150,000–450,000/μL. Thrombocytopenia results from decreased production, increased destruction, and/or sequestration. Although the bleeding risk varies somewhat with the cause of the thrombocytopenia, bleeding rarely occurs in isolated thrombocytopenia at counts >50,000/μL and usually not until counts fall below 10,000–20,000/μL. Coexisting coagulopathy, as is seen in liver failure or disseminated intravascular coagulation, and infection all increase the risk of bleeding in the thrombocytopenic patient. Most procedures can be performed in patients with a platelet count of 50,000/μL. The level needed for major surgery will depend on the type of surgery and the patient's underlying medical state, although a count of approximately 80,000/μL is likely sufficient. The risk of thrombosis, like that of bleeding, is influenced by both genetic and environmental factors. The major risk factor for arterial thrombosis is atherosclerosis, whereas for venous thrombosis, the risk factors are immobility, surgery, underlying medical conditions such as malignancy, medications such as hormonal therapy, obesity, and genetic predispositions. Factors that increase risks for venous and for both venous and arterial thromboses are shown in Table 78-3.
The most important point in a history related to venous thrombosis is determining whether the thrombotic event was idiopathic or precipitated. In patients without underlying malignancy, having an idiopathic event is the strongest predictor of recurrence of VTE. In patients who have a vague history of thrombosis, a history of being treated with warfarin suggests a past DVT. Age is an important risk factor for venous thrombosis: the risk of DVT increases with each decade, from an approximate incidence of 1/100,000 per year in early childhood to 1/200 per year among octogenarians. Family history is helpful in determining whether there is a genetic predisposition and how strong that predisposition appears to be. A genetic thrombophilia that confers a relatively small increased risk, such as heterozygosity for the prothrombin G20210A or factor V Leiden mutation, may be a minor determinant of risk in an elderly individual undergoing a high-risk surgical procedure. As illustrated in Fig. 78-5, a thrombotic event usually has more than one contributing factor. Predisposing factors must be carefully assessed to determine the risk of recurrent thrombosis and, with consideration of the patient's bleeding risk, to determine the length of anticoagulation.

TABLE 78-3 Risk Factors for Thrombosis
Age
Previous thrombosis
Immobilization
Major surgery
Pregnancy and puerperium
Hospitalization
Obesity
Infection
APC resistance, nongenetic
Smoking
Elevated factor II, IX, XI
Elevated TAFI levels
Low levels of TFPI
a Unknown whether risk is inherited or acquired.
Abbreviations: APC, activated protein C; TAFI, thrombin-activatable fibrinolysis inhibitor; TFPI, tissue factor pathway inhibitor.

FIGURE 78-5 Thrombotic risk over time. Shown schematically is an individual's thrombotic risk over time. An underlying factor V Leiden mutation provides a "theoretically" constant increased risk. The thrombotic risk increases with age and, intermittently, with oral contraceptive (OCP) or hormone replacement therapy (HRT) use; other events may increase the risk further. At some point, the cumulative risk may increase to the threshold for thrombosis and result in deep venous thrombosis (DVT). Note: The magnitude and duration of risk portrayed in the figure are meant for example only and may not precisely reflect the relative risk determined by clinical study. (From BA Konkle, A Schafer, in DP Zipes et al [eds]: Braunwald's Heart Disease, 7th ed. Philadelphia, Saunders, 2005; modified with permission from FR Rosendaal: Venous thrombosis: A multicausal disease. Lancet 353:1167, 1999.)

Similar consideration should be given in determining the need, if any, to test the patient and family members for thrombophilias. Careful history taking and clinical examination are essential components in the assessment of bleeding and thrombotic risk. Laboratory tests of coagulation complement, but cannot substitute for, clinical assessment. No test exists that provides a global assessment of hemostasis. The bleeding time has been used to assess bleeding risk; however, it does not predict bleeding risk with surgery, and it is not recommended for this indication. The PFA-100, an instrument that measures platelet-dependent coagulation under flow conditions, is more sensitive and specific for VWD than the bleeding time; however, it is not sensitive enough to rule out mild bleeding disorders. PFA-100 closure times are prolonged in patients with some, but not all, inherited platelet disorders, and its utility in predicting bleeding risk has not been determined. For routine preoperative and preprocedure testing, an abnormal prothrombin time (PT) may detect liver disease or vitamin K deficiency that had not been previously appreciated. Studies have not confirmed the usefulness of an aPTT in preoperative evaluations in patients with a negative bleeding history.
The primary use of coagulation testing should be to confirm the presence and type of bleeding disorder in a patient with a suspicious clinical history. Because of the nature of coagulation assays, proper sample acquisition and handling are critical to obtaining valid results. In patients with abnormal coagulation assays who have no bleeding history, repeat studies with attention to these factors frequently result in normal values. Most coagulation assays are performed in sodium citrate–anticoagulated plasma that is recalcified for the assay. Because the anticoagulant is in liquid solution and needs to be added to blood in proportion to the plasma volume, incorrectly filled or inadequately mixed blood collection tubes will give erroneous results. Vacutainer tubes should be filled to >90% of the recommended fill, which is usually denoted by a line on the tube. An elevated hematocrit (>55%) can result in falsely prolonged values due to a decreased plasma-to-anticoagulant ratio.

Screening Assays The most commonly used screening tests are the PT, aPTT, and platelet count. The PT assesses factors I (fibrinogen), II (prothrombin), V, VII, and X (Fig. 78-6). The PT measures the time to clot formation of the citrated plasma after recalcification and addition of thromboplastin, a mixture of TF and phospholipids. The sensitivity of the assay varies by the source of thromboplastin. To adjust for this variability, the overall sensitivity of different thromboplastins to reduction of the vitamin K–dependent clotting factors II, VII, IX, and X in anticoagulated patients is now expressed as the International Sensitivity Index (ISI). An inverse relationship exists between ISI and thromboplastin sensitivity. The international normalized ratio (INR) is then determined based on the formula: INR = (PT_patient/PT_normal mean)^ISI. The relationship between defects in secondary hemostasis (fibrin formation) and coagulation test abnormalities is shown in Table 78-4.
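The INR formula above can be made concrete with a short calculation; the PT values and ISI below are illustrative numbers, not reference values.

```python
def inr(pt_patient: float, pt_normal_mean: float, isi: float) -> float:
    """INR = (PT_patient / PT_normal_mean) ** ISI, where ISI is the
    International Sensitivity Index of the thromboplastin reagent."""
    return (pt_patient / pt_normal_mean) ** isi

# Illustrative values: patient PT 24 s, laboratory mean normal PT 12 s.
# With a fully sensitive reagent (ISI = 1.0) the INR equals the PT ratio:
inr(24.0, 12.0, 1.0)   # 2.0
# A less sensitive reagent (higher ISI) amplifies the same ratio:
inr(18.0, 12.0, 2.0)   # 1.5 ** 2 = 2.25
```

This also makes the inverse ISI–sensitivity relationship concrete: the higher the reagent's ISI, the more a given prolongation of the PT ratio is amplified when converted to an INR.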
The INR was developed to assess stable anticoagulation due to reduction of vitamin K–dependent coagulation factors; it is commonly used in the evaluation of patients with liver disease. Although it does allow comparison between laboratories, reagent sensitivity as used to determine the ISI is not the same in liver disease as with warfarin anticoagulation. In addition, progressive liver failure is associated with variable changes in coagulation factors; the degree of prolongation of either the PT or the INR only roughly predicts the bleeding risk. Thrombin generation has been shown to be normal in many patients with mild to moderate liver dysfunction. Because the PT measures only one aspect of hemostasis affected by liver dysfunction, we likely overestimate the bleeding risk of a mildly elevated INR in this setting.

The aPTT assesses the intrinsic and common coagulation pathways: factors XI, IX, VIII, X, V, and II; fibrinogen; prekallikrein; high-molecular-weight kininogen; and factor XII (Fig. 78-6). The aPTT reagent contains phospholipids derived from either animal or vegetable sources that function as a platelet substitute in the coagulation pathways and includes an activator of the intrinsic coagulation system, such as nonparticulate ellagic acid or the particulate activators kaolin, celite, or micronized silica.

Figure 78-6 Coagulation factor activity tested in the activated partial thromboplastin time (aPTT) in red, in the prothrombin time (PT) in green, or in both. F, factor; HMWK, high-molecular-weight kininogen; PK, prekallikrein.

Prolonged aPTT (Table 78-4):
No clinical bleeding—↓ factor XII, high-molecular-weight kininogen, prekallikrein
Variable, but usually mild, bleeding—↓ factor XI, mild ↓ factor VIII and factor IX
Frequent, severe bleeding—severe deficiencies of factors VIII and IX
Heparin and direct thrombin inhibitors
The phospholipid composition of aPTT reagents varies, which influences the sensitivity of individual reagents to clotting factor deficiencies and to inhibitors such as heparin and lupus anticoagulants. Thus, aPTT results will vary from one laboratory to another, and the normal range in the laboratory where the testing occurs should be used in the interpretation. Local laboratories can relate their aPTT values to therapeutic heparin anticoagulation by correlating aPTT values with direct measurements of heparin activity (anti-Xa or protamine titration assays) in samples from heparinized patients, although correlation between these assays is often poor. The aPTT reagent will vary in sensitivity to individual factor deficiencies and usually becomes prolonged with individual factor deficiencies of 30–50%.

Mixing Studies Mixing studies are used to evaluate a prolonged aPTT or, less commonly, a prolonged PT, to distinguish between a factor deficiency and an inhibitor. In this assay, normal plasma and patient plasma are mixed in a 1:1 ratio, and the aPTT or PT is determined immediately and after incubation at 37°C for varying times, typically 30, 60, and/or 120 min. With isolated factor deficiencies, the aPTT will correct with mixing and stay corrected with incubation. With aPTT prolongation due to a lupus anticoagulant, the mixing and incubation will show no correction. In acquired neutralizing factor antibodies, notably an acquired factor VIII inhibitor, the initial assay may or may not correct immediately after mixing but will prolong or remain prolonged with incubation at 37°C. Failure to correct with mixing can also be due to the presence of other inhibitors or interfering substances such as heparin, fibrin split products, and paraproteins.

Specific Factor Assays Decisions to proceed with specific clotting factor assays will be influenced by the clinical situation and the results of coagulation screening tests.
Precise diagnosis and effective management of inherited and acquired coagulation deficiencies necessitate quantitation of the relevant factors. When bleeding is severe, specific assays are urgently required to guide appropriate therapy. Individual factor assays are usually performed as modifications of the mixing study, where the patient’s plasma is mixed with plasma deficient in the factor being studied. This will correct all factor deficiencies to >50%, thus making prolongation of clot formation due to a factor deficiency dependent on the factor missing from the added plasma.

Testing for Antiphospholipid Antibodies Antibodies to phospholipids (cardiolipin) or phospholipid-binding proteins (β2-glycoprotein I and others) are detected by enzyme-linked immunosorbent assay (ELISA). When these antibodies interfere with phospholipid-dependent coagulation tests, they are termed lupus anticoagulants. The aPTT has variable sensitivity to lupus anticoagulants, depending in part on the aPTT reagents used. An assay using a sensitive reagent has been termed an LA-PTT. The dilute Russell viper venom test (dRVVT) and the tissue thromboplastin inhibition (TTI) test are modifications of standard tests with the phospholipid reagent decreased, thus increasing the sensitivity to antibodies that interfere with the phospholipid component. The tests, however, are not specific for lupus anticoagulants, because factor deficiencies or other inhibitors will also result in prolongation. Documentation of a lupus anticoagulant requires not only prolongation of a phospholipid-dependent coagulation test but also lack of correction when mixed with normal plasma and correction with the addition of activated platelet membranes or certain phospholipids (e.g., hexagonal phase).
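The mixing-study interpretation described above is essentially a small decision tree; a minimal sketch (the function name and return strings are illustrative, and real interpretation must also exclude interfering substances such as heparin, fibrin split products, and paraproteins):

```python
def interpret_mixing_study(corrects_on_mix: bool, stays_corrected: bool) -> str:
    """Schematic interpretation of a 1:1 mix of patient and normal plasma,
    read immediately after mixing and again after incubation at 37 C."""
    if corrects_on_mix and stays_corrected:
        # Normal plasma supplies the missing factor (to >50% activity).
        return "isolated factor deficiency"
    if corrects_on_mix and not stays_corrected:
        # Time-dependent neutralization, e.g., an acquired factor VIII inhibitor.
        return "time-dependent inhibitor"
    # No correction at any point suggests an immediate-acting inhibitor.
    return "immediate-acting inhibitor (e.g., lupus anticoagulant)"
```

For example, a prolonged aPTT that corrects on mixing but prolongs again after 60 min of incubation maps to the time-dependent-inhibitor branch.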
PART 2 Cardinal Manifestations and Presentation of Diseases

Table 78-4
Prolonged PT: factor VII deficiency; vitamin K deficiency—early; warfarin anticoagulation; direct Xa inhibitors (rivaroxaban, apixaban)
Prolonged PT and aPTT: factor II, V, X, or fibrinogen deficiency; vitamin K deficiency—late; direct thrombin inhibitors
Prolonged thrombin time: heparin or heparin-like inhibitors; direct thrombin inhibitors (e.g., dabigatran, argatroban, bivalirudin); mild or no bleeding—dysfibrinogenemia; frequent, severe bleeding—afibrinogenemia
Prolonged PT and/or aPTT not corrected with mixing with normal plasma: bleeding—specific factor inhibitor; no symptoms, or clotting and/or pregnancy loss—lupus anticoagulant; disseminated intravascular coagulation; heparin or direct thrombin inhibitor
Normal screening tests with bleeding: deficiency of α2-antiplasmin or plasminogen activator inhibitor 1; treatment with fibrinolytic therapy

Other Coagulation Tests The thrombin time and the reptilase time measure fibrinogen conversion to fibrin and are prolonged when the fibrinogen level is low (usually <80–100 mg/dL) or qualitatively abnormal, as seen in inherited or acquired dysfibrinogenemias, or when fibrin/fibrinogen degradation products interfere. The thrombin time, but not the reptilase time, is prolonged in the presence of heparin. The thrombin time is markedly prolonged in the presence of the direct thrombin inhibitor dabigatran; a dilute thrombin time can be used to assess drug activity. Measurement of anti–factor Xa plasma inhibitory activity is a test frequently used to assess low-molecular-weight heparin (LMWH) levels, as a direct measurement of unfractionated heparin (UFH) activity, or to assess activity of the new direct Xa inhibitors rivaroxaban and apixaban. Drug in the patient sample inhibits the enzymatic conversion of an Xa-specific chromogenic substrate to colored product by factor Xa. Standard curves are created using multiple concentrations of drug and are used to calculate the concentration of anti-Xa activity in the patient plasma.
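The standard-curve step of the anti-Xa assay can be sketched as simple interpolation; the calibration pairs below are invented for illustration (real assays use calibrators specific to the drug being measured):

```python
# Hypothetical calibration points: (absorbance of colored product, anti-Xa
# activity in IU/mL). More drug inhibits more factor Xa, so less chromogenic
# substrate is converted and the measured absorbance falls.
CALIBRATION = [(0.15, 1.00), (0.30, 0.75), (0.50, 0.50), (0.75, 0.25), (1.00, 0.00)]

def anti_xa_activity(absorbance: float) -> float:
    """Convert a chromogenic readout to anti-Xa activity by piecewise-linear
    interpolation of the standard curve (clamped at the curve's ends)."""
    pts = sorted(CALIBRATION)  # ascending by absorbance
    if absorbance <= pts[0][0]:
        return pts[0][1]
    if absorbance >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= absorbance <= x1:
            return y0 + (y1 - y0) * (absorbance - x0) / (x1 - x0)

anti_xa_activity(0.625)  # 0.375, halfway between the 0.50 and 0.25 IU/mL calibrators
```

Clinical analyzers fit the calibrators and report the interpolated activity in the same way, though typically with a regression fit rather than a piecewise-linear one.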
Laboratory Testing for Thrombophilia Laboratory assays to detect thrombophilic states include molecular diagnostics and immunologic and functional assays. These assays vary in their sensitivity and specificity for the condition being tested. Furthermore, acute thrombosis, acute illnesses, inflammatory conditions, pregnancy, and medications affect levels of many coagulation factors and their inhibitors. Antithrombin is decreased by heparin and in the setting of acute thrombosis. Protein C and S levels may be increased in the setting of acute thrombosis and are decreased by warfarin. Antiphospholipid antibodies are frequently transiently positive in acute illness. Testing for genetic thrombophilias should, in general, only be performed when there is a strong family history of thrombosis and results would affect clinical decision making. Because thrombophilia evaluations are usually performed to assess the need to extend anticoagulation, testing should be performed in a steady state, remote from the acute event. In most instances, warfarin anticoagulation can be stopped after the initial 3–6 months of treatment, and testing can be performed at least 3 weeks later. As a sensitive marker of coagulation activation, the quantitative d-dimer assay, drawn 4 weeks after stopping anticoagulation, can be used to stratify risk of recurrent thrombosis in patients who have an idiopathic event. Measures of Platelet Function The bleeding time has been used to assess bleeding risk; however, it has not been found to predict bleeding risk with surgery, and it is not recommended for use for this indication. The PFA-100 and similar instruments that measure platelet-dependent coagulation under flow conditions are generally more sensitive and specific for platelet disorders and VWD than the bleeding time; however, data are insufficient to support their use to predict bleeding risk or monitor response to therapy, and they will be normal in some patients with platelet disorders or mild VWD. 
When they are used in the evaluation of a patient with bleeding symptoms, abnormal results, as with the bleeding time, require specific testing, such as VWF assays and/or platelet aggregation studies. Because all of these “screening” assays may miss patients with mild bleeding disorders, further studies are needed to define their role in hemostasis testing. For classic platelet aggregometry, various agonists are added to the patient’s platelet-rich plasma and platelet aggregation is measured. Tests of platelet secretion in response to agonists can also be measured. These tests are affected by many factors, including numerous medications, and the association between minor defects in aggregation or secretion in these assays and bleeding risk is not clearly established.

Robert I. Handin, MD, contributed this chapter in the 16th edition, and some material from that chapter has been retained here.

Chapter 79 Enlargement of Lymph Nodes and Spleen
Patrick H. Henry, Dan L. Longo

This chapter is intended to serve as a guide to the evaluation of patients who present with enlargement of the lymph nodes (lymphadenopathy) or the spleen (splenomegaly). Lymphadenopathy is a rather common clinical finding in primary care settings, whereas palpable splenomegaly is less so. Lymphadenopathy may be an incidental finding in patients being examined for various reasons, or it may be a presenting sign or symptom of the patient’s illness. The physician must eventually decide whether the lymphadenopathy is a normal finding or one that requires further study, up to and including biopsy. Soft, flat, submandibular nodes (<1 cm) are often palpable in healthy children and young adults; healthy adults may have palpable inguinal nodes of up to 2 cm, which are considered normal. Further evaluation of these normal nodes is not warranted. In contrast, if the physician believes the node(s) to be abnormal, then pursuit of a more precise diagnosis is needed.
APPROACH TO THE PATIENT: Lymphadenopathy

Lymphadenopathy may be a primary or secondary manifestation of numerous disorders, as shown in Table 79-1. Many of these disorders are infrequent causes of lymphadenopathy. In primary care practice, more than two-thirds of patients with lymphadenopathy have nonspecific causes or upper respiratory illnesses (viral or bacterial), and <1% have a malignancy. In one study, 84% of patients referred for evaluation of lymphadenopathy had a “benign” diagnosis. The remaining 16% had a malignancy (lymphoma or metastatic adenocarcinoma). Of the patients with benign lymphadenopathy, 63% had a nonspecific or reactive etiology (no causative agent found), and the remainder had a specific cause demonstrated, most commonly infectious mononucleosis, toxoplasmosis, or tuberculosis. Thus, the vast majority of patients with lymphadenopathy will have a nonspecific etiology requiring few diagnostic tests. The physician will be aided in the pursuit of an explanation for the lymphadenopathy by a careful medical history, physical examination, selected laboratory tests, and perhaps an excisional lymph node biopsy. The medical history should reveal the setting in which lymphadenopathy is occurring. Symptoms such as sore throat, cough, fever, night sweats, fatigue, weight loss, or pain in the nodes should be sought. The patient’s age, sex, occupation, exposure to pets, sexual behavior, and use of drugs such as diphenylhydantoin are other important historic points. For example, children and young adults usually have benign (i.e., nonmalignant) disorders that account for the observed lymphadenopathy such as viral or bacterial upper respiratory infections; infectious mononucleosis; toxoplasmosis; and, in some countries, tuberculosis. In contrast, after age 50, the incidence of malignant disorders increases and that of benign disorders decreases.
The physical examination can provide useful clues such as the extent of lymphadenopathy (localized or generalized), size of nodes, texture, presence or absence of nodal tenderness, signs of inflammation over the node, skin lesions, and splenomegaly. A thorough ear, nose, and throat (ENT) examination is indicated in adult patients with cervical adenopathy and a history of tobacco use.

Table 79-1
1. Infectious diseases
   a. Viral—infectious mononucleosis syndromes (EBV, CMV), infectious hepatitis, herpes simplex, herpesvirus-6, varicella-zoster virus, rubella, measles, adenovirus, HIV, epidemic keratoconjunctivitis, vaccinia, herpesvirus-8
   b. Bacterial—streptococci, staphylococci, cat-scratch disease, brucellosis, tularemia, plague, chancroid, melioidosis, glanders, tuberculosis, atypical mycobacterial infection, primary and secondary syphilis, diphtheria, leprosy, Bartonella
   c. Fungal—histoplasmosis, coccidioidomycosis, paracoccidioidomycosis
   d. Chlamydial—lymphogranuloma venereum, trachoma
   e. Parasitic—toxoplasmosis, leishmaniasis, trypanosomiasis, filariasis
   f. Rickettsial—scrub typhus, rickettsialpox, Q fever
2. Immunologic diseases
   h. Drug hypersensitivity—diphenylhydantoin, hydralazine, allopurinol, primidone, gold, carbamazepine, etc.
   m. Autoimmune lymphoproliferative syndrome
   n. IgG4-related disease
3. Malignant diseases
   a. Hematologic—Hodgkin’s disease, non-Hodgkin’s lymphomas, acute or chronic lymphocytic leukemia, hairy cell leukemia, malignant histiocytosis, amyloidosis
4. Lipid storage diseases—Gaucher’s, Niemann-Pick, Fabry, Tangier
6. Other disorders
   f. Sinus histiocytosis with massive lymphadenopathy (Rosai-Dorfman disease)
   k. Vascular transformation of sinuses
   l. Inflammatory pseudotumor of lymph node
   m. Congestive heart failure
Abbreviations: CMV, cytomegalovirus; EBV, Epstein-Barr virus.
Localized or regional adenopathy implies involvement of a single anatomic area. Generalized adenopathy has been defined as involvement of three or more noncontiguous lymph node areas. Many of the causes of lymphadenopathy (Table 79-1) can produce localized or generalized adenopathy, so this distinction is of limited utility in the differential diagnosis. Nevertheless, generalized lymphadenopathy is frequently associated with nonmalignant disorders such as infectious mononucleosis (Epstein-Barr virus [EBV] or cytomegalovirus [CMV]), toxoplasmosis, AIDS, other viral infections, systemic lupus erythematosus (SLE), and mixed connective tissue disease. Acute and chronic lymphocytic leukemias and malignant lymphomas also produce generalized adenopathy in adults. The site of localized or regional adenopathy may provide a useful clue about the cause. Occipital adenopathy often reflects an infection of the scalp, and preauricular adenopathy accompanies conjunctival infections and cat-scratch disease. The most frequent site of regional adenopathy is the neck, and most of the causes are benign—upper respiratory infections, oral and dental lesions, infectious mononucleosis, or other viral illnesses. The chief malignant causes include metastatic cancer from head and neck, breast, lung, and thyroid primaries. Enlargement of supraclavicular and scalene nodes is always abnormal. Because these nodes drain regions of the lung and retroperitoneal space, they can reflect lymphomas, other cancers, or infectious processes arising in these areas. Virchow’s node is an enlarged left supraclavicular node infiltrated with metastatic cancer from a gastrointestinal primary. Metastases to supraclavicular nodes also occur from lung, breast, testis, or ovarian cancers. Tuberculosis, sarcoidosis, and toxoplasmosis are nonneoplastic causes of supraclavicular adenopathy.
Axillary adenopathy is usually due to injuries or localized infections of the ipsilateral upper extremity. Malignant causes include melanoma or lymphoma and, in women, breast cancer. Inguinal lymphadenopathy is usually secondary to infections or trauma of the lower extremities and may accompany sexually transmitted diseases such as lymphogranuloma venereum, primary syphilis, genital herpes, or chancroid. These nodes may also be involved by lymphomas and metastatic cancer from primary lesions of the rectum, genitalia, or lower extremities (melanoma). The size and texture of the lymph node(s) and the presence of pain are useful parameters in evaluating a patient with lymphadenopathy. Nodes <1.0 cm² in area (1.0 cm × 1.0 cm or less) are almost always secondary to benign, nonspecific reactive causes. In one retrospective analysis of younger patients (9–25 years) who had a lymph node biopsy, a maximum diameter of >2 cm served as one discriminant for predicting that the biopsy would reveal malignant or granulomatous disease. Another study showed that a lymph node size of 2.25 cm² (1.5 cm × 1.5 cm) was the best size limit for distinguishing malignant or granulomatous lymphadenopathy from other causes of lymphadenopathy. Patients with node(s) ≤1.0 cm² should be observed after excluding infectious mononucleosis and/or toxoplasmosis unless there are symptoms and signs of an underlying systemic illness. The texture of lymph nodes may be described as soft, firm, rubbery, hard, discrete, matted, tender, movable, or fixed. Tenderness is found when the capsule is stretched during rapid enlargement, usually secondary to an inflammatory process. Some malignant diseases such as acute leukemia may produce rapid enlargement and pain in the nodes. Nodes involved by lymphoma tend to be large, discrete, symmetric, rubbery, firm, mobile, and nontender. Nodes containing metastatic cancer are often hard, nontender, and nonmovable because of fixation to surrounding tissues.
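The size thresholds quoted above reduce to a simple area calculation; a sketch with illustrative function and category names (the cutoffs come from the retrospective studies cited, not from a validated decision rule):

```python
def node_area_cm2(length_cm: float, width_cm: float) -> float:
    """Lymph node area as the product of two perpendicular diameters."""
    return length_cm * width_cm

def size_assessment(length_cm: float, width_cm: float) -> str:
    area = node_area_cm2(length_cm, width_cm)
    if area <= 1.0:
        # <=1.0 cm^2: almost always benign, nonspecific reactive causes
        return "observe"
    if area > 2.25:
        # >2.25 cm^2 (1.5 x 1.5 cm): best size limit in one study for
        # distinguishing malignant or granulomatous lymphadenopathy
        return "consider biopsy"
    return "indeterminate; correlate clinically"

size_assessment(1.0, 1.0)  # 'observe'
```

Size is only one parameter; as the text emphasizes, texture, tenderness, location, and the clinical setting carry at least as much weight.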
The coexistence of splenomegaly in the patient with lymphadenopathy implies a systemic illness such as infectious mononucleosis, lymphoma, acute or chronic leukemia, SLE, sarcoidosis, toxoplasmosis, cat-scratch disease, or other less common hematologic disorders. The patient’s story should provide helpful clues about the underlying systemic illness. Nonsuperficial presentations (thoracic or abdominal) of adenopathy are usually detected as the result of a symptom-directed diagnostic workup. Thoracic adenopathy may be detected by routine chest radiography or during the workup for superficial adenopathy. It may also be found because the patient complains of a cough or wheezing from airway compression; hoarseness from recurrent laryngeal nerve involvement; dysphagia from esophageal compression; or swelling of the neck, face, or arms secondary to compression of the superior vena cava or subclavian vein. The differential diagnosis of mediastinal and hilar adenopathy includes primary lung disorders and systemic illnesses that characteristically involve mediastinal or hilar nodes. In the young, mediastinal adenopathy is associated with infectious mononucleosis and sarcoidosis. In endemic regions, histoplasmosis can cause unilateral paratracheal lymph node involvement that mimics lymphoma. Tuberculosis can also cause unilateral adenopathy. In older patients, the differential diagnosis includes primary lung cancer (especially among smokers), lymphomas, metastatic carcinoma (usually lung), tuberculosis, fungal infection, and sarcoidosis. Enlarged intraabdominal or retroperitoneal nodes are usually malignant. Although tuberculosis may present as mesenteric lymphadenitis, these masses usually contain lymphomas or, in young men, germ cell tumors. The laboratory investigation of patients with lymphadenopathy must be tailored to elucidate the etiology suspected from the patient’s history and physical findings. 
One study from a family practice clinic evaluated 249 younger patients with “enlarged lymph nodes, not infected” or “lymphadenitis.” No laboratory studies were obtained in 51%. When studies were performed, the most common were a complete blood count (CBC) (33%), throat culture (16%), chest x-ray (12%), or monospot test (10%). Only eight patients (3%) had a node biopsy, and half of those were normal or reactive. The CBC can provide useful data for the diagnosis of acute or chronic leukemias, EBV or CMV mononucleosis, lymphoma with a leukemic component, pyogenic infections, or immune cytopenias in illnesses such as SLE. Serologic studies may demonstrate antibodies specific to components of EBV, CMV, HIV, and other viruses; Toxoplasma gondii; Brucella; and so on. If SLE is suspected, antinuclear and anti-DNA antibody studies are warranted. The chest x-ray is usually negative, but the presence of a pulmonary infiltrate or mediastinal lymphadenopathy would suggest tuberculosis, histoplasmosis, sarcoidosis, lymphoma, primary lung cancer, or metastatic cancer and demands further investigation. A variety of imaging techniques (computed tomography [CT], magnetic resonance imaging [MRI], ultrasound, color Doppler ultrasonography) have been used to differentiate benign from malignant lymph nodes, especially in patients with head and neck cancer. CT and MRI are comparably accurate (65–90%) in the diagnosis of metastases to cervical lymph nodes. Ultrasonography has been used to determine the long axis, short axis, and a ratio of long to short (L/S) axis in cervical nodes. An L/S ratio of <2.0 has a sensitivity and a specificity of 95% for distinguishing benign and malignant nodes in patients with head and neck cancer. This ratio has greater specificity and sensitivity than palpation or measurement of either the long or the short axis alone. The indications for lymph node biopsy are imprecise, yet it is a valuable diagnostic tool. 
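The sonographic long-to-short axis criterion above is a one-line ratio; a minimal sketch (function names are illustrative):

```python
def ls_ratio(long_axis: float, short_axis: float) -> float:
    """Long-to-short (L/S) axis ratio of a cervical node on ultrasound."""
    return long_axis / short_axis

def sonographically_suspicious(long_axis: float, short_axis: float) -> bool:
    # In the head and neck cancer studies cited, an L/S ratio < 2.0 (a
    # rounder node) distinguished malignant from benign nodes with ~95%
    # sensitivity and specificity.
    return ls_ratio(long_axis, short_axis) < 2.0

sonographically_suspicious(1.5, 1.2)  # True: ratio 1.25, a rounder node
```

Note the reported accuracy applies to the head and neck cancer population studied; the threshold is not a general-purpose rule.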
The decision to biopsy may be made early in a patient’s evaluation or delayed for up to 2 weeks. Prompt biopsy should occur if the patient’s history and physical findings suggest a malignancy; examples include a solitary, hard, nontender cervical node in an older patient who is a chronic user of tobacco; supraclavicular adenopathy; and solitary or generalized adenopathy that is firm, movable, and suggestive of lymphoma. If a primary head and neck cancer is suspected as the basis of a solitary, hard cervical node, then a careful ENT examination should be performed. Any mucosal lesion that is suspicious for a primary neoplastic process should be biopsied first. If no mucosal lesion is detected, an excisional biopsy of the largest node should be performed. Fine-needle aspiration should not be performed as the first diagnostic procedure. Most diagnoses require more tissue than such aspiration can provide, and it often delays a definitive diagnosis. Fine-needle aspiration should be reserved for thyroid nodules and for confirmation of relapse in patients whose primary diagnosis is known. If the primary physician is uncertain about whether to proceed to biopsy, consultation with a hematologist or medical oncologist should be helpful. In primary care practices, <5% of lymphadenopathy patients will require a biopsy. That percentage will be considerably larger in referral practices, i.e., hematology, oncology, or ENT. Two groups have reported algorithms that they claim will identify more precisely those lymphadenopathy patients who should have a biopsy. Both reports were retrospective analyses in referral practices. The first study involved patients 9–25 years of age who had a node biopsy performed. Three variables were identified that predicted those young patients with peripheral lymphadenopathy who should undergo biopsy: lymph node size >2 cm in diameter and abnormal chest x-ray had positive predictive values, whereas recent ENT symptoms had negative predictive values.
The second study evaluated 220 lymphadenopathy patients in a hematology unit and identified five variables (lymph node size, location [supraclavicular or nonsupraclavicular], age [>40 years or <40 years], texture [nonhard or hard], and tenderness) that were used in a mathematical model to identify patients requiring a biopsy. Positive predictive value was found for age >40 years, supraclavicular location, node size >2.25 cm², hard texture, and lack of pain or tenderness. Negative predictive value was evident for age <40 years, node size <1.0 cm², nonhard texture, and tender or painful nodes. Ninety-one percent of those who required biopsy were correctly classified by this model. Because both of these studies were retrospective analyses and one was limited to young patients, it is not known how useful these models would be if applied prospectively in a primary care setting. Most lymphadenopathy patients do not require a biopsy, and at least half require no laboratory studies. If the patient’s history and physical findings point to a benign cause for lymphadenopathy, careful follow-up at a 2- to 4-week interval can be used. The patient should be instructed to return for reevaluation if there is an increase in the size of the nodes. Antibiotics are not indicated for lymphadenopathy unless strong evidence of a bacterial infection is present. Glucocorticoids should not be used to treat lymphadenopathy because their lympholytic effect obscures some diagnoses (lymphoma, leukemia, Castleman’s disease), and they contribute to delayed healing or activation of underlying infections. An exception to this statement is the life-threatening pharyngeal obstruction by enlarged lymphoid tissue in Waldeyer’s ring that is occasionally seen in infectious mononucleosis.

The spleen is a reticuloendothelial organ that has its embryologic origin in the dorsal mesogastrium at about 5 weeks of gestation.
It arises in a series of hillocks, migrates to its normal adult location in the left upper quadrant (LUQ), and is attached to the stomach via the gastrolienal ligament and to the kidney via the lienorenal ligament. When the hillocks fail to unify into a single tissue mass, accessory spleens may develop in around 20% of persons. The function of the spleen has been elusive. Galen believed it was the source of “black bile” or melancholia, and the word hypochondria (literally, beneath the ribs) and the idiom “to vent one’s spleen” attest to the beliefs that the spleen had an important influence on the psyche and emotions. In humans, its normal physiologic roles seem to be the following:
1. Maintenance of quality control over erythrocytes in the red pulp by removal of senescent and defective red blood cells. The spleen accomplishes this function through a unique organization of its parenchyma and vasculature (Fig. 79-1).
2. Synthesis of antibodies in the white pulp.
3. The removal of antibody-coated bacteria and antibody-coated blood cells from the circulation.
An increase in these normal functions may result in splenomegaly. The spleen is composed of red pulp and white pulp, which are Malpighi’s terms for the red blood–filled sinuses and reticuloendothelial cell–lined cords and the white lymphoid follicles arrayed within the red pulp matrix.

Figure 79-1 Schematic spleen structure. The spleen comprises many units of red and white pulp centered around small branches of the splenic artery, called central arteries. White pulp is lymphoid in nature and contains B cell follicles (including secondary follicles with germinal centers), a marginal zone around the follicles, and T cell–rich areas sheathing arterioles. The red pulp areas include pulp sinuses and pulp cords. The cords are dead ends. In order to regain access to the circulation, red blood cells must traverse tiny openings in the sinusoidal lining. Stiff, damaged, or old red cells cannot enter the sinuses. RE, reticuloendothelial. (Bottom portion of figure from RS Hillman, KA Ault: Hematology in Clinical Practice, 4th ed. New York, McGraw-Hill, 2005.)

The spleen is in the portal circulation. The reason for this is unknown but may relate to the fact that lower blood pressure allows less rapid flow and minimizes damage to normal erythrocytes. Blood flows into the spleen at a rate of about 150 mL/min through the splenic artery, which ultimately ramifies into central arterioles. Some blood goes from the arterioles to capillaries and then to splenic veins and out of the spleen, but the majority of blood from central arterioles flows into the macrophage-lined sinuses and cords. The blood entering the sinuses reenters the circulation through the splenic venules, but the blood entering the cords is subjected to an inspection of sorts. To return to the circulation, the blood cells in the cords must squeeze through slits in the cord lining to enter the sinuses that lead to the venules. Old and damaged erythrocytes are less deformable and are retained in the cords, where they are destroyed and their components recycled. Red cell–inclusion bodies such as parasites (Chaps. 248 and 250e), nuclear residua (Howell-Jolly bodies, see Fig. 77-6), or denatured hemoglobin (Heinz bodies) are pinched off in the process of passing through the slits, a process called pitting. The culling of dead and damaged cells and the pitting of cells with inclusions appear to occur without significant delay because the blood transit time through the spleen is only slightly slower than in other organs. The spleen is also capable of assisting the host in adapting to its hostile environment.
It has at least three adaptive functions: (1) clearance of bacteria and particulates from the blood, (2) the generation of immune responses to certain pathogens, and (3) the generation of cellular components of the blood under circumstances in which the marrow is unable to meet the needs (i.e., extramedullary hematopoiesis). The latter adaptation is a recapitulation of the blood-forming function the spleen plays during gestation. In some animals, the spleen also serves a role in the vascular adaptation to stress because it stores red blood cells (often hemoconcentrated to higher hematocrits than normal) under normal circumstances and contracts under the influence of β-adrenergic stimulation to provide the animal with an autotransfusion and improved oxygen-carrying capacity. However, the normal human spleen does not sequester or store red blood cells and does not contract in response to sympathetic stimuli. The normal human spleen contains approximately one-third of the total body platelets and a significant number of marginated neutrophils. These sequestered cells are available when needed to respond to bleeding or infection.

APPROACH TO THE PATIENT: Splenomegaly

The most common symptoms produced by diseases involving the spleen are pain and a heavy sensation in the LUQ. Massive splenomegaly may cause early satiety. Pain may result from acute swelling of the spleen with stretching of the capsule, infarction, or inflammation of the capsule. For many years, it was believed that splenic infarction was clinically silent, which, at times, is true. However, Soma Weiss, in his classic 1942 report of the self-observations by a Harvard medical student on the clinical course of subacute bacterial endocarditis, documented that severe LUQ and pleuritic chest pain may accompany thromboembolic occlusion of splenic blood flow. Vascular occlusion, with infarction and pain, is commonly seen in children with sickle cell crises.
Rupture of the spleen, from either trauma or infiltrative disease that breaks the capsule, may result in intraperitoneal bleeding, shock, and death. The rupture itself may be painless. A palpable spleen is the major physical sign produced by diseases affecting the spleen and suggests enlargement of the organ. The normal spleen weighs <250 g, decreases in size with age, normally lies entirely within the rib cage, has a maximum cephalocaudad diameter of 13 cm by ultrasonography or a maximum length of 12 cm and/or width of 7 cm by radionuclide scan, and is usually not palpable. However, a palpable spleen was found in 3% of 2200 asymptomatic, male, freshman college students. Follow-up at 3 years revealed that 30% of those students still had a palpable spleen without any increase in disease prevalence. Ten-year follow-up found no evidence for lymphoid malignancies. Furthermore, in some tropical countries (e.g., New Guinea), the incidence of splenomegaly may reach 60%. Thus, the presence of a palpable spleen does not always equate with the presence of disease. Even when disease is present, splenomegaly may not reflect the primary disease but rather a reaction to it. For example, in patients with Hodgkin’s disease, only two-thirds of palpable spleens show involvement by the cancer. Physical examination of the spleen uses primarily the techniques of palpation and percussion. Inspection may reveal fullness in the LUQ that descends on inspiration, a finding associated with a massively enlarged spleen. Auscultation may reveal a venous hum or friction rub. Palpation can be accomplished by bimanual palpation, ballottement, and palpation from above (Middleton maneuver). For bimanual palpation, which is at least as reliable as the other techniques, the patient is supine with flexed knees.
The examiner’s left hand is placed on the lower rib cage and pulls the skin toward the costal margin, allowing the fingertips of the right hand to feel the tip of the spleen as it descends while the patient inspires slowly, smoothly, and deeply. Palpation is begun with the right hand in the left lower quadrant with gradual movement toward the left costal margin, thereby identifying the lower edge of a massively enlarged spleen. When the spleen tip is felt, the finding is recorded as centimeters below the left costal margin at some arbitrary point, i.e., 10–15 cm, from the midpoint of the umbilicus or the xiphisternal junction. This allows other examiners to compare findings or the initial examiner to determine changes in size over time. Bimanual palpation in the right lateral decubitus position adds nothing to the supine examination. Percussion for splenic dullness is accomplished with any of three techniques described by Nixon, Castell, or Barkun:

1. Nixon’s method: The patient is placed on the right side so that the spleen lies above the colon and stomach. Percussion begins at the lower level of pulmonary resonance in the posterior axillary line and proceeds diagonally along a perpendicular line toward the lower midanterior costal margin. The upper border of dullness is normally 6–8 cm above the costal margin. Dullness >8 cm in an adult is presumed to indicate splenic enlargement.

2. Castell’s method: With the patient supine, percussion in the lowest intercostal space in the anterior axillary line (eighth or ninth) produces a resonant note if the spleen is normal in size. This is true during expiration or full inspiration. A dull percussion note on full inspiration suggests splenomegaly.

3. Percussion of Traube’s semilunar space: The borders of Traube’s space are the sixth rib superiorly, the left midaxillary line laterally, and the left costal margin inferiorly. The patient is supine with the left arm slightly abducted.
During normal breathing, this space is percussed from medial to lateral margins, yielding a normal resonant sound. A dull percussion note suggests splenomegaly. Studies comparing methods of percussion and palpation with a standard of ultrasonography or scintigraphy have revealed sensitivity of 56–71% for palpation and 59–82% for percussion. Reproducibility among examiners is better for palpation than percussion. Both techniques are less reliable in obese patients or patients who have just eaten. Thus, the physical examination techniques of palpation and percussion are imprecise at best. It has been suggested that the examiner perform percussion first and, if positive, proceed to palpation; if the spleen is palpable, then one can be reasonably confident that splenomegaly exists. However, not all LUQ masses are enlarged spleens; gastric or colon tumors and pancreatic or renal cysts or tumors can mimic splenomegaly. The presence of an enlarged spleen can be more precisely determined, if necessary, by liver-spleen radionuclide scan, CT, MRI, or ultrasonography. The latter technique is the current procedure of choice for routine assessment of spleen size (normal = a maximum cephalocaudad diameter of 13 cm) because it has high sensitivity and specificity and is safe, noninvasive, quick, mobile, and less costly. Nuclear medicine scans are accurate, sensitive, and reliable but are costly, require greater time to generate data, and use immobile equipment. They have the advantage of demonstrating accessory splenic tissue. CT and MRI provide accurate determination of spleen size, but the equipment is immobile and the procedures are expensive. MRI appears to offer no advantage over CT. Changes in spleen structure such as mass lesions, infarcts, inhomogeneous infiltrates, and cysts are more readily assessed by CT, MRI, or ultrasonography. None of these techniques is very reliable in the detection of patchy infiltration (e.g., Hodgkin’s disease). 
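The serial-testing logic suggested above (percussion first and, if positive, palpation) can be made concrete with a Bayesian sketch. The sensitivities below are mid-range values from the studies quoted in the text; the specificities of 0.90, the 10% pretest probability, and the assumption that the two tests are independent are illustrative assumptions, not figures from this chapter.

```python
def post_test_probability(prior, sensitivity, specificity):
    """Probability of disease after a positive test result (Bayes' rule)."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Sensitivities: mid-range values from the text (percussion 59-82%,
# palpation 56-71%). Specificities and pretest probability are assumed.
prior = 0.10
after_percussion = post_test_probability(prior, 0.70, 0.90)
after_palpation = post_test_probability(after_percussion, 0.65, 0.90)
print(f"after positive percussion: {after_percussion:.2f}")  # ~0.44
print(f"after positive palpation:  {after_palpation:.2f}")   # ~0.83
```

Under these assumptions, two concordant positive examinations raise a 10% prior to roughly 80%, which is the quantitative sense in which one "can be reasonably confident that splenomegaly exists."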
Many of the diseases associated with splenomegaly are listed in Table 79-2. They are grouped according to the presumed basic mechanisms responsible for organ enlargement:

1. Hyperplasia or hypertrophy related to a particular splenic function, such as reticuloendothelial hyperplasia (work hypertrophy) in diseases such as hereditary spherocytosis or the thalassemia syndromes that require removal of large numbers of defective red blood cells, or immune hyperplasia in response to systemic infection (infectious mononucleosis, subacute bacterial endocarditis) or to immunologic diseases (immune thrombocytopenia, SLE, Felty’s syndrome).

2. Passive congestion due to decreased blood flow from the spleen in conditions that produce portal hypertension (cirrhosis, Budd-Chiari syndrome, congestive heart failure).

3. Infiltrative diseases of the spleen (lymphomas, metastatic cancer, amyloidosis, Gaucher’s disease, myeloproliferative disorders with extramedullary hematopoiesis).

The differential diagnostic possibilities are much fewer when the spleen is “massively enlarged,” i.e., palpable more than 8 cm below the left costal margin or having a drained weight of ≥1000 g (Table 79-3). The vast majority of such patients will have non-Hodgkin’s lymphoma, chronic lymphocytic leukemia, hairy cell leukemia, chronic myeloid leukemia, myelofibrosis with myeloid metaplasia, or polycythemia vera. The major laboratory abnormalities accompanying splenomegaly are determined by the underlying systemic illness. Erythrocyte counts may be normal, decreased (thalassemia major syndromes, SLE, cirrhosis with portal hypertension), or increased (polycythemia vera). Granulocyte counts may be normal, decreased (Felty’s syndrome, congestive splenomegaly, leukemias), or increased (infections or inflammatory disease, myeloproliferative disorders).
Similarly, the platelet count may be normal, decreased when there is enhanced sequestration or destruction of platelets in an enlarged spleen (congestive splenomegaly, Gaucher’s disease, immune thrombocytopenia), or increased in the myeloproliferative disorders such as polycythemia vera. The CBC may reveal cytopenia of one or more blood cell types, which should suggest hypersplenism. This condition is characterized by splenomegaly, cytopenia(s), normal or hyperplastic bone marrow, and a response to splenectomy. The latter characteristic is less precise because reversal of cytopenia, particularly granulocytopenia, is sometimes not sustained after splenectomy. The cytopenias result from increased destruction of the cellular elements secondary to reduced flow of blood through enlarged and congested cords (congestive splenomegaly) or to immune-mediated mechanisms. In hypersplenism, various cell types usually have normal morphology on the peripheral blood smear, although the red cells may be spherocytic due to loss of surface area during their longer transit through the enlarged spleen. The increased marrow production of red cells should be reflected as an increased reticulocyte production index, although the value may be less than expected due to increased sequestration of reticulocytes in the spleen. The need for additional laboratory studies is dictated by the differential diagnosis of the underlying illness of which splenomegaly is a manifestation. Splenectomy is infrequently performed for diagnostic purposes, especially in the absence of clinical illness or other diagnostic tests that suggest underlying disease. More often, splenectomy is performed for symptom control in patients with massive splenomegaly, for disease control in patients with traumatic splenic rupture, or for correction of cytopenias in patients with hypersplenism or immune-mediated destruction of one or more cellular blood elements. 
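The reticulocyte production index mentioned above corrects the raw reticulocyte percentage for the degree of anemia and for the prolonged peripheral maturation of prematurely released reticulocytes. A minimal sketch using the conventional stepwise maturation factors, which are standard hematology values rather than figures given in this chapter:

```python
def reticulocyte_production_index(retic_pct, hematocrit, normal_hct=45.0):
    """Standard RPI: reticulocyte % corrected for anemia and for the
    longer peripheral maturation time of shift reticulocytes.
    Maturation factors are the conventional stepwise values (assumed)."""
    corrected = retic_pct * hematocrit / normal_hct
    if hematocrit >= 40:
        maturation = 1.0
    elif hematocrit >= 30:
        maturation = 1.5
    elif hematocrit >= 20:
        maturation = 2.0
    else:
        maturation = 2.5
    return corrected / maturation

# Illustrative patient: reticulocytes 6%, hematocrit 30
# -> corrected count 4.0, RPI ~2.7 (an adequate marrow response)
print(round(reticulocyte_production_index(6.0, 30.0), 1))
```

As the text notes, splenic sequestration of reticulocytes in an enlarged spleen can make the measured index lower than this computation would predict.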
Splenectomy is necessary for staging of patients with Hodgkin’s disease only in those with clinical stage I or II disease in whom radiation therapy alone is contemplated as the treatment. Noninvasive staging of the spleen in Hodgkin’s disease is not a sufficiently reliable basis for treatment decisions because one-third of normal-sized spleens will be involved with Hodgkin’s disease and one-third of enlarged spleens will be tumor-free. The widespread use of systemic therapy to treat all stages of Hodgkin’s disease has made staging laparotomy with splenectomy unnecessary. Although splenectomy in chronic myeloid leukemia (CML) does not affect the natural history of disease, removal of the massive spleen usually makes patients significantly more comfortable and simplifies their management by significantly reducing transfusion requirements. The improvements in therapy of CML have reduced the need for splenectomy for symptom control. Splenectomy is an effective secondary or tertiary treatment for two chronic B cell leukemias, hairy cell leukemia and prolymphocytic leukemia, and for the very rare splenic mantle cell or marginal zone lymphoma.

CHAPTER 79 Enlargement of Lymph Nodes and Spleen

TABLE 79-2 (diseases associated with splenomegaly, grouped by mechanism; fragments): Enlargement Due to Increased Demand for Splenic Function: reticuloendothelial system hyperplasia (for removal of defective erythrocytes); response to infection (viral, bacterial, fungal, parasitic), e.g., malaria. Enlargement Due to Abnormal Splenic or Portal Blood Flow. Extramedullary hematopoiesis: myelofibrosis; marrow damage by toxins, radiation, strontium; marrow infiltration by tumors, leukemias, Gaucher’s disease. Leukemias (acute, chronic, lymphoid, myeloid, monocytic); myeloproliferative syndromes (e.g., polycythemia vera); metastatic tumors (melanoma is most common); hemangiomas, fibromas, lymphangiomas. aThe spleen extends >8 cm below the left costal margin and/or weighs >1000 g.
Splenectomy in these diseases may be associated with significant tumor regression in bone marrow and other sites of disease. Similar regressions of systemic disease have been noted after splenic irradiation in some types of lymphoid tumors, especially chronic lymphocytic leukemia and prolymphocytic leukemia. This has been termed the abscopal effect. Such systemic tumor responses to local therapy directed at the spleen suggest that some hormone or growth factor produced by the spleen may affect tumor cell proliferation, but this conjecture is not yet substantiated. A common therapeutic indication for splenectomy is traumatic or iatrogenic splenic rupture. In a fraction of patients with splenic rupture, peritoneal seeding of splenic fragments can lead to splenosis—the presence of multiple rests of spleen tissue not connected to the portal circulation. This ectopic spleen tissue may cause pain or gastrointestinal obstruction, as in endometriosis. A large number of hematologic, immunologic, and congestive causes of splenomegaly can lead to destruction of one or more cellular blood elements. In the majority of such cases, splenectomy can correct the cytopenias, particularly anemia and thrombocytopenia. In a large series of patients seen in two tertiary care centers, the indication for splenectomy was diagnostic in 10% of patients, therapeutic in 44%, staging for Hodgkin’s disease in 20%, and incidental to another procedure in 26%. Perhaps the only contraindication to splenectomy is the presence of marrow failure, in which the enlarged spleen is the only source of hematopoietic tissue. The absence of the spleen has minimal long-term effects on the hematologic profile. In the immediate postsplenectomy period, leukocytosis (up to 25,000/μL) and thrombocytosis (up to 1 × 10^6/μL) may develop, but within 2–3 weeks, blood cell counts and survival of each cell lineage are usually normal.
The chronic manifestations of splenectomy are marked variation in size and shape of erythrocytes (anisocytosis, poikilocytosis) and the presence of Howell-Jolly bodies (nuclear remnants), Heinz bodies (denatured hemoglobin), basophilic stippling, and an occasional nucleated erythrocyte in the peripheral blood. When such erythrocyte abnormalities appear in a patient whose spleen has not been removed, one should suspect splenic infiltration by tumor that has interfered with its normal culling and pitting function. The most serious consequence of splenectomy is increased susceptibility to bacterial infections, particularly those caused by encapsulated organisms such as Streptococcus pneumoniae and Haemophilus influenzae and by some gram-negative enteric organisms. Patients under age 20 years are particularly susceptible to overwhelming sepsis with S. pneumoniae, and the overall actuarial risk of sepsis in patients who have had their spleens removed is about 7% in 10 years. The case–fatality rate for pneumococcal sepsis in splenectomized patients is 50–80%. About 25% of patients without spleens will develop a serious infection at some time in their life. The frequency is highest within the first 3 years after splenectomy. About 15% of the infections are polymicrobial, and lung, skin, and blood are the most common sites. No increased risk of viral infection has been noted in patients who have no spleen. The susceptibility to bacterial infections relates to the inability to remove opsonized bacteria from the bloodstream and a defect in making antibodies to T cell–independent antigens such as the polysaccharide components of bacterial capsules. Pneumococcal vaccine should be administered to all patients 2 weeks before elective splenectomy. The Advisory Committee on Immunization Practices recommends that these patients receive repeat vaccination 5 years after splenectomy.
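For a sense of scale, the roughly 7% ten-year actuarial sepsis risk quoted above can be converted to an approximate annual figure. The assumption of a constant yearly hazard is purely illustrative; as the text notes, the risk is actually front-loaded into the first 3 years after splenectomy.

```python
# Convert a 10-year cumulative risk to an equivalent constant annual risk
# (illustrative assumption only; the true hazard is not constant).
ten_year_risk = 0.07
annual_risk = 1 - (1 - ten_year_risk) ** (1 / 10)
print(f"approximate annual risk: {annual_risk:.4%}")  # ~0.72% per year
```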
Efficacy has not been proven for this group, and the recommendation discounts the possibility that administration of the vaccine may actually lower the titer of specific pneumococcal antibodies. A more effective pneumococcal conjugate vaccine that involves T cells in the response is now available (Prevenar, 7-valent). The vaccine against Neisseria meningitidis should also be given to patients in whom elective splenectomy is planned. Although efficacy data for H. influenzae type b vaccine are not available for older children or adults, it may be given to patients who have had a splenectomy. Splenectomized patients should be educated to consider any unexplained fever as a medical emergency. Prompt medical attention with evaluation and treatment of suspected bacteremia may be life-saving. Routine chemoprophylaxis with oral penicillin can result in the emergence of drug-resistant strains and is not recommended. In addition to an increased susceptibility to bacterial infections, splenectomized patients are also more susceptible to the parasitic disease babesiosis. The splenectomized patient should avoid areas where the parasite Babesia is endemic (e.g., Cape Cod, MA). Surgical removal of the spleen is an obvious cause of hyposplenism. Patients with sickle cell disease often suffer from autosplenectomy as a result of splenic destruction by the numerous infarcts associated with sickle cell crises during childhood. Indeed, the presence of a palpable spleen in a patient with sickle cell disease after age 5 suggests a coexisting hemoglobinopathy, e.g., thalassemia or hemoglobin C. In addition, patients who receive splenic irradiation for a neoplastic or autoimmune disease are also functionally hyposplenic.
The term hyposplenism is preferred to asplenism in referring to the physiologic consequences of splenectomy because asplenia is a rare, specific, and fatal congenital abnormality in which there is a failure of the left side of the coelomic cavity (which includes the splenic anlagen) to develop normally. Infants with asplenia have no spleens, but that is the least of their problems. The right side of the developing embryo is duplicated on the left, so there is liver where the spleen should be, there are two right lungs, and the heart comprises two right atria and two right ventricles.

Chapter 80 Disorders of Granulocytes and Monocytes
Steven M. Holland, John I. Gallin

Leukocytes, the major cells comprising inflammatory and immune responses, include neutrophils, T and B lymphocytes, natural killer (NK) cells, monocytes, eosinophils, and basophils. These cells have specific functions, such as antibody production by B lymphocytes or destruction of bacteria by neutrophils, but in no single infectious disease is the exact role of each cell type completely established. Thus, whereas neutrophils are classically thought to be critical to host defense against bacteria, they may also play important roles in defense against viral infections. The blood delivers leukocytes to the various tissues from the bone marrow, where they are produced. Normal blood leukocyte counts are 4.3–10.8 × 10^9/L, with neutrophils representing 45–74% of the cells, bands 0–4%, lymphocytes 16–45%, monocytes 4–10%, eosinophils 0–7%, and basophils 0–2%. Variation among individuals and among different ethnic groups can be substantial, with lower leukocyte numbers for certain African-American ethnic groups. The various leukocytes are derived from a common stem cell in the bone marrow. Three-fourths of the nucleated cells of bone marrow are committed to the production of leukocytes.
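The normal ranges above are differential percentages; clinical decisions usually rest on absolute counts. A minimal sketch of the conventional conversion, where the WBC value and the differential percentages chosen are illustrative mid-range numbers, not patient data from the text:

```python
def absolute_count(wbc_per_uL, percent):
    """Absolute count of a leukocyte subset from the total WBC count
    and its differential percentage."""
    return wbc_per_uL * percent / 100.0

# Illustrative values within the normal ranges quoted in the text:
# WBC 4.3-10.8 x 10^9/L (i.e., 4300-10,800/uL), neutrophils 45-74%,
# bands 0-4%. The convention of summing segs + bands for the absolute
# neutrophil count (ANC) is standard practice, assumed here.
wbc = 7500                                              # cells/uL
anc = absolute_count(wbc, 60) + absolute_count(wbc, 2)  # segs + bands
print(f"absolute neutrophil count: {anc:.0f}/uL")       # 4650/uL
```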
Leukocyte maturation in the marrow is under the regulatory control of a number of different factors, known as colony-stimulating factors (CSFs) and interleukins (ILs). Because an alteration in the number and type of leukocytes is often associated with disease processes, total white blood cell (WBC) count (cells per μL) and differential counts are informative. This chapter focuses on neutrophils, monocytes, and eosinophils. Lymphocytes and basophils are discussed in Chaps. 372e and 376, respectively.

Important events in neutrophil life are summarized in Fig. 80-1. In normal humans, neutrophils are produced only in the bone marrow. The minimum number of stem cells necessary to support hematopoiesis is estimated to be 400–500 at any one time. Human blood monocytes, tissue macrophages, and stromal cells produce CSFs, hormones required for the growth of monocytes and neutrophils in the bone marrow. The hematopoietic system not only produces enough neutrophils (~1.3 × 10^11 cells per 80-kg person per day) to carry out physiologic functions but also has a large reserve stored in the marrow, which can be mobilized in response to inflammation or infection. An increase in the number of blood neutrophils is called neutrophilia, and the presence of immature cells is termed a shift to the left. A decrease in the number of blood neutrophils is called neutropenia. Neutrophils and monocytes evolve from pluripotent stem cells under the influence of cytokines and CSFs (Fig. 80-2).
The proliferation phase through the metamyelocyte takes about 1 week, while the maturation phase from metamyelocyte to mature neutrophil takes another week.

FIGURE 80-1 Schematic events in neutrophil production, recruitment, and inflammation. The four cardinal signs of inflammation (rubor, tumor, calor, dolor) are indicated, as are the interactions of neutrophils with other cells and cytokines. G-CSF, granulocyte colony-stimulating factor; IL, interleukin; PMN, polymorphonuclear leukocyte; TNF-α, tumor necrosis factor α.

FIGURE 80-2 Stages of neutrophil development shown schematically. Granulocyte colony-stimulating factor (G-CSF) and granulocyte-macrophage colony-stimulating factor (GM-CSF) are critical to this process. Identifying cellular characteristics and specific cell-surface markers are listed for each maturational stage.

The myeloblast is the first recognizable precursor cell and is followed by the promyelocyte. The promyelocyte evolves when the classic lysosomal granules, called the primary, or azurophil, granules, are produced. The primary granules contain hydrolases, elastase, myeloperoxidase, cathepsin G, cationic proteins, and bactericidal/permeability-increasing protein, which is important for killing gram-negative bacteria.
Azurophil granules also contain defensins, a family of cysteine-rich polypeptides with broad antimicrobial activity against bacteria, fungi, and certain enveloped viruses. The promyelocyte divides to produce the myelocyte, a cell responsible for the synthesis of the specific, or secondary, granules, which contain unique (specific) constituents such as lactoferrin, vitamin B12–binding protein, membrane components of the reduced nicotinamide-adenine dinucleotide phosphate (NADPH) oxidase required for hydrogen peroxide production, histaminase, and receptors for certain chemoattractants and adherence-promoting factors (CR3) as well as receptors for the basement membrane component laminin. The secondary granules do not contain acid hydrolases and therefore are not classic lysosomes. Packaging of secondary granule contents during myelopoiesis is controlled by CCAAT/enhancer binding protein-ε. Secondary granule contents are readily released extracellularly, and their mobilization is important in modulating inflammation. During the final stages of maturation, no cell division occurs, and the cell passes through the metamyelocyte stage and then to the band neutrophil with a sausage-shaped nucleus (Fig. 80-3). As the band cell matures, the nucleus assumes a lobulated configuration. The nucleus of neutrophils normally contains up to four segments (Fig. 80-4). Excessive segmentation (more than five nuclear lobes) may be a manifestation of folate or vitamin B12 deficiency or of the congenital neutropenia syndrome of warts, hypogammaglobulinemia, infections, and myelokathexis (WHIM) described below. The Pelger-Huët anomaly (Fig. 80-5), an infrequent dominant benign inherited trait, results in neutrophils with distinctive bilobed nuclei that must be distinguished from band forms. Acquired bilobed nuclei, pseudo–Pelger-Huët anomaly, can occur with acute infections or in myelodysplastic syndromes.
The physiologic role of the normal multilobed nucleus of neutrophils is unknown, but it may allow great deformation of neutrophils during migration into tissues at sites of inflammation. In severe acute bacterial infection, prominent neutrophil cytoplasmic granules, called toxic granulations, are occasionally seen. Toxic granulations are immature or abnormally staining azurophil granules. Cytoplasmic inclusions, also called Döhle bodies (Fig. 80-3), can be seen during infection and are fragments of ribosome-rich endoplasmic reticulum. Large neutrophil vacuoles are often present in acute bacterial infection and probably represent pinocytosed (internalized) membrane.

FIGURE 80-3 Neutrophil band with Döhle body. The neutrophil with a sausage-shaped nucleus in the center of the field is a band form. Döhle bodies are discrete, blue-staining, nongranular areas found in the periphery of the cytoplasm of the neutrophil in infections and other toxic states. They represent aggregates of rough endoplasmic reticulum.

FIGURE 80-4 Normal granulocyte. The normal granulocyte has a segmented nucleus with heavy, clumped chromatin; fine neutrophilic granules are dispersed throughout the cytoplasm.

Neutrophils are heterogeneous in function. Monoclonal antibodies have been developed that recognize only a subset of mature neutrophils. The meaning of neutrophil heterogeneity is not known. The morphology of eosinophils and basophils is shown in Fig. 80-6. Specific signals, including IL-1, tumor necrosis factor α (TNF-α), the CSFs, complement fragments, and chemokines, mobilize leukocytes from the bone marrow and deliver them to the blood in an unstimulated state. Under normal conditions, ~90% of the neutrophil pool is in the bone marrow, 2–3% in the circulation, and the remainder in the tissues (Fig. 80-7). The circulating pool exists in two dynamic compartments: one freely flowing and one marginated.
The freely flowing pool is about one-half the neutrophils in the basal state and is composed of those cells that are in the blood and not in contact with the endothelium. Marginated leukocytes are those that are in close physical contact with the endothelium (Fig. 80-8).

FIGURE 80-5 Pelger-Huët anomaly. In this benign disorder, the majority of granulocytes are bilobed. The nucleus frequently has a spectacle-like, or “pince-nez,” configuration.

FIGURE 80-6 Normal eosinophil (left) and basophil (right). The eosinophil contains large, bright orange granules and usually a bilobed nucleus. The basophil contains large purple-black granules that fill the cell and obscure the nucleus.

In the pulmonary circulation, where an extensive capillary bed (~1000 capillaries per alveolus) exists, margination occurs because the capillaries are about the same size as a mature neutrophil. Therefore, neutrophil fluidity and deformability are necessary to make the transit through the pulmonary bed. Increased neutrophil rigidity and decreased deformability lead to augmented neutrophil trapping and margination in the lung. In contrast, in the systemic postcapillary venules, margination is mediated by the interaction of specific cell-surface molecules called selectins. Selectins are glycoproteins expressed on neutrophils and endothelial cells, among others, that cause a low-affinity interaction, resulting in “rolling” of the neutrophil along the endothelial surface. On neutrophils, the molecule L-selectin (cluster determinant [CD] 62L) binds to glycosylated proteins on endothelial cells (e.g., glycosylation-dependent cell adhesion molecule [GlyCAM-1] and CD34).
Glycoproteins on neutrophils, most importantly sialyl-Lewisx (SLex, CD15s), are targets for binding of selectins expressed on endothelial cells (E-selectin [CD62E] and P-selectin [CD62P]) and other leukocytes.

FIGURE 80-7 Schematic neutrophil distribution and kinetics between the different anatomic and functional pools.

In response to chemotactic stimuli from injured tissues (e.g., complement product C5a, leukotriene B4, IL-8) or bacterial products (e.g., N-formylmethionylleucylphenylalanine [f-met-leu-phe]), neutrophil adhesiveness increases through mobilization of intracellular adhesion proteins stored in specific granules to the cell surface, and the cells “stick” to the endothelium through integrins. The integrins are leukocyte glycoproteins that exist as complexes of a common CD18 β chain with CD11a (LFA-1), CD11b (called Mac-1, CR3, or the C3bi receptor), and CD11c (called p150,95 or CR4). CD11a/CD18 and CD11b/CD18 bind to specific endothelial receptors (intercellular adhesion molecules [ICAM] 1 and 2). On cell stimulation, L-selectin is shed from neutrophils, and E-selectin increases in the blood, presumably because it is shed from endothelial cells; receptors for chemoattractants and opsonins are mobilized; and the phagocytes orient toward the chemoattractant source in the extravascular space, increase their motile activity (chemokinesis), and migrate directionally (chemotaxis) into tissues. The process of migration into tissues is called diapedesis and involves the crawling of neutrophils between postcapillary endothelial cells that open junctions between adjacent cells to permit leukocyte passage. Diapedesis involves platelet/endothelial cell adhesion molecule (PECAM) 1 (CD31), which is expressed on both the emigrating leukocyte and the endothelial cells.
The endothelial responses (increased blood flow from increased vasodilation and permeability) are mediated by anaphylatoxins (e.g., C3a and C5a) as well as vasodilators such as histamine, bradykinin, serotonin, nitric oxide, vascular endothelial growth factor (VEGF), and prostaglandins E and I. Cytokines regulate some of these processes (e.g., TNF-α induction of VEGF, interferon [IFN] γ inhibition of prostaglandin E). In the healthy adult, most neutrophils leave the body by migration through the mucous membrane of the gastrointestinal tract. Normally, neutrophils spend a short time in the circulation (half-life, 6–7 h). Senescent neutrophils are cleared from the circulation by macrophages in the lung and spleen. Once in the tissues, neutrophils release enzymes, such as collagenase and elastase, which may help establish abscess cavities. Neutrophils ingest pathogenic materials that have been opsonized by IgG and C3b. Fibronectin and the tetrapeptide tuftsin also facilitate phagocytosis. With phagocytosis comes a burst of oxygen consumption and activation of the hexose-monophosphate shunt. A membrane-associated NADPH oxidase, consisting of membrane and cytosolic components, is assembled and catalyzes the univalent reduction of oxygen to superoxide anion, which is then converted by superoxide dismutase to hydrogen peroxide and other toxic oxygen products (e.g., hydroxyl radical). Hydrogen peroxide + chloride + neutrophil myeloperoxidase generate hypochlorous acid (bleach), hypochlorite, and chlorine. These products oxidize and halogenate microorganisms and tumor cells and, when uncontrolled, can damage host tissue. Strongly cationic proteins, defensins, elastase, cathepsins, and probably nitric oxide also participate in microbial killing. Lactoferrin chelates iron, an important growth factor for microorganisms, especially fungi. Other enzymes, such as lysozyme and acid proteases, help digest microbial debris. After 1–4 days in tissues, neutrophils die. 
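The 6–7 h circulating half-life quoted above implies that a cohort of neutrophils entering the blood is almost entirely cleared within a day. A minimal sketch, assuming simple exponential disappearance with a 6.5-h midpoint half-life (the exponential model and midpoint are illustrative assumptions):

```python
def fraction_circulating(hours, half_life_h=6.5):
    """Fraction of a neutrophil cohort still circulating after `hours`,
    assuming exponential disappearance with the 6-7 h half-life quoted
    in the text (6.5 h midpoint is an assumption)."""
    return 0.5 ** (hours / half_life_h)

print(round(fraction_circulating(6.5), 2))   # 0.5 after one half-life
print(round(fraction_circulating(24.0), 3))  # ~0.077 after one day
```

This rapid turnover is why the large marrow reserve described earlier is essential for sustaining the circulating pool.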
The apoptosis of neutrophils is also cytokine-regulated; granulocyte colony-stimulating factor (G-CSF) and IFN-γ prolong their life span. Under certain conditions, such as in delayed-type hypersensitivity, monocyte accumulation occurs within 6–12 h of initiation of inflammation. Neutrophils, monocytes, microorganisms in various states of digestion, and altered local tissue cells make up the inflammatory exudate, pus. Myeloperoxidase confers the characteristic green color to pus and may participate in turning off the inflammatory process by inactivating chemoattractants and immobilizing phagocytic cells. Neutrophils respond to certain cytokines (IFN-γ, granulocyte-macrophage colony-stimulating factor [GM-CSF], IL-8) and produce cytokines and chemotactic signals (TNF-α, IL-8, macrophage inflammatory protein [MIP] 1) that modulate the inflammatory response. In the presence of fibrinogen, f-met-leu-phe or leukotriene B4 induces IL-8 production by neutrophils, providing autocrine amplification of inflammation.

FIGURE 80-8 Neutrophil travel through the pulmonary capillaries is dependent on neutrophil deformability. Neutrophil rigidity (e.g., caused by C5a) enhances pulmonary trapping and response to pulmonary pathogens in a way that is not so dependent on cell-surface receptors. Intraalveolar chemotactic factors, such as those caused by certain bacteria (e.g., Streptococcus pneumoniae), lead to diapedesis of neutrophils from the pulmonary capillaries into the alveolar space. Neutrophil interaction with the endothelium of the systemic postcapillary venules is dependent on molecules of attachment. The neutrophil "rolls" along the endothelium using selectins: neutrophil CD15s (sialyl-Lewisx) binds to CD62E (E-selectin) and CD62P (P-selectin) on endothelial cells; CD62L (L-selectin) on neutrophils binds to CD34 and other molecules (e.g., GlyCAM-1) expressed on endothelium. Chemokines or other activation factors stimulate integrin-mediated "tight adhesion": CD11a/CD18 (LFA-1) and CD11b/CD18 (Mac-1, CR3) bind to CD54 (ICAM-1) and CD102 (ICAM-2) on the endothelium. Diapedesis occurs between endothelial cells: CD31 (PECAM-1) expressed by the emigrating neutrophil interacts with CD31 expressed at the endothelial cell-cell junction. CD, cluster determinant; GlyCAM, glycosylation-dependent cell adhesion molecule; ICAM, intercellular adhesion molecule; PECAM, platelet/endothelial cell adhesion molecule.

Chemokines (chemoattractant cytokines) are small proteins produced by many different cell types, including endothelial cells, fibroblasts, epithelial cells, neutrophils, and monocytes, that regulate neutrophil, monocyte, eosinophil, and lymphocyte recruitment and activation. Chemokines transduce their signals through heterotrimeric G protein–linked receptors that have seven cell membrane–spanning domains, the same type of cell-surface receptor that mediates the response to the classic chemoattractants f-met-leu-phe and C5a. Four major groups of chemokines are recognized based on the cysteine structure near the N terminus: C, CC, CXC, and CX3C. The CXC chemokines such as IL-8 mainly attract neutrophils; CC chemokines such as MIP-1 attract lymphocytes, monocytes, eosinophils, and basophils; the C chemokine lymphotactin is T cell tropic; the CX3C chemokine fractalkine attracts neutrophils, monocytes, and T cells. These molecules and their receptors not only regulate the trafficking and activation of inflammatory cells, but specific chemokine receptors also serve as co-receptors for HIV infection (Chap. 226) and have a role in other viral infections such as West Nile virus infection and in atherogenesis.
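The four chemokine families (C, CC, CXC, CX3C) are defined purely by the spacing of the first N-terminal cysteines, so the classification can be expressed as a few lines of code. This is an illustrative sketch only; the function name and string convention are mine, not part of any standard bioinformatics API:

```python
# Illustrative sketch: derive the chemokine family from the spacing of the
# first two N-terminal cysteines, per the C / CC / CXC / CX3C scheme.
def chemokine_family(n_terminus: str) -> str:
    """n_terminus: one-letter amino acid string containing the first cysteine(s)."""
    first = n_terminus.find("C")
    second = n_terminus.find("C", first + 1)
    if second == -1:
        return "C"            # single cysteine, e.g., lymphotactin (T cell tropic)
    gap = second - first - 1  # residues between the two cysteines
    if gap == 0:
        return "CC"           # e.g., MIP-1: lymphocytes, monocytes, eosinophils, basophils
    if gap == 1:
        return "CXC"          # e.g., IL-8: mainly neutrophils
    if gap == 3:
        return "CX3C"         # fractalkine: neutrophils, monocytes, T cells
    return "unclassified"
```

For example, `chemokine_family("CXCL")` returns `"CXC"`, the IL-8 family.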
Defects in the neutrophil life cycle can lead to dysfunction and compromised host defenses. Inflammation is often depressed, and the clinical result is often recurrent, severe bacterial and fungal infections. Aphthous ulcers of mucous membranes (gray ulcers without pus), gingivitis, and periodontal disease suggest a phagocytic cell disorder. Patients with congenital phagocyte defects can have infections within the first few days of life. Skin, ear, upper and lower respiratory tract, and bone infections are common. Sepsis and meningitis are rare. In some disorders, the frequency of infection is variable, and patients can go for months or even years without major infection. Aggressive management of these congenital diseases has extended the life span of patients well beyond 30 years.

Neutropenia

The consequences of absent neutrophils are dramatic. Susceptibility to infectious diseases increases sharply when neutrophil counts fall below 1000 cells/μL. When the absolute neutrophil count (ANC; band forms and mature neutrophils combined) falls to <500 cells/μL, control of endogenous microbial flora (e.g., mouth, gut) is impaired; when the ANC is <200/μL, the local inflammatory process is absent. Neutropenia can be due to depressed production, increased peripheral destruction, or excessive peripheral pooling. A falling neutrophil count or a significant decrease in the number of neutrophils below steady-state levels, together with a failure to increase neutrophil counts in the setting of infection or other challenge, requires investigation. Acute neutropenia, such as that caused by cancer chemotherapy, is more likely to be associated with increased risk of infection than neutropenia of long duration (months to years) that reverses in response to infection or carefully controlled administration of endotoxin (see "Laboratory Diagnosis and Management," below). Some causes of inherited and acquired neutropenia are listed in Table 80-1.
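The ANC thresholds above can be condensed into a small helper. This is an illustrative sketch of the numeric cutoffs stated in the text; the function name and informal risk labels are mine, and it is not a clinical decision tool:

```python
# Illustrative sketch of the ANC thresholds described in the text.
# ANC = band forms + mature neutrophils, in cells/uL.
def anc_risk(bands_per_ul: int, mature_per_ul: int) -> str:
    anc = bands_per_ul + mature_per_ul  # absolute neutrophil count
    if anc < 200:
        return "local inflammatory process absent"
    if anc < 500:
        return "control of endogenous flora impaired"
    if anc < 1000:
        return "sharply increased susceptibility to infection"
    return "no neutropenia-related risk threshold crossed"
```

For example, a count of 100 bands and 250 mature neutrophils per μL (ANC 350) falls in the range where control of endogenous flora is impaired.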
The most common neutropenias are iatrogenic, resulting from the use of cytotoxic or immunosuppressive therapies for malignancy or control of autoimmune disorders. These drugs cause neutropenia because they result in decreased production of rapidly growing progenitor (stem) cells of the marrow. Certain antibiotics such as chloramphenicol, trimethoprim-sulfamethoxazole, flucytosine, vidarabine, and the antiretroviral drug zidovudine may cause neutropenia by inhibiting proliferation of myeloid precursors. Azathioprine and 6-mercaptopurine are metabolized by the enzyme thiopurine methyltransferase (TPMT); hypofunctional polymorphisms of TPMT, found in 11% of whites, can lead to accumulation of 6-thioguanine and profound marrow toxicity. The marrow suppression is generally dose-related and dependent on continued administration of the drug. Cessation of the offending agent and recombinant human G-CSF usually reverse these forms of neutropenia. Another important mechanism for iatrogenic neutropenia is the effect of drugs that serve as immune haptens and sensitize neutrophils or neutrophil precursors to immune-mediated peripheral destruction.
This form of drug-induced neutropenia can be seen within 7 days of exposure to the drug; with previous drug exposure, resulting in preexisting antibodies, neutropenia may occur a few hours after administration of the drug. Although any drug can cause this form of neutropenia, the most frequent causes are commonly used antibiotics, such as sulfa-containing compounds, penicillins, and cephalosporins. Fever and eosinophilia may also be associated with drug reactions, but often these signs are not present. Drug-induced neutropenia can be severe, but discontinuation of the sensitizing drug is sufficient for recovery, which is usually seen within 5–7 days and is complete by 10 days. Readministration of the sensitizing drug should be avoided, because abrupt neutropenia will often result. For this reason, diagnostic challenge should be avoided.

TABLE 80-1 Causes of Neutropenia
- Drug-induced—alkylating agents (nitrogen mustard, busulfan, chlorambucil, cyclophosphamide); antimetabolites (methotrexate, 6-mercaptopurine, 5-flucytosine); noncytotoxic agents (antibiotics [chloramphenicol, penicillins, sulfonamides], phenothiazines, tranquilizers [meprobamate], anticonvulsants [carbamazepine], antipsychotics [clozapine], certain diuretics, anti-inflammatory agents, antithyroid drugs, many others)
- Hematologic diseases—idiopathic, cyclic neutropenia, Chédiak-Higashi syndrome, aplastic anemia, infantile genetic disorders (see text)
- Tumor invasion, myelofibrosis
- Nutritional deficiency—vitamin B12, folate (especially alcoholics)
- Infection—tuberculosis, typhoid fever, brucellosis, tularemia, measles, infectious mononucleosis, malaria, viral hepatitis, leishmaniasis, AIDS
- Autoimmune disorders—Felty's syndrome, rheumatoid arthritis, lupus erythematosus
- Drugs as haptens—aminopyrine, α-methyldopa, phenylbutazone, mercurial diuretics, some phenothiazines
- Granulomatosis with polyangiitis (Wegener's)
Autoimmune neutropenias caused by circulating antineutrophil antibodies are another form of acquired neutropenia that results in increased destruction of neutrophils. Acquired neutropenia may also be seen with viral infections, including infection with HIV. Acquired neutropenia may be cyclic in nature, occurring at intervals of several weeks. Acquired cyclic or stable neutropenia may be associated with an expansion of large granular lymphocytes (LGLs), which may be T cells, NK cells, or NK-like cells. Patients with large granular lymphocytosis may have moderate blood and bone marrow lymphocytosis, neutropenia, polyclonal hypergammaglobulinemia, splenomegaly, rheumatoid arthritis, and absence of lymphadenopathy. Such patients may have a chronic and relatively stable course. Recurrent bacterial infections are frequent. Benign and malignant forms of this syndrome occur. In some patients, a spontaneous regression has occurred even after 11 years, suggesting an immunoregulatory defect as the basis for at least one form of the disorder. Glucocorticoids, cyclosporine, and methotrexate are commonly used to manage these cytopenias.

Hereditary Neutropenias

Hereditary neutropenias are rare and may manifest in early childhood as a profound constant neutropenia or agranulocytosis.
Congenital forms of neutropenia include Kostmann's syndrome (neutrophil count <100/μL), which is often fatal and due to mutations in the antiapoptosis gene HAX-1; severe chronic neutropenia (neutrophil count of 300–1500/μL) due to mutations in neutrophil elastase (ELANE); hereditary cyclic neutropenia, or, more appropriately, cyclic hematopoiesis, also due to mutations in neutrophil elastase (ELANE); the cartilage-hair hypoplasia syndrome due to mutations in the mitochondrial RNA-processing endoribonuclease RMRP; Shwachman-Diamond syndrome associated with pancreatic insufficiency due to mutations in the Shwachman-Bodian-Diamond syndrome gene SBDS; the WHIM (warts, hypogammaglobulinemia, infections, myelokathexis [retention of WBCs in the marrow]) syndrome, characterized by neutrophil hypersegmentation and bone marrow myeloid arrest due to mutations in the chemokine receptor CXCR4; and neutropenias associated with other immune defects, such as X-linked agammaglobulinemia, Wiskott-Aldrich syndrome, and CD40 ligand deficiency. Mutations in the G-CSF receptor can develop in severe congenital neutropenia and are linked to leukemia. Absence of both myeloid and lymphoid cells is seen in reticular dysgenesis, due to mutations in the nuclear genome-encoded mitochondrial enzyme adenylate kinase-2 (AK2). Maternal factors can be associated with neutropenia in the newborn. Transplacental transfer of IgG directed against antigens on fetal neutrophils can result in peripheral destruction. Drugs (e.g., thiazides) ingested during pregnancy can cause neutropenia in the newborn by either depressed production or peripheral destruction. In Felty's syndrome—the triad of rheumatoid arthritis, splenomegaly, and neutropenia (Chap. 380)—spleen-produced antibodies can shorten neutrophil life span, while large granular lymphocytes can attack marrow neutrophil precursors.
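The syndrome–gene correspondences just listed lend themselves to a simple lookup. This is an illustrative quick-reference sketch; the dictionary structure and helper function are mine, and the entries simply restate the text:

```python
# Illustrative mapping of congenital neutropenias to the mutated genes
# named in the text (structure is mine, for quick reference only).
CONGENITAL_NEUTROPENIA_GENES = {
    "Kostmann's syndrome": "HAX-1",
    "severe chronic neutropenia": "ELANE",
    "cyclic neutropenia (cyclic hematopoiesis)": "ELANE",
    "cartilage-hair hypoplasia": "RMRP",
    "Shwachman-Diamond syndrome": "SBDS",
    "WHIM syndrome": "CXCR4",
    "reticular dysgenesis": "AK2",
}

def syndromes_with_gene(gene: str) -> list[str]:
    """Return the syndromes above caused by mutations in `gene`."""
    return sorted(s for s, g in CONGENITAL_NEUTROPENIA_GENES.items() if g == gene)
```

Querying `syndromes_with_gene("ELANE")` returns both elastase-associated syndromes, mirroring the point in the text that ELANE mutations underlie two distinct clinical pictures.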
Splenectomy may increase the neutrophil count in Felty's syndrome and lower serum neutrophil-binding IgG. Some Felty's syndrome patients also have neutropenia associated with an increased number of LGLs. Splenomegaly with peripheral trapping and destruction of neutrophils is also seen in lysosomal storage diseases and in portal hypertension.

Neutrophilia

Neutrophilia results from increased neutrophil production, increased marrow release, or defective margination (Table 80-2). The most important acute cause of neutrophilia is infection. Neutrophilia from acute infection represents both increased production and increased marrow release. Increased production is also associated with chronic inflammation and certain myeloproliferative diseases. Increased marrow release and mobilization of the marginated leukocyte pool are induced by glucocorticoids. Release of epinephrine, as with vigorous exercise, excitement, or stress, will demarginate neutrophils in the spleen and lungs and double the neutrophil count in minutes. Cigarette smoking can elevate neutrophil counts above the normal range. Leukocytosis with cell counts of 10,000–25,000/μL occurs in response to infection and other forms of acute inflammation and results from both release of the marginated pool and mobilization of marrow reserves.
Persistent neutrophilia with cell counts of ≥30,000–50,000/μL is called a leukemoid reaction, a term often used to distinguish this degree of neutrophilia from leukemia. In a leukemoid reaction, the circulating neutrophils are usually mature and not clonally derived.

TABLE 80-2 Causes of Neutrophilia
- Idiopathic
- Drug-induced—glucocorticoids, G-CSF
- Infection—bacterial, fungal, sometimes viral
- Inflammation—thermal injury, tissue necrosis, myocardial and pulmonary infarction, hypersensitivity states, collagen vascular diseases
- Myeloproliferative diseases—myelocytic leukemia, myeloid metaplasia, polycythemia vera
- Drugs—epinephrine, glucocorticoids, nonsteroidal anti-inflammatory agents
- Stress, excitement, vigorous exercise
- Leukocyte adhesion deficiency type 1 (CD18); leukocyte adhesion deficiency type 2 (selectin ligand, CD15s); leukocyte adhesion deficiency type 3 (FERMT3)
- Metabolic disorders—ketoacidosis, acute renal failure, eclampsia, acute poisoning
- Other—metastatic carcinoma, acute hemorrhage or hemolysis
Abbreviation: G-CSF, granulocyte colony-stimulating factor.

TABLE 80-3 Cause of Indicated Dysfunction

Adherence-aggregation
- Drugs: aspirin, colchicine, alcohol, glucocorticoids, ibuprofen, piroxicam
- Acquired: neonatal state, hemodialysis
- Inherited: leukocyte adhesion deficiency types 1, 2, and 3

Deformability
- Acquired: leukemia, neonatal state, diabetes mellitus, immature neutrophils

Chemokinesis-chemotaxis
- Drugs: glucocorticoids (high dose), auranofin, colchicine (weak effect), phenylbutazone, naproxen, indomethacin, interleukin 2
- Acquired: thermal injury, malignancy, malnutrition, periodontal disease, neonatal state, systemic lupus erythematosus, rheumatoid arthritis, diabetes mellitus, sepsis, influenza virus infection, herpes simplex virus infection, acrodermatitis enteropathica, AIDS
- Inherited: Chédiak-Higashi syndrome, neutrophil-specific granule deficiency, hyper IgE–recurrent infection (Job's) syndrome (in some patients), Down's syndrome, α-mannosidase deficiency, leukocyte adhesion deficiencies, Wiskott-Aldrich syndrome

Microbicidal activity
- Drugs: colchicine, cyclophosphamide, glucocorticoids (high dose), TNF-α-blocking antibodies
- Acquired: leukemia, aplastic anemia, certain neutropenias, tuftsin deficiency, thermal injury, sepsis, neonatal state, diabetes mellitus, malnutrition, AIDS
- Inherited: Chédiak-Higashi syndrome, neutrophil-specific granule deficiency, chronic granulomatous disease, defects in IFNγ/IL-12 axis

Abbreviations: IFNγ, interferon γ; IL, interleukin; TNF-α, tumor necrosis factor α.

Abnormal Neutrophil Function

Inherited and acquired abnormalities of phagocyte function are listed in Table 80-3. The resulting diseases are best considered in terms of the functional defects of adherence, chemotaxis, and microbicidal activity. The distinguishing features of the important inherited disorders of phagocyte function are shown in Table 80-4.

DISORDERS OF ADHESION

Three main types of leukocyte adhesion deficiency (LAD) have been described. All are autosomal recessive and result in the inability of neutrophils to exit the circulation to sites of infection, leading to leukocytosis and increased susceptibility to infection (Fig. 80-8). Patients with LAD 1 have mutations in CD18, the common component of the integrins LFA-1, Mac-1, and p150,95, leading to a defect in tight adhesion between neutrophils and the endothelium. The heterodimer formed by CD18/CD11b (Mac-1) is also the receptor for the complement-derived opsonin C3bi (CR3). The CD18 gene is located on distal chromosome 21q. The severity of the defect determines the severity of clinical disease. Complete lack of expression of the leukocyte integrins results in a severe phenotype in which inflammatory stimuli do not increase the expression of leukocyte integrins on neutrophils or activated T and B cells. Neutrophils (and monocytes) from patients with LAD 1 adhere poorly to endothelial cells and protein-coated surfaces and exhibit defective spreading, aggregation, and chemotaxis.
Patients with LAD 1 have recurrent bacterial infections involving the skin, oral and genital mucosa, and respiratory and intestinal tracts; persistent leukocytosis (resting neutrophil counts of 15,000–20,000/μL) because cells do not marginate; and, in severe cases, a history of delayed separation of the umbilical stump. Infections, especially of the skin, may become necrotic with progressively enlarging borders, slow healing, and development of dysplastic scars. The most common bacteria are Staphylococcus aureus and enteric gram-negative bacteria. LAD 2 is caused by an abnormality of fucosylation of SLex (CD15s), the ligand on neutrophils that interacts with selectins on endothelial cells and is responsible for neutrophil rolling along the endothelium. Infection susceptibility in LAD 2 appears to be less severe than in LAD 1. LAD 2 is also known as congenital disorder of glycosylation IIc (CDGIIc) due to mutation in a GDP-fucose transporter (SLC35C1). LAD 3 is characterized by infection susceptibility, leukocytosis, and petechial hemorrhage due to impaired integrin activation caused by mutations in the gene FERMT3.

DISORDERS OF NEUTROPHIL GRANULES

The most common neutrophil defect is myeloperoxidase deficiency, a primary granule defect inherited as an autosomal recessive trait; the incidence is ~1 in 2000 persons. Isolated myeloperoxidase deficiency is not associated with clinically compromised defenses, presumably because other defense systems such as hydrogen peroxide generation are amplified. Microbicidal activity of neutrophils is delayed but not absent. Myeloperoxidase deficiency may make other acquired host defense defects more serious, and patients with myeloperoxidase deficiency and diabetes are more susceptible to Candida infections. An acquired form of myeloperoxidase deficiency occurs in myelomonocytic leukemia and acute myeloid leukemia.
Chédiak-Higashi syndrome (CHS) is a rare disease with autosomal recessive inheritance due to defects in the lysosomal transport protein LYST, encoded by the gene CHS1 at 1q42. This protein is required for normal packaging and disbursement of granules. Neutrophils (and all cells containing lysosomes) from patients with CHS characteristically have large granules (Fig. 80-9), making it a systemic disease. Patients with CHS have nystagmus, partial oculocutaneous albinism, and an increased number of infections resulting from many bacterial agents. Some CHS patients develop an "accelerated phase" in childhood with a hemophagocytic syndrome and an aggressive lymphoma requiring bone marrow transplantation. CHS neutrophils and monocytes have impaired chemotaxis and abnormal rates of microbial killing due to slow rates of fusion of the lysosomal granules with phagosomes. NK cell function is also impaired. CHS patients may develop a severe disabling peripheral neuropathy in adulthood that can lead to bed confinement. Specific granule deficiency is a rare autosomal recessive disease in which the production of secondary granules and their contents, as well as the primary granule component defensins, is defective. The defect in killing leads to severe bacterial infections. One type of specific granule deficiency is due to a mutation in the CCAAT/enhancer binding protein-ε (C/EBPε), a regulator of expression of granule components. A dominant mutation in C/EBPε has also been described.

CHRONIC GRANULOMATOUS DISEASE

Chronic granulomatous disease (CGD) is a group of disorders of granulocyte and monocyte oxidative metabolism. Although CGD is rare, with an incidence of ~1 in 200,000 individuals, it is an important model of defective neutrophil oxidative metabolism. In about two-thirds of patients, CGD is inherited as an X-linked recessive trait; 30% of patients inherit the disease in an autosomal recessive pattern.
Mutations in the genes for the five proteins that assemble at the plasma membrane account for all patients with CGD. Two proteins (a 91-kDa protein, abnormal in X-linked CGD, and a 22-kDa protein, absent in one form of autosomal recessive CGD) form the heterodimer cytochrome b-558 in the plasma membrane. Three other proteins (40, 47, and 67 kDa, abnormal in the other autosomal recessive forms of CGD) are cytoplasmic in origin and interact with the cytochrome after cell activation to form NADPH oxidase, required for hydrogen peroxide production. Leukocytes from patients with CGD have severely diminished hydrogen peroxide production. The genes involved in each of the defects have been cloned and sequenced and the chromosome locations identified. Patients with CGD characteristically have increased numbers of infections due to catalase-positive microorganisms (organisms that destroy their own hydrogen peroxide) such as S. aureus, Burkholderia cepacia, and Aspergillus species. When patients with CGD become infected, they often have extensive inflammatory reactions, and lymph node suppuration is common despite the administration of appropriate antibiotics. Aphthous ulcers and chronic inflammation of the nares are often present. Granulomas are frequent and can obstruct the gastrointestinal or genitourinary tracts. The excessive inflammation is due to failure to downregulate inflammation, reflecting failure to inhibit the synthesis of, degradation of, or response to chemoattractants or residual antigens, leading to persistent neutrophil accumulation. Impaired killing of intracellular microorganisms by macrophages may lead to persistent cell-mediated immune activation and granuloma formation. Autoimmune complications such as immune thrombocytopenic purpura and juvenile rheumatoid arthritis are also increased in CGD. In addition, for unexplained reasons, discoid lupus is more common in X-linked carriers. Late complications, including nodular regenerative hyperplasia and portal hypertension, are increasingly recognized in long-term survivors of severe CGD.

TABLE 80-4 Distinguishing Features of Inherited Disorders of Phagocyte Function

Chronic granulomatous diseases (70% X-linked, 30% autosomal recessive)
- Clinical manifestations: severe infections of skin, ears, lungs, liver, and bone with catalase-positive microorganisms such as Staphylococcus aureus, Burkholderia cepacia complex, Aspergillus spp., Chromobacterium violaceum; often hard to culture organism; excessive inflammation with granulomas, frequent lymph node suppuration; granulomas can obstruct GI or GU tracts; gingivitis, aphthous ulcers, seborrheic dermatitis
- Cellular defect: no respiratory burst due to the lack of one of five NADPH oxidase subunits in neutrophils, monocytes, and eosinophils
- Diagnosis: DHR or NBT test; no superoxide and H2O2 production by neutrophils; immunoblot for NADPH oxidase components; genetic detection

Specific granule deficiency
- Clinical manifestations: recurrent infections of skin, ears, and sinopulmonary tract; delayed wound healing; decreased inflammation; bleeding diathesis
- Cellular defect: abnormal chemotaxis, impaired respiratory burst and bacterial killing, failure to upregulate chemotactic and adhesion receptors with stimulation, defect in transcription of granule proteins; defect in C/EBPε
- Diagnosis: lack of secondary (specific) granules in neutrophils (Wright's stain), no neutrophil-specific granule contents (i.e., lactoferrin), no defensins, platelet α granule abnormality; genetic detection

Myeloperoxidase deficiency
- Clinical manifestations: clinically normal except in patients with underlying disease such as diabetes mellitus; then candidiasis or other fungal infections
- Cellular defect: no myeloperoxidase due to pre- and posttranslational defects
- Diagnosis: no peroxidase in neutrophils; genetic detection

Leukocyte adhesion deficiency
- Type 1: delayed separation of umbilical cord, sustained neutrophilia, recurrent infections of skin and mucosa, gingivitis, periodontal disease. Cellular defect: impaired phagocyte adherence, aggregation, spreading, chemotaxis, phagocytosis of C3bi-coated particles; defective production of CD18 subunit common to leukocyte integrins. Diagnosis: reduced phagocyte surface expression of the CD18-containing integrins with monoclonal antibodies against LFA-1 (CD18/CD11a), Mac-1 or CR3 (CD18/CD11b), and p150,95 (CD18/CD11c); genetic detection
- Type 2: mental retardation, short stature, Bombay (hh) blood phenotype, recurrent infections, neutrophilia. Cellular defect: impaired phagocyte rolling along endothelium due to defects in fucose transporter. Diagnosis: reduced phagocyte surface expression of sialyl-Lewisx with monoclonal antibodies against CD15s; genetic detection
- Type 3: petechial hemorrhage, recurrent infections. Cellular defect: impaired signaling for integrin activation resulting in impaired adhesion due to mutation in FERMT3

Abbreviations: C/EBPε, CCAAT/enhancer binding protein-ε; DHR, dihydrorhodamine (oxidation test); DOCK8, dedicator of cytokinesis 8; GI, gastrointestinal; GU, genitourinary; HPV, human papilloma virus; HSV, herpes simplex virus; IFN, interferon; IL, interleukin; IRAK4, IL-1 receptor–associated kinase 4; LFA-1, leukocyte function–associated antigen 1; MyD88, myeloid differentiation primary response gene 88; NADPH, nicotinamide–adenine dinucleotide phosphate; NBT, nitroblue tetrazolium (dye test); NEMO, NF-κB essential modulator; NF-κB, nuclear factor-κB; NK, natural killer; STAT1–3, signal transducer and activator of transcription 1–3; TLR, Toll-like receptor; TNF, tumor necrosis factor.

FIGURE 80-9 Chédiak-Higashi syndrome. The granulocytes contain huge cytoplasmic granules formed from aggregation and fusion of azurophilic and specific granules. Large abnormal granules are found in other granule-containing cells throughout the body.

DISORDERS OF PHAGOCYTE ACTIVATION

Phagocytes depend on cell-surface stimulation to induce signals that evoke multiple levels of the inflammatory response, including cytokine synthesis, chemotaxis, and antigen presentation. Mutations affecting the major pathway that signals through NF-κB have been noted in patients with a variety of infection susceptibility syndromes.
If the defects are at a very late stage of signal transduction, in the protein critical for NF-κB activation known as the NF-κB essential modulator (NEMO), then affected males develop ectodermal dysplasia and severe immune deficiency with susceptibility to bacteria, fungi, mycobacteria, and viruses. If the defects in NF-κB activation are closer to the cell-surface receptors, in the proteins transducing Toll-like receptor signals, IL-1 receptor–associated kinase 4 (IRAK4), and myeloid differentiation primary response gene 88 (MyD88), then children have a marked susceptibility to pyogenic infections early in life but develop resistance to infection later.

The mononuclear phagocyte system is composed of monoblasts, promonocytes, and monocytes, in addition to the structurally diverse tissue macrophages that make up what was previously referred to as the reticuloendothelial system. Macrophages are long-lived phagocytic cells capable of many of the functions of neutrophils. They are also secretory cells that participate in many immunologic and inflammatory processes distinct from neutrophils. Monocytes leave the circulation by diapedesis more slowly than neutrophils and have a half-life in the blood of 12–24 h. After blood monocytes arrive in the tissues, they differentiate into macrophages ("big eaters") with specialized functions suited for specific anatomic locations. Macrophages are particularly abundant in capillary walls of the lung, spleen, liver, and bone marrow, where they function to remove microorganisms and other noxious elements from the blood. Alveolar macrophages, liver Kupffer cells, splenic macrophages, peritoneal macrophages, bone marrow macrophages, lymphatic macrophages, brain microglial cells, and dendritic macrophages all have specialized functions.
Macrophage-secreted products include lysozyme, neutral proteases, acid hydrolases, arginase, complement components, enzyme inhibitors (plasmin, α2-macroglobulin), binding proteins (transferrin, fibronectin, transcobalamin II), nucleosides, and cytokines (TNF-α; IL-1, -8, -12, -18). IL-1 (Chaps. 23 and 372e) has many functions, including initiating fever in the hypothalamus, mobilizing leukocytes from the bone marrow, and activating lymphocytes and neutrophils. TNF-α is a pyrogen that duplicates many of the actions of IL-1 and plays an important role in the pathogenesis of gram-negative shock (Chap. 325). TNF-α stimulates production of hydrogen peroxide and related toxic oxygen species by macrophages and neutrophils. In addition, TNF-α induces catabolic changes that contribute to the profound wasting (cachexia) associated with many chronic diseases. Other macrophage-secreted products include reactive oxygen and nitrogen metabolites, bioactive lipids (arachidonic acid metabolites and platelet-activating factors), chemokines, CSFs, and factors stimulating fibroblast and vessel proliferation. Macrophages help regulate the replication of lymphocytes and participate in the killing of tumors, viruses, and certain bacteria (Mycobacterium tuberculosis and Listeria monocytogenes). Macrophages are key effector cells in the elimination of intracellular microorganisms. Their ability to fuse to form giant cells that coalesce into granulomas in response to some inflammatory stimuli is important in the elimination of intracellular microbes and is under the control of IFN-γ. Nitric oxide induced by IFN-γ is an important effector against intracellular parasites, including tuberculosis and Leishmania. Macrophages play an important role in the immune response (Chap. 372e). They process and present antigen to lymphocytes and secrete cytokines that modulate and direct lymphocyte development and function. 
Macrophages participate in autoimmune phenomena by removing immune complexes and other substances from the circulation. Polymorphisms in macrophage receptors for immunoglobulin (FcγRII) determine susceptibility to some infections and autoimmune diseases. In wound healing, they dispose of senescent cells, and they contribute to atheroma development. Macrophage elastase mediates development of emphysema from cigarette smoking. Many disorders of neutrophils extend to mononuclear phagocytes. Monocytosis is associated with tuberculosis, brucellosis, subacute bacterial endocarditis, Rocky Mountain spotted fever, malaria, and visceral leishmaniasis (kala azar). Monocytosis also occurs with malignancies, leukemias, myeloproliferative syndromes, hemolytic anemias, chronic idiopathic neutropenias, and granulomatous diseases such as sarcoidosis, regional enteritis, and some collagen vascular diseases. Patients with LAD, hyperimmunoglobulin E–recurrent infection (Job’s) syndrome, CHS, and CGD all have defects in the mononuclear phagocyte system. Monocyte cytokine production or response is impaired in some patients with disseminated nontuberculous mycobacterial infection who are not infected with HIV. Genetic defects in the pathways regulated by IFN-γ and IL-12 lead to impaired killing of intracellular bacteria, mycobacteria, salmonellae, and certain viruses (Fig. 80-10). CHAPTER 80 Disorders of Granulocytes and Monocytes IL-2RCD14IL-1518?TLRLPSIL-12IL-2IFN˜TNF°IFN˜RTNF°R˛1˛2AFBSalm.T/NKM˝STAT1GATA2ISG1512NRAMP1NEMOIRF8 PART 2 Cardinal Manifestations and Presentation of Diseases FIguRE 80-10 Lymphocyte-macrophage interactions underlying resistance to mycobacteria and other intracellular pathogens such as Salmonella, Histoplasma, and Coccidioides. Mycobacteria (and others) infect macrophages, leading to the production of IL-12, which activates T or NK cells through its receptor, leading to production of IL-2 and IFN-γ. 
IFN-γ acts through its receptor on macrophages to upregulate TNF-α and IL-12 and kill intracellular pathogens. Other critical interacting molecules include signal transducer and activator of transcription 1 (STAT1), interferon regulatory factor 8 (IRF8), GATA2, and ISG15. Mutant forms of the cytokines and receptors shown in bold type have been found in severe cases of nontuberculous mycobacterial infection, salmonellosis, and infections with other intracellular pathogens. AFB, acid-fast bacilli; IFN, interferon; IL, interleukin; NEMO, nuclear factor-κB essential modulator; NK, natural killer; TLR, Toll-like receptor; TNF, tumor necrosis factor. Certain viral infections impair mononuclear phagocyte function. For example, influenza virus infection causes abnormal monocyte chemotaxis. Mononuclear phagocytes can be infected by HIV using CCR5, the chemokine receptor that acts as a co-receptor with CD4 for HIV. T lymphocytes produce IFN-γ, which induces FcR expression and phagocytosis and stimulates hydrogen peroxide production by mononuclear phagocytes and neutrophils. In certain diseases, such as AIDS, IFN-γ production may be deficient, whereas in other diseases, such as T cell lymphomas, excessive release of IFN-γ may be associated with erythrophagocytosis by splenic macrophages. Autoinflammatory diseases are characterized by abnormal cytokine regulation, leading to excess inflammation in the absence of infection. These diseases can mimic infectious or immunodeficient syndromes. Gain-of-function mutations in the TNF-α receptor cause TNF-α receptor–associated periodic syndrome (TRAPS), which is characterized by recurrent fever in the absence of infection, due to persistent stimulation of the TNF-α receptor (Chap. 392). Diseases with abnormal IL-1 regulation leading to fever include familial Mediterranean fever due to mutations in PYRIN. 
Mutations in cold-induced autoinflammatory syndrome 1 (CIAS1) lead to neonatal-onset multisystem autoinflammatory disease, familial cold urticaria, and Muckle-Wells syndrome. The syndrome of pyoderma gangrenosum, acne, and sterile pyogenic arthritis (PAPA syndrome) is caused by mutations in PSTPIP1. In contrast to these syndromes of overexpression of proinflammatory cytokines, blockade of TNF-α by the antagonists infliximab, adalimumab, certolizumab, golimumab, or etanercept has been associated with severe infections due to tuberculosis, nontuberculous mycobacteria, and fungi (Chap. 392). Monocytopenia occurs with acute infections, with stress, and after treatment with glucocorticoids. Drugs that suppress neutrophil production in the bone marrow can cause monocytopenia. Persistent severe circulating monocytopenia is seen in GATA2 deficiency, even though macrophages are found at the sites of inflammation. Monocytopenia also occurs in aplastic anemia, hairy cell leukemia, acute myeloid leukemia, and as a direct result of myelotoxic drugs. Eosinophils and neutrophils share similar morphology, many lysosomal constituents, phagocytic capacity, and oxidative metabolism. Eosinophils express a specific chemoattractant receptor and respond to a specific chemokine, eotaxin, but little is known about their required role. Eosinophils are much longer lived than neutrophils, and unlike neutrophils, tissue eosinophils can recirculate. During most infections, eosinophils appear unimportant. However, in invasive helminthic infections, such as hookworm, schistosomiasis, strongyloidiasis, toxocariasis, trichinosis, filariasis, echinococcosis, and cysticercosis, the eosinophil plays a central role in host defense. Eosinophils are associated with bronchial asthma, cutaneous allergic reactions, and other hypersensitivity states. 
The distinctive feature of the red-staining (Wright's stain) eosinophil granule is its crystalline core consisting of an arginine-rich protein (major basic protein) with histaminase activity, important in host defense against parasites. Eosinophil granules also contain a unique eosinophil peroxidase that catalyzes the oxidation of many substances by hydrogen peroxide and may facilitate killing of microorganisms. Eosinophil peroxidase, in the presence of hydrogen peroxide and halide, initiates mast cell secretion in vitro and thereby promotes inflammation. Eosinophils contain cationic proteins, some of which bind to heparin and reduce its anticoagulant activity. Eosinophil-derived neurotoxin and eosinophil cationic protein are ribonucleases that can kill respiratory syncytial virus. Eosinophil cytoplasm contains Charcot-Leyden crystal protein, a hexagonal bipyramidal crystal first observed in a patient with leukemia and then in sputum of patients with asthma; this protein is a lysophospholipase and may function to detoxify certain lysophospholipids. Several factors enhance the eosinophil's function in host defense. T cell–derived factors enhance the ability of eosinophils to kill parasites. Mast cell–derived eosinophil chemotactic factor of anaphylaxis (ECFa) increases the number of eosinophil complement receptors and enhances eosinophil killing of parasites. Eosinophil CSFs (e.g., IL-5) produced by macrophages increase eosinophil production in the bone marrow and activate eosinophils to kill parasites. Eosinophilia is the presence of >500 eosinophils per μL of blood and is common in many settings besides parasite infection. Significant tissue eosinophilia can occur without an elevated blood count. A common cause of eosinophilia is allergic reaction to drugs (iodides, aspirin, sulfonamides, nitrofurantoin, penicillins, and cephalosporins). Allergies such as hay fever, asthma, eczema, serum sickness, allergic vasculitis, and pemphigus are associated with eosinophilia. 
Eosinophilia also occurs in collagen vascular diseases (e.g., rheumatoid arthritis, eosinophilic fasciitis, allergic angiitis, and periarteritis nodosa) and malignancies (e.g., Hodgkin's disease; mycosis fungoides; chronic myeloid leukemia; and cancer of the lung, stomach, pancreas, ovary, or uterus), as well as in Job's syndrome, DOCK8 deficiency (see below), and CGD. Eosinophilia is commonly present in helminthic infections. Therapeutic administration of the cytokines IL-2 or GM-CSF frequently leads to transient eosinophilia. The most dramatic hypereosinophilic syndromes are Loeffler's syndrome, tropical pulmonary eosinophilia, Loeffler's endocarditis, eosinophilic leukemia, and idiopathic hypereosinophilic syndrome (50,000–100,000/μL). IL-5 is the dominant eosinophil growth factor and can be specifically inhibited with the monoclonal antibody mepolizumab. The idiopathic hypereosinophilic syndrome represents a heterogeneous group of disorders with the common feature of prolonged eosinophilia of unknown cause and organ system dysfunction, including the heart, central nervous system, kidneys, lungs, gastrointestinal tract, and skin. The bone marrow is involved in all affected individuals, but the most severe complications involve the heart and central nervous system. Clinical manifestations and organ dysfunction are highly variable. Eosinophils are found in the involved tissues and likely cause tissue damage by local deposition of toxic eosinophil proteins such as eosinophil cationic protein and major basic protein. In the heart, the pathologic changes lead to thrombosis, endocardial fibrosis, and restrictive endomyocardiopathy. The damage to tissues in other organ systems is similar. Some cases are due to mutations involving the platelet-derived growth factor receptor, and these are extremely sensitive to the tyrosine kinase inhibitor imatinib. 
Glucocorticoids, hydroxyurea, and IFN-α each have been used successfully, as have therapeutic antibodies against IL-5. Cardiovascular complications are managed aggressively. The eosinophilia-myalgia syndrome is a multisystem disease, with prominent cutaneous, hematologic, and visceral manifestations, that frequently evolves into a chronic course and can occasionally be fatal. The syndrome is characterized by eosinophilia (eosinophil count >1000/μL) and generalized disabling myalgias without other recognized causes. Eosinophilic fasciitis, pneumonitis, and myocarditis; neuropathy culminating in respiratory failure; and encephalopathy may occur. The disease is caused by ingesting contaminants in L-tryptophan–containing products. Eosinophils, lymphocytes, macrophages, and fibroblasts accumulate in the affected tissues, but their role in pathogenesis is unclear. Activation of eosinophils and fibroblasts and the deposition of eosinophil-derived toxic proteins in affected tissues may contribute. IL-5 and transforming growth factor β have been implicated as potential mediators. Treatment is withdrawal of products containing L-tryptophan and the administration of glucocorticoids. Most patients recover fully, remain stable, or show slow recovery, but the disease can be fatal in up to 5% of patients. Eosinophilic neoplasms are discussed in Chapter 135e. Eosinopenia occurs with stress, such as acute bacterial infection, and after treatment with glucocorticoids. The mechanism of eosinopenia of acute bacterial infection is unknown but is independent of endogenous glucocorticoids, because it occurs in animals after total adrenalectomy. There is no known adverse effect of eosinopenia. The hyperimmunoglobulin E–recurrent infection syndrome, or Job’s syndrome, is a rare multisystem disease in which the immune and somatic systems are affected, including neutrophils, monocytes, T cells, B cells, and osteoclasts. 
Autosomal dominant mutations in signal transducer and activator of transcription 3 (STAT3) lead to inhibition of normal STAT signaling with broad and profound effects. Patients have characteristic facies with broad nose, kyphoscoliosis, and eczema. The primary teeth erupt normally but do not deciduate, often requiring extraction. Patients develop recurrent sinopulmonary and cutaneous infections that tend to be much less inflamed than appropriate for the degree of infection and have been referred to as "cold abscesses." Characteristically, pneumonias cavitate, leading to pneumatoceles. Coronary artery aneurysms are common, as are cerebral demyelinated plaques that accumulate with age. Importantly, IL-17–producing T cells, which are thought to be responsible for protection against extracellular and mucosal infections, are profoundly reduced in Job's syndrome. Despite very high IgE levels, these patients do not have correspondingly elevated rates of allergic disease. An important syndrome with clinical overlap with STAT3 deficiency is due to autosomal recessive defects in dedicator of cytokinesis 8 (DOCK8). In DOCK8 deficiency, IgE elevation is accompanied by severe allergy, viral susceptibility, and increased rates of cancer. Initial studies of WBC and differential and often a bone marrow examination may be followed by assessment of bone marrow reserves (steroid challenge test), marginated circulating pool of cells (epinephrine challenge test), and marginating ability (endotoxin challenge test) (Fig. 80-7). In vivo assessment of inflammation is possible with a Rebuck skin window test or an in vivo skin blister assay, which measures the ability of leukocytes and inflammatory mediators to accumulate locally in the skin. In vitro tests of phagocyte aggregation, adherence, chemotaxis, phagocytosis, degranulation, and microbicidal activity (for S. aureus) may help pinpoint cellular or humoral lesions. 
Deficiencies of oxidative metabolism are detected with either the nitroblue tetrazolium (NBT) dye test or the dihydrorhodamine (DHR) oxidation test. These tests are based on the ability of products of oxidative metabolism to alter the oxidation states of reporter molecules so that they can be detected microscopically (NBT) or by flow cytometry (DHR). Qualitative studies of superoxide and hydrogen peroxide production may further define neutrophil oxidative function. Patients with leukopenias or leukocyte dysfunction often have delayed inflammatory responses. Therefore, clinical manifestations may be minimal despite overwhelming infection, and unusual infections must always be suspected. Early signs of infection demand prompt, aggressive culturing for microorganisms, use of antibiotics, and surgical drainage of abscesses. Prolonged courses of antibiotics are often required. In patients with CGD, prophylactic antibiotics (trimethoprim-sulfamethoxazole) and antifungals (itraconazole) markedly diminish the frequency of life-threatening infections. Glucocorticoids may relieve gastrointestinal or genitourinary tract obstruction by granulomas in patients with CGD. Although TNF-α-blocking agents may markedly relieve inflammatory bowel symptoms, extreme caution must be exercised in their use in CGD inflammatory bowel disease, because these agents profoundly increase these patients' already heightened susceptibility to infection. Recombinant human IFN-γ, which nonspecifically stimulates phagocytic cell function, reduces the frequency of infections in patients with CGD by 70% and reduces the severity of infection. This effect of IFN-γ in CGD is additive to the effect of prophylactic antibiotics. The recommended dose is 50 μg/m2 subcutaneously three times weekly. IFN-γ has also been used successfully in the treatment of leprosy, nontuberculous mycobacterial infection, and visceral leishmaniasis. 
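Because the IFN-γ regimen above is dosed per square meter of body surface area, the per-administration dose scales with patient size. The following sketch illustrates the arithmetic only; the Mosteller body-surface-area formula and the example height and weight are assumptions of this illustration, not part of the text, and nothing here is a substitute for actual prescribing information.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula: sqrt(ht * wt / 3600).
    This formula is an assumption of the example, not taken from the chapter."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def dose_per_administration(bsa_m2: float, dose_per_m2_ug: float = 50.0) -> float:
    """Micrograms per subcutaneous dose for a BSA-based regimen (e.g., 50 ug/m^2)."""
    return dose_per_m2_ug * bsa_m2

# Hypothetical adult, 170 cm and 70 kg: BSA is about 1.82 m^2,
# so one 50-ug/m^2 dose comes to roughly 91 ug.
bsa = bsa_mosteller(170.0, 70.0)
print(round(dose_per_administration(bsa), 1))
```

The same two functions cover any per-m² regimen; only the `dose_per_m2_ug` argument changes.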
Rigorous oral hygiene reduces but does not eliminate the discomfort of gingivitis, periodontal disease, and aphthous ulcers; chlorhexidine mouthwash and tooth brushing with a hydrogen peroxide–sodium bicarbonate paste help many patients. Oral antifungal agents (fluconazole, itraconazole, voriconazole, posaconazole) have reduced mucocutaneous candidiasis in patients with Job’s syndrome. Androgens, glucocorticoids, lithium, and immunosuppressive therapy have been used to restore myelopoiesis in patients with neutropenia due to impaired production. Recombinant G-CSF is useful in the management of certain forms of neutropenia due to depressed neutrophil production, including those related to cancer chemotherapy. Patients with chronic neutropenia with evidence of a good bone marrow reserve need not receive prophylactic antibiotics. Patients with chronic or cyclic neutrophil counts <500/μL may benefit from prophylactic antibiotics and G-CSF during periods of neutropenia. Oral trimethoprim-sulfamethoxazole (160/800 mg) twice daily can prevent infection. Increased numbers of fungal infections are not seen in patients with CGD on this regimen. Oral quinolones such as levofloxacin and ciprofloxacin are alternatives. In the setting of cytotoxic chemotherapy with severe, persistent lymphocyte dysfunction, trimethoprim-sulfamethoxazole prevents Pneumocystis jiroveci pneumonia. These patients, and patients with phagocytic cell dysfunction, should avoid heavy exposure to airborne soil, dust, or decaying matter (mulch, manure), which are often rich in Nocardia and the spores of Aspergillus and other fungi. Restriction of activities or social contact has no proven role in reducing risk of infection for phagocyte defects. Although aggressive medical care for many patients with phagocytic disorders can allow them to go for years without a life-threatening infection, there may still be delayed effects of prolonged antimicrobials and other inflammatory complications. 
Cure of most congenital phagocyte defects is possible by bone marrow transplantation, and rates of success are improving (Chap. 139e). The identification of specific gene defects in patients with LAD 1, CGD, and other immunodeficiencies has led to gene therapy trials in a number of genetic white cell disorders. 
Chapter 81e Atlas of Hematology and Analysis of Peripheral Blood Smears 
Dan L. Longo 
Some of the relevant findings in peripheral blood, enlarged lymph nodes, and bone marrow are illustrated in this chapter. Systematic histologic examination of the bone marrow and lymph nodes is beyond the scope of a general medicine textbook. However, every internist should know how to examine a peripheral blood smear. The examination of a peripheral blood smear is one of the most informative exercises a physician can perform. Although advances in automated technology have made the examination of a peripheral blood smear by a physician seem less important, the technology is not a completely satisfactory replacement for a blood smear interpretation by a trained medical professional who also knows the patient's clinical history, family history, social history, and physical findings. It is useful to ask the laboratory to generate a Wright's-stained peripheral blood smear and examine it. The best place to examine blood cell morphology is the feathered edge of the blood smear where red cells lie in a single layer, side by side, just barely touching one another but not overlapping. The author's approach is to look at the smallest cellular elements, the platelets, first and work his way up in size to red cells and then white cells. Using an oil immersion lens that magnifies the cells 100-fold, one counts the platelets in five to six fields, averages the number per field, and multiplies by 20,000 to get a rough estimate of the platelet count. The platelets are usually 1–2 μm in diameter and have a blue granulated appearance. 
There is usually 1 platelet for every 20 or so red cells. Of course, the automated counter is much more accurate, but gross disparities between the automated and manual counts should be assessed. Large platelets may be a sign of rapid platelet turnover, as young platelets are often larger than old ones; alternatively, certain rare inherited syndromes can produce large platelets. Platelet clumping visible on the smear can be associated with falsely low automated platelet counts. Similarly, neutrophil fragmentation can be a source of falsely elevated automated platelet counts. Next one examines the red blood cells. One can gauge their size by comparing the red cell to the nucleus of a small lymphocyte. Both are normally about 8 μm wide. Red cells that are smaller than the small lymphocyte nucleus may be microcytic; those larger than the small lymphocyte nucleus may be macrocytic. Macrocytic cells also tend to be more oval than spherical in shape and are sometimes called macroovalocytes. The automated mean corpuscular volume (MCV) can assist in making a classification. However, some patients may have both iron and vitamin B12 deficiency, which will produce an MCV in the normal range but wide variation in red cell size. When the red cells vary greatly in size, anisocytosis is said to be present. When the red cells vary greatly in shape, poikilocytosis is said to be present. The electronic cell counter provides an independent assessment of variability in red cell size. It measures the range of red cell volumes and reports the results as “red cell distribution width” (RDW). This value is calculated from the MCV; thus, cell width is not being measured but cell volume is. The term is derived from the curve displaying the frequency of cells at each volume, also called the distribution. The width of the red cell volume distribution curve is what determines the RDW. The RDW is calculated as follows: RDW = (standard deviation of MCV ÷ mean MCV) × 100. 
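The two calculations just described, the field-based platelet estimate and the RDW formula, can be sketched in a few lines. This is an illustration only; the field counts and red cell volumes below are made-up example data, and a real analyzer derives the RDW from the full red cell volume histogram rather than a handful of cells.

```python
import statistics

def estimate_platelet_count(platelets_per_field):
    """Rough platelet count per uL of blood: average number of platelets
    seen per oil-immersion field, multiplied by 20,000 (as described above)."""
    return statistics.mean(platelets_per_field) * 20_000

def rdw_percent(red_cell_volumes_fl):
    """RDW = (standard deviation of red cell volume / mean volume) x 100.
    Population standard deviation is used here; an analyzer works from
    the whole measured volume distribution."""
    mean_volume = statistics.mean(red_cell_volumes_fl)
    sd_volume = statistics.pstdev(red_cell_volumes_fl)
    return sd_volume / mean_volume * 100

fields = [9, 11, 10, 12, 8, 10]        # platelets counted in six fields
print(estimate_platelet_count(fields))  # average of 10 per field -> 200,000/uL

volumes = [88, 90, 92, 86, 94, 90]      # toy red cell volumes in femtoliters
print(round(rdw_percent(volumes), 1))   # narrow toy sample; a real normal RDW is 11-14%
```

A coefficient of variation, which is what the RDW is, rises whenever cell volumes spread out around the mean, which is why anisocytosis of any cause (iron deficiency, a dimorphic anemia) raises it.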
In the presence of morphologic anisocytosis, RDW (normally 11–14%) increases to 15–18%. The RDW is useful in at least two clinical settings. In patients with microcytic anemia, the differential diagnosis is generally between iron deficiency and thalassemia. In thalassemia, the small red cells are generally of uniform size with a normal small RDW. In iron deficiency, the size variability and the RDW are large. In addition, a large RDW can suggest a dimorphic anemia, as when chronic atrophic gastritis produces both vitamin B12 malabsorption (causing macrocytic anemia) and blood loss (causing iron deficiency); in such settings, the RDW is also large. An elevated RDW also has been reported as a risk factor for all-cause mortality in population-based studies (Patel KV et al: Arch Intern Med 169:515, 2009), a finding that is currently unexplained. After red cell size is assessed, one examines the hemoglobin content of the cells. They are either normal in color (normochromic) or pale in color (hypochromic). They are never "hyperchromic." If more than the normal amount of hemoglobin is made, the cells get larger—they do not become darker. In addition to hemoglobin content, the red cells are examined for inclusions. Red cell inclusions are the following: 
1. Basophilic stippling—diffuse fine or coarse blue dots in the red cell usually representing RNA residue—especially common in lead poisoning 
2. Howell-Jolly bodies—dense blue circular inclusions representing nuclear remnants; their presence implies defective splenic function 
3. Nuclei—red cells may be released or pushed out of the marrow prematurely before nuclear extrusion—often implies a myelophthisic process or a vigorous marrow response to anemia, usually hemolytic anemia 
4. Parasites—red cell parasites such as those of malaria and babesiosis (Chap. 250e) 
5. Polychromatophilia—the red cell cytoplasm has a bluish hue, reflecting the persistence of ribosomes still actively making hemoglobin in a young red cell 
Vital stains are necessary to see precipitated hemoglobin called Heinz bodies. Red cells can take on a variety of different shapes. All abnormally shaped red cells are poikilocytes. 
Small red cells without the central pallor are spherocytes; they can be seen in hereditary spherocytosis, hemolytic anemias of other causes, and clostridial sepsis. Dacrocytes are teardrop-shaped cells that can be seen in hemolytic anemias, severe iron deficiency, thalassemias, myelofibrosis, and myelodysplastic syndromes. Schistocytes are helmet-shaped cells that reflect microangiopathic hemolytic anemia or fragmentation on an artificial heart valve. Echinocytes are spiculated red cells with the spikes evenly spaced; they can represent an artifact of abnormal drying of the blood smear or reflect changes in stored blood. They also can be seen in renal failure and malnutrition and are often reversible. Acanthocytes are spiculated red cells with the spikes irregularly distributed. This process tends to be irreversible and reflects underlying renal disease, abetalipoproteinemia, or splenectomy. Elliptocytes are elliptical-shaped red cells that can reflect an inherited defect in the red cell membrane, but they also are seen in iron deficiency, myelodysplastic syndromes, megaloblastic anemia, and thalassemias. Stomatocytes are red cells in which the area of central pallor takes on the morphology of a slit instead of the usual round shape. Stomatocytes can indicate an inherited red cell membrane defect and also can be seen in alcoholism. Target cells have an area of central pallor that contains a dense center, or bull’s-eye. These cells are seen classically in thalassemia, but they are also present in iron deficiency, cholestatic liver disease, and some hemoglobinopathies. They also can be generated artifactually by improper slide making. One last feature of the red cells to assess before moving to the white blood cells is the distribution of the red cells on the smear. In most individuals, the cells lie side by side in a single layer. 
Some patients have red cell clumping (called agglutination) in which the red cells pile upon one another; it is seen in certain paraproteinemias and autoimmune hemolytic anemias. Another abnormal distribution involves red cells lying in single cell rows on top of one another like stacks of coins. This is called rouleaux formation and reflects abnormal serum protein levels. Finally, one examines the white blood cells. Three types of granulocytes are usually present: neutrophils, eosinophils, and basophils, in decreasing frequency. Neutrophils are generally the most abundant white cell. They are round, are 10–14 μm wide, and contain a lobulated nucleus with two to five lobes connected by a thin chromatin thread. Bands are immature neutrophils that have not completed nuclear condensation and have a U-shaped nucleus. Bands reflect a left shift in neutrophil maturation in an effort to make more cells more rapidly. Neutrophils can provide clues to a variety of conditions. Vacuolated neutrophils may be a sign of bacterial sepsis. The presence of 1- to 2-μm blue cytoplasmic inclusions, called Döhle bodies, can reflect infections, burns, or other inflammatory states. If the neutrophil granules are larger than normal and stain a darker blue, "toxic granulations" are said to be present, and they also suggest a systemic inflammation. The presence of neutrophils with more than five nuclear lobes suggests megaloblastic anemia. Large misshapen granules may reflect the inherited Chédiak-Higashi syndrome. Eosinophils are slightly larger than neutrophils, have bilobed nuclei, and contain large red granules. Diseases of eosinophils are associated with too many of them rather than any morphologic or qualitative change. They normally total less than one-thirtieth the number of neutrophils. Basophils are even rarer than eosinophils in the blood. 
They have large dark blue granules and may be increased as part of chronic myeloid leukemia. Lymphocytes can be present in several morphologic forms. Most common in healthy individuals are small lymphocytes with a small dark nucleus and scarce cytoplasm. In the presence of viral infections, more of the lymphocytes are larger, about the size of neutrophils, with abundant cytoplasm and a less condensed nuclear chromatin. These cells are called reactive lymphocytes. About 1% of lymphocytes are larger and contain blue granules in a light blue cytoplasm; they are called large granular lymphocytes. In chronic lymphoid leukemia, the small lymphocytes are increased in number, and many of them are ruptured in making the blood smear, leaving a smudge of nuclear material without a surrounding cytoplasm or cell membrane; they are called smudge cells and are rare in the absence of chronic lymphoid leukemia. Monocytes are the largest white blood cells, ranging from 15 to 22 μm in diameter. The nucleus can take on a variety of shapes but usually appears to be folded; the cytoplasm is gray. Abnormal cells may appear in the blood. Most often the abnormal cells originate from neoplasms of bone marrow–derived cells, including lymphoid cells, myeloid cells, and occasionally red cells. More rarely, other types of tumors can get access to the bloodstream, and rare epithelial malignant cells may be identified. The chances of seeing such abnormal cells are increased by examining blood smears made from buffy coats, the layer of cells that is visible on top of sedimenting red cells when blood is left in the test tube for an hour. Smears made from finger sticks may include rare endothelial cells. 
Figure 81e-1 Normal peripheral blood smear. Small lymphocyte in center of field. Note that the diameter of the red blood cell is similar to the diameter of the small lymphocyte nucleus. 
Figure 81e-3 Hypochromic microcytic anemia of iron deficiency. Small lymphocyte in field helps assess the red blood cell size. 
Figure 81e-2 Reticulocyte count preparation. This new methylene blue–stained blood smear shows large numbers of heavily stained reticulocytes (the cells containing the dark blue–staining RNA precipitates). 
Figure 81e-4 Iron deficiency anemia next to normal red blood cells. Microcytes (right panel) are smaller than normal red blood cells (cell diameter <7 μm) and may or may not be poorly hemoglobinized (hypochromic). 
Figure 81e-5 Polychromatophilia. Note large red cells with light purple coloring. 
Figure 81e-8 Spherocytosis. Note small hyperchromatic cells without the usual clear area in the center. 
Figure 81e-6 Macrocytosis. These cells are both larger than normal (mean corpuscular volume >100) and somewhat oval in shape. Some morphologists call these cells macroovalocytes. 
Figure 81e-9 Rouleaux formation. Small lymphocyte in center of field. These red cells align themselves in stacks and are related to increased serum protein levels. 
Figure 81e-7 Hypersegmented neutrophils. Hypersegmented neutrophils (multilobed polymorphonuclear leukocytes) are larger than normal neutrophils with five or more segmented nuclear lobes. They are commonly seen with folic acid or vitamin B12 deficiency. 
Figure 81e-10 Red cell agglutination. Small lymphocyte and segmented neutrophil in upper left center. Note irregular collections of aggregated red cells. 
Figure 81e-11 Fragmented red cells. Heart valve hemolysis. 
Figure 81e-14 Elliptocytosis. Small lymphocyte in center of field. Elliptical shape of red cells is related to weakened membrane structure, usually due to mutations in spectrin. 
Figure 81e-12 Sickle cells. Homozygous sickle cell disease. A nucleated red cell and neutrophil are also in the field. 
Figure 81e-15 Stomatocytosis. Red cells characterized by a wide transverse slit or stoma. This often is seen as an artifact in a dehydrated blood smear. These cells can be seen in hemolytic anemias and in conditions in which the red cell is overhydrated or dehydrated. 
Figure 81e-13 Target cells. Target cells are recognized by the bull's-eye appearance of the cell. Small numbers of target cells are seen with liver disease and thalassemia. Larger numbers are typical of hemoglobin C disease. 
Figure 81e-16 Acanthocytosis. Spiculated red cells are of two types: acanthocytes are contracted dense cells with irregular membrane projections that vary in length and width; echinocytes have small, uniform, and evenly spaced membrane projections. Acanthocytes are present in severe liver disease, in patients with abetalipoproteinemia, and in rare patients with McLeod blood group. Echinocytes are found in patients with severe uremia, in glycolytic red cell enzyme defects, and in microangiopathic hemolytic anemia. 
Figure 81e-17 Howell-Jolly bodies. Howell-Jolly bodies are tiny nuclear remnants that normally are removed by the spleen. They appear in the blood after splenectomy (defect in removal) and with maturation/dysplastic disorders (excess production). 
Figure 81e-20 Reticulin stain of marrow myelofibrosis. Silver stain of a myelofibrotic marrow showing an increase in reticulin fibers (black-staining threads). 
Figure 81e-18 Teardrop cells and nucleated red blood cells characteristic of myelofibrosis. A teardrop-shaped red blood cell (left panel) and a nucleated red blood cell (right panel) as typically seen with myelofibrosis and extramedullary hematopoiesis. 
Figure 81e-21 Stippled red cell in lead poisoning. Mild hypochromia. Coarsely stippled red cell. 
Figure 81e-19 Myelofibrosis of the bone marrow. 
Total replacement of marrow precursors and fat cells by a dense infiltrate of reticulin fibers and collagen (H&E stain). 
Figure 81e-22 Heinz bodies. Blood mixed with hypotonic solution of crystal violet. The stained material is precipitates of denatured hemoglobin within cells. 
Figure 81e-23 Giant platelets. Giant platelets, together with a marked increase in the platelet count, are seen in myeloproliferative disorders, especially primary thrombocythemia. 
Figure 81e-26 Normal eosinophils. The film was prepared from the buffy coat of the blood from a normal donor. E, eosinophil; L, lymphocyte; N, neutrophil. 
Figure 81e-27 Normal basophil. The film was prepared from the buffy coat of the blood from a normal donor. B, basophil; L, lymphocyte. 
Figure 81e-24 Normal granulocytes. The normal granulocyte has a segmented nucleus with heavy, clumped chromatin; fine neutrophilic granules are dispersed throughout the cytoplasm. 
Figure 81e-25 Normal monocytes. The film was prepared from the buffy coat of the blood from a normal donor. L, lymphocyte; M, monocyte; N, neutrophil. 
Figure 81e-28 Pelger-Huët anomaly. In this benign disorder, the majority of granulocytes are bilobed. The nucleus frequently has a spectacle-like, or "pince-nez," configuration. 
Figure 81e-29 Döhle body. Neutrophil band with Döhle body. The neutrophil with a sausage-shaped nucleus in the center of the field is a band form. Döhle bodies are discrete, blue-staining nongranular areas found in the periphery of the cytoplasm of the neutrophil in infections and other toxic states. They represent aggregates of rough endoplasmic reticulum. 
Figure 81e-32 Aplastic anemia bone marrow. Normal hematopoietic precursor cells are virtually absent, leaving behind fat cells, reticuloendothelial cells, and the underlying sinusoidal structure. 
Figure 81e-30 Chédiak-Higashi disease. 
Note giant granules in neutrophil. Figure 81e-33 Metastatic cancer in the bone marrow. Marrow biopsy specimen infiltrated with metastatic breast cancer and reactive fibrosis (H&E stain). Figure 81e-31 Normal bone marrow. Low-power view of normal adult marrow (hematoxylin and eosin [H&E] stain), showing a mix of fat cells (clear areas) and hematopoietic cells. The percentage of the space that consists of hematopoietic cells is referred to as marrow cellularity. In adults, normal marrow cellularity is 35–40%. If demands for increased marrow production occur, cellularity may increase to meet the demand. As people age, the marrow cellularity decreases and the marrow fat increases. Patients >70 years old may have a 20–30% marrow cellularity. Figure 81e-34 Lymphoma in the bone marrow. Nodular (follicular) lymphoma infiltrate in a marrow biopsy specimen. Note the character-istic paratrabecular location of the lymphoma cells. PART 2 Cardinal Manifestations and Presentation of Diseases Figure 81e-35 Erythroid hyperplasia of the marrow. Marrow aspirate specimen with a myeloid/erythroid ratio (M/E ratio) of 1:1–2, typical for a patient with a hemolytic anemia or one recovering from blood loss. Figure 81e-36 Myeloid hyperplasia of the marrow. Marrow aspi-rate specimen showing a myeloid/erythroid ratio of ≥3:1, suggesting either a loss of red blood cell precursors or an expansion of myeloid elements. Figure 81e-37 Megaloblastic erythropoiesis. High-power view of megaloblastic red blood cell precursors from a patient with a macrocytic anemia. Maturation is delayed, with late normoblasts showing a more immature-appearing nucleus with a lattice-like pattern with normal cytoplasmic maturation. Figure 81e-38 Prussian blue staining of marrow iron stores. Iron stores can be graded on a scale of 0 to 4+. A. A marrow with excess iron stores (>4+); B. normal stores (2–3+); C. minimal stores (1+); and D. absent iron stores (0). Figure 81e-39 Ringed sideroblast. 
An orthochromatic normoblast with a collar of blue granules (mitochondria encrusted with iron) sur-rounding the nucleus. Figure 81e-42 Acute erythroleukemia. Note giant dysmorphic erythroblasts; two are binucleate, and one is multinucleate. Figure 81e-40 Acute myeloid leukemia. Leukemic myeloblast with an Auer rod. Note two to four large, prominent nucleoli in each cell. Figure 81e-43 Acute lymphoblastic leukemia. CHAPTER 81e Atlas of Hematology and Analysis of Peripheral Blood Smears Figure 81e-41 Acute promyelocytic leukemia. Note prominent cytoplasmic granules in the leukemia cells. Figure 81e-44 Burkitt’s leukemia, acute lymphoblastic leukemia. Figure 81e-45 Chronic myeloid leukemia in the peripheral blood. Figure 81e-48 Adult T cell leukemia. Peripheral blood smear show-ing leukemia cells with typical “flower-shaped” nucleus. PART 2 Cardinal Manifestations and Presentation of Diseases Figure 81e-46 Chronic lymphoid leukemia in the peripheral blood. Figure 81e-49 Follicular lymphoma in a lymph node. The normal nodal architecture is effaced by nodular expansions of tumor cells. Nodules vary in size and contain predominantly small lymphocytes with cleaved nuclei along with variable numbers of larger cells with vesicular chromatin and prominent nucleoli. Figure 81e-47 Sézary’s syndrome. Lymphocytes with frequently convoluted nuclei (Sézary cells) in a patient with advanced mycosis fungoides. Figure 81e-50 Diffuse large B cell lymphoma in a lymph node. The neoplastic cells are heterogeneous but predominantly large cells with vesicular chromatin and prominent nucleoli. Figure 81e-51 Burkitt’s lymphoma in a lymph node. Burkitt’s lym-phoma with starry-sky appearance. The lighter areas are macrophages attempting to clear dead cells. Figure 81e-54 Lacunar cell; Reed-Sternberg cell variant in nodular sclerosing Hodgkin’s disease. High-power view of single mononuclear lacunar cell with retracted cytoplasm in a patient with nodular scleros-ing Hodgkin’s disease. 
CHAPTER 81e Atlas of Hematology and Analysis of Peripheral Blood Smears Figure 81e-52 Erythrophagocytosis accompanying aggressive lymphoma. The central macrophage is ingesting red cells, neutro-phils, and platelets. (Courtesy of Dr. Kiyomi Tsukimori, Kyushu University, Fukuoka, Japan.) Figure 81e-55 Normal plasma cell. Figure 81e-53 Hodgkin’s disease. A Reed-Sternberg cell is present near the center of the field; a large cell with a bilobed nucleus and prominent nucleoli giving an “owl’s eyes” appearance. The majority of the cells are normal lymphocytes, neutrophils, and eosinophils that form a pleiomorphic cellular infiltrate. Figure 81e-56 Multiple myeloma. PART 2 Cardinal Manifestations and Presentation of Diseases Figure 81e-57 Serum color in hemoglobinemia. The distinctive red coloration of plasma (hemoglobinemia) in a spun blood sample in a patient with intravascular hemolysis. AcknowledgmentFigures in this e-chapter were borrowed from Williams Hematology, 7th edition, M Lichtman et al (eds). New York, McGraw-Hill, 2005; Hematology in General Practice, 4th edition, RS Hillman, KA Ault, New York, McGraw-Hill, 2005. 425part 3: Genes, the Environment, and Disease of the genome (and epigenome) in various malignancies has led toprinciples of human Genetics fundamental new insights into cancer biology and reveals that the genomic profile of mutations is in many cases more important in J. Larry Jameson, Peter Kopp determining the appropriate chemotherapy than the organ in which The prevalence of genetic diseases, combined with their potential severity and chronic nature, imposes great human, social, and financial burdens on society. Human genetics refers to the study of individual genes, their role and function in disease, and their mode of inheritance. Genomics refers to an organism’s entire genetic information, the genome, and the function and interaction of DNA within the genome, as well as with environmental or nongenetic factors, such as a person’s lifestyle. 
With the characterization of the human genome, genomics complements traditional genetics in our efforts to elucidate the etiology and pathogenesis of disease and to improve therapeutic interventions and outcomes. Following impressive advances in genetics, genomics, and health care information technology, the consequences of this wealth of knowledge for the practice of medicine are profound and play an increasingly prominent role in the diagnosis, prevention, and treatment of disease (Chap. 84). Personalized medicine, the customization of medical decisions to an individual patient, relies heavily on genetic information. For example, a patient’s genetic characteristics (genotype) can be used to optimize drug therapy and predict efficacy, adverse events, and drug dosing of selected medications (pharmacogenetics) (Chap. 5). The mutational profile of a malignancy allows the selection of therapies that target mutated or overexpressed signaling molecules. Although still investigational, genomic risk prediction models for common diseases are beginning to emerge. Genetics has traditionally been viewed through the window of relatively rare single-gene diseases. These disorders account for ~10% of pediatric admissions and childhood mortality. Historically, genetics has focused predominantly on chromosomal and metabolic disorders, reflecting the long-standing availability of techniques to diagnose these conditions. For example, conditions such as trisomy 21 (Down’s syndrome) or monosomy X (Turner’s syndrome) can be diagnosed using cytogenetics (Chap. 83e). Likewise, many metabolic disorders (e.g., phenylketonuria, familial hypercholesterolemia) are diagnosed using biochemical analyses. The advances in DNA diagnostics have extended the field of genetics to include virtually all medical specialties and have led to the elucidation of the pathogenesis of numerous monogenic disorders. In addition, it is apparent that virtually every medical condition has a genetic component. 
As is often evident from a patient's family history, many common disorders such as hypertension, heart disease, asthma, diabetes mellitus, and mental illnesses are significantly influenced by the genetic background. These polygenic or multifactorial (complex) disorders involve the contributions of many different genes, as well as environmental factors that can modify disease risk (Chap. 84). Genome-wide association studies (GWAS) have elucidated numerous disease-associated loci and are providing novel insights into the allelic architecture of complex traits. These studies have been facilitated by the availability of comprehensive catalogues of human single-nucleotide polymorphism (SNP) haplotypes generated through the HapMap Project. The sequencing of whole genomes or exomes (the exons within the genome) is increasingly used in the clinical realm to characterize individuals with complex undiagnosed conditions or to define the mutational profile of advanced malignancies so that better targeted therapies can be selected. Cancer has a genetic basis because it results from acquired somatic mutations in genes controlling growth, apoptosis, and cellular differentiation (Chap. 101e). In addition, the development of many cancers is associated with a hereditary predisposition. Characterization of the genome (and epigenome) in various malignancies has led to fundamental new insights into cancer biology and reveals that the genomic profile of mutations is in many cases more important in determining the appropriate chemotherapy than the organ in which the tumor originates. Hence, comprehensive mutational profiling of malignancies has increasing impact on cancer taxonomy, the choice of targeted therapies, and improved outcomes. Genetic and genomic approaches have proven invaluable for the detection of infectious pathogens and are used clinically to identify agents that are difficult to culture, such as mycobacteria, viruses, and parasites, or to track infectious agents locally or globally. In many cases, molecular genetics has improved the feasibility and accuracy of diagnostic testing and is beginning to open new avenues for therapy, including gene and cellular therapy (Chaps. 90e and 91e).
Molecular genetics has also provided the opportunity to characterize the microbiome, and a new field has emerged that characterizes the population dynamics of the bacteria, viruses, and parasites that coexist with humans and other animals (Chap. 86e). Emerging data indicate that the microbiome has significant effects on normal physiology as well as various disease states. Molecular biology has significantly changed the treatment of human disease. Peptide hormones, growth factors, cytokines, and vaccines can now be produced in large amounts using recombinant DNA technology. Targeted modifications of these peptides provide the practitioner with improved therapeutic tools, as illustrated by genetically modified insulin analogues with more favorable kinetics. Lastly, there is reason to believe that a better understanding of the genetic basis of human disease will also have an increasing impact on disease prevention. The astounding rate at which new genetic information is being generated creates a major challenge for physicians, health care providers, and basic investigators. Although many functional aspects of the genome remain unknown, there are many clinical situations where sufficient evidence exists for the use of genetic and genomic information to optimize patient care and treatment. Much genetic information resides in databases or is being published in basic science journals. Databases provide easy access to the expanding information about the human genome, genetic disease, and genetic testing (Table 82-1). For example, several thousand monogenic disorders are summarized in a large, continuously evolving compendium referred to as the Online Mendelian Inheritance in Man (OMIM) catalogue (Table 82-1). The ongoing refinement of bioinformatics is simplifying the analysis of, and access to, this daunting amount of new information.
THE HUMAN GENOME Structure of the Human Genome • Human Genome Project The Human Genome Project was initiated in the mid-1980s as an ambitious effort to characterize the entire human genome. Although the prospect of determining the complete sequence of the human genome seemed daunting several years ago, technical advances in DNA sequencing and bioinformatics led to the completion of a draft human sequence in 2000 and the completion of the DNA sequence for the last of the human chromosomes in May 2006. Currently, facilitated by rapidly decreasing costs for comprehensive sequence analyses and improvement of bioinformatics pipelines for data analysis, the sequencing of whole genomes and exomes is used with increasing frequency in the clinical setting. The scope of a whole genome sequence analysis can be illustrated by the following analogy. Human DNA consists of ~3 billion base pairs (bp) of DNA per haploid genome, which is nearly 1000-fold greater than that of the Escherichia coli genome. If the human DNA sequence were printed out, it would correspond to about 120 volumes of Harrison’s Principles of Internal Medicine. In addition to the human genome, the genomes of numerous organisms have been sequenced completely (~4000) or partially (~10,000) (Genomes Online Database [GOLD]; Table 82-1). 
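The arithmetic behind this analogy can be sketched in a few lines. The E. coli genome size (~4.6 Mb) and the characters-per-volume figure are illustrative assumptions, not values given in the text, so the computed fold difference lands in the several-hundred range rather than at exactly the "nearly 1000-fold" quoted above.

```python
# Rough scale comparison; both comparison constants are assumptions.
HUMAN_GENOME_BP = 3_000_000_000  # ~3 billion bp per haploid genome (from text)
ECOLI_GENOME_BP = 4_600_000      # ~4.6 Mb, a commonly cited approximation

fold_difference = HUMAN_GENOME_BP / ECOLI_GENOME_BP  # several hundred-fold

# If one printed volume held ~25 million characters (assumed), printing one
# base per character would take on the order of 120 volumes:
volumes_needed = HUMAN_GENOME_BP / 25_000_000
```

The point of the exercise is the order of magnitude, not the exact count: a whole-genome sequence is roughly a thousandfold larger than a bacterial genome and far too large to inspect by eye, which is why the bioinformatics pipelines mentioned above are indispensable.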
They include, among others, eukaryotes such as the mouse (Mus musculus), Saccharomyces cerevisiae, Caenorhabditis elegans, and Drosophila melanogaster; bacteria (e.g., E. coli); and Archaea, viruses, organelles (mitochondria, chloroplasts), and plants (e.g., Arabidopsis thaliana). Genomic information of infectious agents has significant impact for the characterization of infectious outbreaks and epidemics. Other ramifications arising from the availability of genomic data include, among others, (1) the comparison of entire genomes (comparative genomics), (2) the study of large-scale expression of RNAs (functional genomics) and proteins (proteomics) to detect differences between various tissues in health and disease, (3) the characterization of the variation among individuals by establishing catalogues of sequence variations and SNPs (HapMap Project), and (4) the identification of genes that play critical roles in the development of polygenic and multifactorial disorders.

TABLE 82-1 Selected Databases Relevant to Genetics and Genomics

National Center for Biotechnology Information (NCBI) — http://www.ncbi.nlm.nih.gov/ — Broad access to biomedical and genomic information, literature (PubMed), sequence databases, software for analyses of nucleotides and proteins; extensive links to other databases, genome resources, and tutorials

National Human Genome Research Institute — http://www.genome.gov/ — An institute of the National Institutes of Health focused on genomic and genetic research; links providing information about the human genome sequence, genomes of other organisms, and genomic research

Catalog of Published Genome-Wide Association Studies — http://www.genome.gov/GWAStudies/

Ensembl — http://www.ensembl.org — Maps and sequence information of eukaryotic genomes

Online Mendelian Inheritance in Man (OMIM) — http://www.ncbi.nlm.nih.gov/omim — Online compendium of Mendelian disorders and human genes causing genetic disorders

Office of Biotechnology Activities, National Institutes of Health — http://oba.od.nih.gov/oba — Information about recombinant DNA and gene transfer; medical, ethical, legal, and social issues raised by genetic testing and by xenotransplantation

American College of Medical Genetics and Genomics — http://www.acmg.net/ — Extensive links to other databases relevant for the diagnosis, treatment, and prevention of genetic disease

American Society of Human Genetics — http://www.ashg.org — Information about advances in genetic research, professional and public education, social and scientific policies

Cancer Genome Anatomy Project — http://cgap.nci.nih.gov/ — Information about gene expression profiles of normal, precancer, and cancer cells

GeneTests — http://www.genetests.org/ — International directory of genetic testing laboratories and prenatal diagnosis clinics; reviews and educational materials

Genomes Online Database (GOLD) — http://www.genomesonline.org/

HUGO Gene Nomenclature Committee — http://www.genenames.org/ — Gene names and symbols

MITOMAP, a human mitochondrial genome database — http://www.mitomap.org/ — A compendium of polymorphisms and mutations of the human mitochondrial DNA

HapMap Project — http://www.hapmap.org/ — Catalogue of haplotypes in different ethnic groups relevant for association studies and pharmacogenomics

ENCODE — http://www.genome.gov/10005107 — Encyclopedia of DNA Elements; catalogue of all functional elements in the human genome

Dolan DNA Learning Center, Cold Spring Harbor Laboratories — http://www.dnalc.org/ — Educational material about selected genetic disorders, DNA, eugenics, and genetic origin

The Online Metabolic and Molecular Bases of Inherited Disease (OMMBID) — http://www.ommbid.com/ — Online version of the comprehensive text on the metabolic and molecular bases of inherited disease

Online Mendelian Inheritance in Animals (OMIA) — http://omia.angis.org.au/ — Online compendium of Mendelian disorders in animals

The Jackson Laboratory — http://www.jax.org/ — Information about murine models and the mouse genome

Mouse Genome Informatics — http://www.informatics.jax.org — Mouse genome informatics

Note: Databases are evolving constantly. Pertinent information may be found by using links listed in the few selected databases.
Chromosomes The human genome is divided into 23 different chromosomes, including 22 autosomes (numbered 1–22) and the X and Y sex chromosomes (Fig. 82-1). Adult cells are diploid, meaning they contain two homologous sets of 22 autosomes and a pair of sex chromosomes. Females have two X chromosomes (XX), whereas males have one X and one Y chromosome (XY). As a consequence of meiosis, germ cells (sperm or oocytes) are haploid and contain one set of 22 autosomes and one of the sex chromosomes. At the time of fertilization, the diploid genome is reconstituted by pairing of the homologous chromosomes from the mother and father. With each cell division (mitosis), chromosomes are replicated, paired, segregated, and divided into two daughter cells.

Structure of DNA DNA is a double-stranded helix composed of four different bases: adenine (A), thymine (T), guanine (G), and cytosine (C). Adenine is paired to thymine, and guanine is paired to cytosine, by hydrogen bond interactions that span the double helix (Fig. 82-1). DNA has several remarkable features that make it ideal for the transmission of genetic information. It is relatively stable, and the double-stranded nature of DNA and its feature of strict base-pair complementarity permit faithful replication during cell division. Complementarity also allows the transmission of genetic information from DNA → RNA → protein (Fig. 82-2). mRNA is encoded by the so-called sense or coding strand of the DNA double helix and is translated into proteins by ribosomes. The presence of four different bases provides surprising genetic diversity. In the protein-coding regions of genes, the DNA bases are arranged into codons, triplets of bases that each specify a particular amino acid. It is possible to arrange the four bases into 64 different triplet codons (4^3). Each codon specifies 1 of the 20 different amino acids or a regulatory signal, such as initiation and stop of translation.
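As a rough illustration of the codon arithmetic just described, the 64 possible triplets can be enumerated directly. The leucine example is an addition for illustration (leucine is one of the amino acids specified by six different codons, written here as DNA coding-strand triplets); it is not taken from the text above.

```python
from itertools import product

# The four DNA bases arranged in triplets give 4**3 = 64 possible codons,
# more than the 20 amino acids they encode -- hence the code is degenerate.
BASES = "ACGT"
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
print(len(codons))  # 64

# One concrete instance of degeneracy (illustrative example, not from the
# text): leucine is specified by six different coding-strand triplets.
LEUCINE_CODONS = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}
```

Because 64 codons map onto 20 amino acids plus start/stop signals, most amino acids are necessarily encoded by more than one triplet, which is the degeneracy discussed in the text.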
Because there are more codons than amino acids, the genetic code is degenerate; that is, most amino acids can be specified by several different codons. By arranging the codons in different combinations and in various lengths, it is possible to generate the tremendous diversity of primary protein structure.

FIGURE 82-1 Structure of chromatin and chromosomes. Chromatin is composed of double-stranded DNA that is wrapped around histone and nonhistone proteins, forming nucleosomes. The nucleosomes are further organized into solenoid structures. Chromosomes assume their characteristic structure, with short (p) and long (q) arms, at the metaphase stage of the cell cycle.

DNA length is normally measured in units of 1000 bp (kilobases, kb) or 1,000,000 bp (megabases, Mb). Not all DNA encodes genes. In fact, genes account for only ~10–15% of DNA. Much of the remaining DNA consists of sequences, often of a highly repetitive nature, the function of which is poorly understood. These repetitive DNA regions, along with nonrepetitive sequences that do not encode genes, serve, in part, a structural role in the packaging of DNA into chromatin (i.e., DNA bound to histone proteins) and chromosomes, and exert regulatory functions (Fig. 82-1).

Genes A gene is a functional unit that is regulated by transcription (see below) and encodes an RNA product, which is most commonly, but not always, translated into a protein that exerts activity within or outside the cell (Fig. 82-3). Historically, genes were identified because they conferred specific traits that are transmitted from one generation to the next. Increasingly, they are characterized based on expression in various tissues (transcriptome). The size range of genes is quite broad; some genes are only a few hundred base pairs, whereas others are extraordinarily large (2 Mb).
The number of genes greatly underestimates the complexity of genetic expression, because single genes can generate multiple spliced messenger RNA (mRNA) products (isoforms), which are translated into proteins that are subject to complex posttranslational modification, such as phosphorylation. Exons refer to the portions of genes that are eventually spliced together to form mRNA. Introns refer to the spacing regions between the exons that are spliced out of precursor RNAs during RNA processing. The gene locus also includes regions that are necessary to control its expression (Fig. 82-2). Current estimates predict 20,687 protein-coding genes in the human genome, with an average of about four different coding transcripts per gene. Remarkably, the exome constitutes only 1.14% of the genome. In addition, thousands of noncoding transcripts (RNAs of various lengths, such as microRNAs and long noncoding RNAs), which function, at least in part, as transcriptional and posttranscriptional regulators of gene expression, have been identified. Aberrant expression of microRNAs has been found to play a pathogenic role in numerous diseases.

Single-Nucleotide Polymorphisms An SNP is a variation of a single base pair in the DNA. The identification of the ~10 million SNPs estimated to occur in the human genome has generated a catalogue of common genetic variants that occur in human beings from distinct ethnic backgrounds (Fig. 82-3). SNPs are the most common type of sequence variation and account for ~90% of all sequence variation. They occur on average every 100 to 300 bases and are the major source of genetic heterogeneity. Remarkably, however, the primary DNA sequence of humans has ~99.9% similarity compared to that of any other human. SNPs that are in close proximity are inherited together (i.e., they are linked) and are referred to as haplotypes (Fig. 82-4).
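A back-of-the-envelope check of the figures quoted above, using only the approximate values given in the text, can be sketched in a few lines:

```python
# Approximate values quoted in the text; all results are order-of-magnitude.
GENOME_BP = 3_000_000_000   # ~3 billion bp per haploid genome
EXOME_FRACTION = 0.0114     # the exome is ~1.14% of the genome
N_SNPS = 10_000_000         # ~10 million SNPs estimated genome-wide

# Coding sequence: ~34 Mb out of 3,000 Mb
exome_bp = GENOME_BP * EXOME_FRACTION

# Average spacing between SNPs: ~300 bp, consistent with the
# "every 100 to 300 bases" figure in the text
avg_snp_spacing = GENOME_BP / N_SNPS

print(round(exome_bp), round(avg_snp_spacing))
```

The ~300-bp average spacing sits at the upper end of the quoted 100–300-base range because the 10-million-SNP figure is itself an estimate.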
The HapMap describes the nature and location of these SNP haplotypes and how they are distributed among individuals within and among populations. This haplotype map information is greatly facilitating GWAS designed to elucidate the complex interactions among multiple genes and lifestyle factors in multifactorial disorders (see below). Moreover, haplotype analyses are useful for assessing variations in responses to medications (pharmacogenomics) and environmental factors, as well as for predicting disease predisposition.

Copy Number Variations Copy number variations (CNVs) are relatively large genomic regions (1 kb to several Mb) that have been duplicated or deleted on certain chromosomes (Fig. 82-5). It has been estimated that as many as 1500 CNVs, scattered throughout the genome, are present in an individual. When the genomes of two individuals are compared, approximately 0.4–0.8% of their genomes differ in terms of CNVs. Of note, de novo CNVs have been observed between monozygotic twins, who otherwise have identical genomes. Some CNVs have been associated with susceptibility or resistance to disease, and CNVs can be elevated in cancer cells.

Replication of DNA and Mitosis Genetic information in DNA is transmitted to daughter cells under two different circumstances: (1) somatic cells divide by mitosis, allowing the diploid (2n) genome to replicate itself completely in conjunction with cell division; and (2) germ cells (sperm and ova) undergo meiosis, a process that reduces the diploid (2n) set of chromosomes to the haploid state (1n). Prior to mitosis, cells exit the resting, or G0, state and enter the cell cycle (Chap. 101e). After traversing a critical checkpoint in G1, cells undergo DNA synthesis (S phase), during which the DNA in each chromosome is replicated, yielding two pairs of sister chromatids (2n → 4n).
The process of DNA synthesis requires stringent fidelity in order to avoid transmitting errors to subsequent generations of cells. Disorders caused by genetic abnormalities of DNA repair and mismatch repair include xeroderma pigmentosum, Bloom's syndrome, ataxia telangiectasia, and hereditary nonpolyposis colon cancer (HNPCC), among others. Many of these disorders strongly predispose to neoplasia because of the rapid acquisition of additional mutations (Chap. 101e). After completion of DNA synthesis, cells enter G2 and progress through a second checkpoint before entering mitosis. At this stage, the chromosomes condense and are aligned along the equatorial plate at metaphase. The two identical sister chromatids, held together at the centromere, divide and migrate to opposite poles of the cell. After formation of a nuclear membrane around the two separated sets of chromatids, the cell divides and two daughter cells are formed, thus restoring the diploid (2n) state.

Assortment and Segregation of Genes During Meiosis Meiosis occurs only in germ cells of the gonads. It shares certain features with mitosis but involves two distinct steps of cell division that reduce the chromosome number to the haploid state. In addition, there is active recombination that generates genetic diversity. During the first cell division, two sister chromatids (2n → 4n) are formed for each chromosome pair, and there is an exchange of DNA between homologous paternal and maternal chromosomes. This process involves the formation of chiasmata, structures that correspond to the DNA segments that cross over between the maternal and paternal homologues (Fig. 82-6). Usually there is at least one crossover on each chromosomal arm; recombination occurs more frequently in female meiosis than in male meiosis. Subsequently, the chromosomes segregate randomly. Because there are 23 chromosomes, there exist 2^23 (>8 million) possible combinations of chromosomes. Together with the genetic exchanges that occur during recombination, chromosomal segregation generates tremendous diversity, and each gamete is genetically unique. The process of recombination and the independent segregation of chromosomes provide the foundation for performing linkage analyses, whereby one attempts to correlate the inheritance of certain chromosomal regions (or linked genes) with the presence of a disease or genetic trait (see below). After the first meiotic division, which results in two daughter cells (2n), the two chromatids of each chromosome separate during a second meiotic division to yield four gametes with a haploid state (1n). When the egg is fertilized by sperm, the two haploid sets are combined, thereby restoring the diploid state (2n) in the zygote.

FIGURE 82-2 Flow of genetic information. Multiple extracellular signals activate intracellular signal cascades that result in altered regulation of gene expression through the interaction of transcription factors with regulatory regions of genes. RNA polymerase transcribes DNA into RNA that is processed to mRNA by excision of intronic sequences. The mRNA is translated into a polypeptide chain to form the mature protein after undergoing posttranslational processing. CBP, CREB-binding protein; CoA, co-activator; COOH, carboxyterminus; CRE, cyclic AMP responsive element; CREB, cyclic AMP response element–binding protein; GTF, general transcription factors; HAT, histone acetyl transferase; NH2, aminoterminus; RE, response element; TAF, TBP-associated factors; TATA, TATA box; TBP, TATA-binding protein.

REGULATION OF GENE EXPRESSION Regulation by Transcription Factors The expression of genes is regulated by DNA-binding proteins that activate or repress transcription.
The number of DNA sequences and transcription factors that regulate transcription is much greater than originally anticipated. Most genes contain at least 15–20 discrete regulatory elements within 300 bp of the transcription start site. This densely packed promoter region often contains binding sites for ubiquitous transcription factors such as CAAT box/enhancer binding protein (C/EBP), cyclic AMP response element–binding (CREB) protein, selective promoter factor 1 (Sp-1), or activator protein 1 (AP-1). However, factors involved in cell-specific expression may also bind to these sequences. Key regulatory elements may also reside at a large distance from the proximal promoter. The globin and the immunoglobulin genes, for example, contain locus control regions that are several kilobases away from the structural sequences of the gene. Specific groups of transcription factors that bind to these promoter and enhancer sequences provide a combinatorial code for regulating transcription. In this manner, relatively ubiquitous factors interact with more restricted factors to allow each gene to be expressed and regulated in a unique manner that is dependent on developmental state, cell type, and numerous extracellular stimuli. Regulatory factors also bind within the gene itself, particularly in the intronic regions. The transcription factors that bind to DNA actually represent only the first level of regulatory control. Other proteins—co-activators and co-repressors—interact with the DNA-binding transcription factors to generate large regulatory complexes. These complexes are subject to control by numerous cell-signaling pathways and enzymes, leading to phosphorylation, acetylation, sumoylation, and ubiquitination. Ultimately, the recruited transcription factors interact with, and stabilize, components of the basal transcription complex that assembles at the site of the TATA box and initiator region. This basal transcription factor complex consists of >30 different proteins. 
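As a toy illustration of how a promoter element such as the TATA box can be located in sequence data, the sketch below scans a sequence for a TATA-box-like motif. The consensus pattern TATA(A/T)A(A/T) is a commonly cited form, and the example promoter sequence is invented for illustration; neither is taken from the text.

```python
import re

# TATA-box-like consensus: TATA followed by A/T, A, A/T (an assumption
# for illustration; real TATA elements are more variable).
TATA_PATTERN = re.compile(r"TATA[AT]A[AT]")

def find_tata(promoter: str):
    """Return (start, matched sequence) of the first TATA-like element, or None."""
    m = TATA_PATTERN.search(promoter.upper())
    return (m.start(), m.group()) if m else None

# Invented example sequence: a CCAAT-like element, then a TATA box.
promoter = "GGCCAATCT" + "TATAAAA" + "GGGCGG"
```

Running `find_tata(promoter)` reports the motif's position relative to the start of the supplied sequence; real promoter analysis would additionally consider the element's distance from the transcription start site.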
Gene transcription occurs when RNA polymerase begins to synthesize RNA from the DNA template. A large number of identified genetic diseases involve transcription factors (Table 82-2). The field of functional genomics is based on the concept that understanding alterations of gene expression under various physiologic and pathologic conditions provides insight into the underlying functional role of the gene. By revealing specific gene expression profiles, this knowledge may be of diagnostic and therapeutic relevance. The large-scale study of expression profiles, which takes advantage of microarray and bead array technologies, is also referred to as transcriptomics because the complement of mRNAs transcribed by the cellular genome is called the transcriptome.

FIGURE 82-3 Chromosome 7 is shown with annotation tracks for known genes (1260) and SNPs (612,977). A region in 7q31.2 containing the CFTR gene is shown below. The CFTR gene contains 27 exons. More than 1900 mutations in this gene have been found in patients with cystic fibrosis. A 20-kb region encompassing exons 4–9 is shown further amplified to illustrate the SNPs in this region.

FIGURE 82-4 The origin of haplotypes is due to repeated recombination events occurring in multiple generations. Over time, this leads to distinct haplotypes. These haplotype blocks can often be characterized by genotyping selected tag single-nucleotide polymorphisms (SNPs), an approach that facilitates performing genome-wide association studies (GWAS).

Most studies of gene expression have focused on the regulatory DNA elements of genes that control transcription. However, it should be emphasized that gene expression requires a series of steps, including mRNA processing, protein translation, and posttranslational modifications, all of which are actively regulated (Fig. 82-2).
FIGURE 82-5 Copy number variations (CNV) encompass relatively large regions of the genome that have been duplicated or deleted. Chromosome 8 is shown with CNV detected by genomic hybridization. An increase in the signal strength indicates a duplication; a decrease reflects a deletion of the covered chromosomal regions. 
FIGURE 82-6 Crossing-over and genetic recombination. During chiasma formation, either of the two sister chromatids on one chromosome pairs with one of the chromatids of the homologous chromosome. Genetic recombination occurs through crossing-over and results in recombinant and nonrecombinant chromosome segments in the gametes. Together with the random segregation of the maternal and paternal chromosomes, recombination contributes to genetic diversity and forms the basis of the concept of linkage. 
Epigenetic Regulation of Gene Expression Epigenetics describes mechanisms and phenotypic changes that are not a result of variation in the primary DNA nucleotide sequence but are caused by secondary modifications of DNA or histones. These modifications include heritable changes such as X-inactivation and imprinting, but they can also result from dynamic posttranslational protein modifications in response to environmental influences such as diet, age, or drugs. The epigenetic modifications result in altered expression of individual genes or chromosomal loci encompassing multiple genes. The term epigenome describes the constellation of covalent modifications of DNA and histones that impact chromatin structure, as well as non-coding transcripts that modulate the transcriptional activity of DNA. Although the primary DNA sequence is usually identical in all cells of an organism, tissue-specific changes in the epigenome contribute to determining the transcriptional signature of a cell (transcriptome) and hence the protein expression profile (proteome). Mechanistically, DNA and histone modifications can result in the activation or silencing of gene expression (Fig. 82-7). DNA methylation involves the addition of a methyl group to cytosine residues. This is usually restricted to cytosines of CpG dinucleotides, which are abundant throughout the genome. Methylation of these dinucleotides is thought to represent a defense mechanism that minimizes the expression of sequences that have been incorporated into the genome, such as retroviral sequences. CpG dinucleotides also exist in so-called CpG islands, stretches of DNA characterized by a high CG content, which are found in the majority of human gene promoters. CpG islands in promoter regions are typically unmethylated, and the lack of methylation facilitates transcription. Histone methylation involves the addition of a methyl group to lysine residues in histone proteins (Fig. 82-7). Depending on the specific lysine residue being methylated, this alters chromatin configuration, making it either more open or more tightly packed. Acetylation of histone proteins is another well-characterized mechanism that results in an open chromatin configuration, which favors active transcription. Acetylation is generally more dynamic than methylation, and many transcriptional activation complexes have histone acetylase activity, whereas repressor complexes often contain deacetylases that remove acetyl groups from histones. Other histone modifications, whose effects are incompletely characterized, include phosphorylation and sumoylation. Lastly, noncoding RNAs that bind to DNA can have a significant impact on transcriptional activity. Physiologically, epigenetic mechanisms play an important role in several instances. For example, X-inactivation refers to the relative silencing of one of the two X chromosome copies present in females. The inactivation process is a form of dosage compensation such that females (XX) do not generally express twice as many X-chromosomal gene products as males (XY). 
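The CpG-island description above (CpG-dense, GC-rich stretches that are typically unmethylated in promoters) is commonly operationalized with length, GC-content, and observed/expected-CpG thresholds in the style of the Gardiner-Garden and Frommer criteria. A minimal sketch; the thresholds and the two example sequences are illustrative, not taken from this chapter:

```python
def cpg_stats(seq):
    """Return (GC fraction, observed/expected CpG ratio) for a DNA window."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")                  # observed CpG dinucleotides
    gc_frac = (c + g) / n
    # expected CpG count if C and G were independently distributed
    expected = (c * g) / n if c and g else 0
    obs_exp = cpg / expected if expected else 0.0
    return gc_frac, obs_exp

def looks_like_cpg_island(seq, min_len=200, min_gc=0.5, min_ratio=0.6):
    """Gardiner-Garden & Frommer-style criteria (illustrative thresholds)."""
    if len(seq) < min_len:
        return False
    gc_frac, obs_exp = cpg_stats(seq)
    return gc_frac > min_gc and obs_exp > min_ratio

# A CpG-rich promoter-like stretch vs. an AT-rich stretch (both 240 bp)
island = "CGGCGCGGCCGCGTACGCGGGCGC" * 10   # CpG-dense
desert = "ATTATAAATTTACGATATATATTA" * 10   # CpG-poor
print(looks_like_cpg_island(island))       # True
print(looks_like_cpg_island(desert))       # False
```

Real CpG-island callers scan a sliding window over genomic sequence; this sketch applies the same statistics to a single window.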
In a given cell, the choice of which chromosome is inactivated occurs randomly in humans. But once the maternal or paternal X chromosome is inactivated, it will remain inactive, and this information is transmitted with each cell division. The X-inactive specific transcript (Xist) gene encodes a large noncoding RNA that mediates the silencing of the X chromosome from which it is transcribed by coating it with Xist RNA. The inactive X chromosome is highly methylated and has low levels of histone acetylation. Epigenetic gene inactivation also occurs on selected chromosomal regions of autosomes, a phenomenon referred to as genomic imprinting. Through this mechanism, a small subset of genes is only expressed in a monoallelic fashion. Imprinting is heritable and leads to the preferential expression of one of the parental alleles, which deviates from the usual biallelic expression seen for the majority of genes. Remarkably, imprinting can be limited to a subset of tissues. Imprinting is mediated through DNA methylation of one of the alleles. The epigenetic marks on imprinted genes are maintained throughout life, but during zygote formation, they are activated or inactivated in a sex-specific manner (imprint reset) (Fig. 82-8), which allows a differential expression pattern in the fertilized egg and the subsequent mitotic divisions. Appropriate expression of imprinted genes is important for normal development and cellular functions. Imprinting defects and uniparental disomy, which is the inheritance of two chromosomes or chromosomal regions from the same parent, are the cause of several developmental disorders such as Beckwith-Wiedemann syndrome, Silver-Russell syndrome, Angelman’s syndrome, and Prader-Willi syndrome (see below). Monoallelic loss-of-function mutations in the GNAS1 gene lead to Albright’s hereditary osteodystrophy (AHO). 
Paternal transmission of GNAS1 mutations leads to an isolated AHO phenotype (pseudopseudohypoparathyroidism), whereas maternal transmission leads to AHO in combination with hormone resistance to parathyroid hormone, thyrotropin, and gonadotropins (pseudohypoparathyroidism type IA). These phenotypic differences are explained by tissue-specific imprinting of the GNAS1 gene, which is expressed primarily from the maternal allele in the thyroid, gonadotropes, and the proximal renal tubule. In most other tissues, the GNAS1 gene is expressed biallelically. In patients with isolated renal resistance to parathyroid hormone (pseudohypoparathyroidism type IB), defective imprinting of the GNAS1 gene results in decreased Gsα expression in the proximal renal tubules. 
Rett's syndrome is an X-linked dominant disorder resulting in developmental regression and stereotypic hand movements in affected girls. It is caused by mutations in the MECP2 gene, which encodes a methyl-binding protein. The ensuing aberrant methylation results in abnormal gene expression in neurons, which are otherwise normally developed. 
Remarkably, epigenetic differences also occur among monozygotic twins. Although twins are epigenetically indistinguishable during the early years of life, older monozygotic twins exhibit differences in the overall content and genomic distribution of DNA methylation and histone acetylation, which would be expected to alter gene expression in various tissues. 
FIGURE 82-7 Epigenetic modifications of DNA and histones. Methylation of cytosine residues is associated with gene silencing. Methylation of certain genomic regions is inherited (imprinting), and it is involved in the silencing of one of the two X chromosomes in females (X-inactivation). Alterations in methylation can also be acquired, e.g., in cancer cells. Covalent posttranslational modifications of histones play an important role in altering DNA accessibility and chromatin structure and hence in regulating transcription. Histones can be reversibly modified in their amino-terminal tails, which protrude from the nucleosome core particle, by acetylation of lysine, phosphorylation of serine, methylation of lysine and arginine residues, and sumoylation. Acetylation of histones by histone acetylases (HATs), e.g., leads to unwinding of chromatin and accessibility to transcription factors. Conversely, deacetylation by histone deacetylases (HDACs) results in a compact chromatin structure and silencing of transcription. 
In cancer, the epigenome is characterized by simultaneous losses and gains of DNA methylation in different genomic regions, as well as repressive histone modifications. Hyper- and hypomethylation are associated with mutations in genes that control DNA methylation. Hypomethylation is thought to remove normal control mechanisms that prevent expression of repressed DNA regions. It is also associated with genomic instability. Hypermethylation, in contrast, results in the silencing of CpG islands in promoter regions of genes, including tumor-suppressor genes. Epigenetic alterations are considered to be more easily reversible than genetic changes, and modification of the epigenome with demethylating agents and histone deacetylase inhibitors is being explored in clinical trials. 
Several organisms have been studied extensively as genetic models, including M. musculus (mouse), D. melanogaster (fruit fly), C. elegans (nematode), S. cerevisiae (baker's yeast), and E. coli (colonic bacterium). The ability to use these evolutionarily distant organisms as genetic models that are relevant to human physiology reflects a surprising conservation of genetic pathways and gene function. Transgenic mouse models have been particularly valuable, because many human and mouse genes exhibit similar structure and function and because manipulation of the mouse genome is relatively straightforward compared to that of other mammalian species. Transgenic strategies in mice can be divided into two main approaches: (1) expression of a gene by random insertion into the genome, and (2) deletion or targeted mutagenesis of a gene by homologous recombination with the native endogenous gene (knock-out, knock-in). Previous versions of this chapter provide more detail about the technical principles underlying the development of genetically modified animals. Several databases provide comprehensive information about natural and transgenic animal models, the associated phenotypes, and integrated genetic, genomic, and biologic data (Table 82-1). 
FIGURE 82-8 A few genomic regions are imprinted in a parent-specific fashion. The unmethylated chromosomal regions are actively expressed, whereas the methylated regions are silenced. In the germline, the imprint is reset in a parent-specific fashion: both chromosomes are unmethylated in the maternal (mat) germline and methylated in the paternal (pat) germline. In the zygote, the resulting imprinting pattern is identical with the pattern in the somatic cells of the parents. 
TRANSMISSION OF GENETIC DISEASE 
Origins and Types of Mutations A mutation can be defined as any change in the primary nucleotide sequence of DNA regardless of its functional consequences. Some mutations may be lethal, others are less deleterious, and some may confer an evolutionary advantage. Mutations can occur in the germline (sperm or oocytes); these can be transmitted to progeny. Alternatively, mutations can occur during embryogenesis or in somatic tissues. Mutations that occur during development lead to mosaicism, a situation in which tissues are composed of cells with different genetic constitutions. If the germline is mosaic, a mutation can be transmitted to some progeny but not others, which sometimes leads to confusion in assessing the pattern of inheritance. Somatic mutations that do not affect cell survival can sometimes be detected because of variable phenotypic effects in tissues (e.g., pigmented lesions in McCune-Albright syndrome). Other somatic mutations are associated with neoplasia because they confer a growth advantage to cells. Epigenetic events may also influence gene expression or facilitate genetic damage. With the exception of triplet nucleotide repeats, which can expand (see below), mutations are usually stable. 
Mutations are structurally diverse: they can involve the entire genome, as in triploidy (one extra set of chromosomes), or gross numerical or structural alterations in chromosomes or individual genes (Chap. 83e). Large deletions may affect a portion of a gene or an entire gene, or, if several genes are involved, they may lead to a contiguous gene syndrome. Unequal crossing-over between homologous genes can result in fusion gene mutations, as illustrated by color blindness. Mutations involving single nucleotides are referred to as point mutations. Substitutions are called transitions if a purine is replaced by another purine base (A ↔ G) or if a pyrimidine is replaced by another pyrimidine (C ↔ T). Changes from a purine to a pyrimidine, or vice versa, are referred to as transversions. If the DNA sequence change occurs in a coding region and alters an amino acid, it is called a missense mutation. Depending on the functional consequences of such a missense mutation, amino acid substitutions in different regions of the protein can lead to distinct phenotypes. Mutations can occur in all domains of a gene (Fig. 82-9). A point mutation occurring within the coding region leads to an amino acid substitution if the codon is altered (Fig. 82-10). 
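The mutation vocabulary just introduced (transition vs. transversion; synonymous, missense, and nonsense changes) can be made concrete in a few lines. A sketch using a deliberately partial standard codon table, containing only the codons needed for the examples; the classic sickle cell Glu→Val change serves as the missense case:

```python
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

# Partial standard genetic code -- only the codons used below (illustrative)
CODON_TABLE = {
    "GAG": "Glu", "GTG": "Val",   # the sickle cell beta-globin site
    "TGG": "Trp", "TGA": "Stop",  # a classic nonsense change
    "CTT": "Leu", "CTC": "Leu",   # a synonymous change
}

def substitution_type(ref_base, alt_base):
    """Transition: purine<->purine or pyrimidine<->pyrimidine; else transversion."""
    same_class = ({ref_base, alt_base} <= PURINES or
                  {ref_base, alt_base} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

def coding_effect(ref_codon, alt_codon):
    """Classify a single-codon change as synonymous, nonsense, or missense."""
    ref_aa, alt_aa = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    if ref_aa == alt_aa:
        return "synonymous"
    if alt_aa == "Stop":
        return "nonsense"
    return f"missense ({ref_aa}->{alt_aa})"

# The sickle cell mutation: GAG -> GTG (an A -> T change at codon position 2)
print(substitution_type("A", "T"))     # transversion
print(coding_effect("GAG", "GTG"))     # missense (Glu->Val)
print(coding_effect("TGG", "TGA"))     # nonsense
print(coding_effect("CTT", "CTC"))     # synonymous
```

A production tool would use the full 64-codon table and handle frameshifts and splice-site changes separately; the point here is only the classification logic.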
FIGURE 82-9 Point mutations causing β thalassemia as an example of allelic heterogeneity. The β-globin gene is located in the globin gene cluster. Point mutations can be located in the promoter, the CAP site, the 5′-untranslated region, the initiation codon, each of the three exons, the introns, or the polyadenylation signal. Many mutations introduce missense or nonsense mutations, whereas others cause defective RNA splicing. Not shown here are deletion mutations of the β-globin gene or larger deletions of the globin locus that can also result in thalassemia. ▼, promoter mutations; *, CAP site; •, 5′UTR; 1, initiation codon; ♦, defective RNA processing; ✦, missense and nonsense mutations; A, poly A signal. 
Point mutations that introduce a premature stop codon result in a truncated protein. Large deletions may affect a portion of a gene or an entire gene, whereas small deletions and insertions alter the reading frame if they do not represent a multiple of three bases. These "frameshift" mutations lead to an entirely altered carboxy terminus. Mutations in intronic sequences or in exon junctions may destroy or create splice donor or splice acceptor sites. Mutations may also be found in the regulatory sequences of genes, resulting in reduced or enhanced gene transcription. Certain DNA sequences are particularly susceptible to mutagenesis. Successive pyrimidine residues (e.g., T-T or C-C) are subject to the formation of ultraviolet light–induced photoadducts. If these pyrimidine dimers are not repaired by the nucleotide excision repair pathway, mutations will be introduced after DNA synthesis. The dinucleotide C-G, or CpG, is also a hot spot for a specific type of mutation. In this case, methylation of the cytosine is associated with an enhanced rate of deamination to uracil, which is then replaced with thymine. 
This C → T transition (or G → A on the opposite strand) accounts for at least one-third of point mutations associated with polymorphisms and mutations. In addition to the fact that certain types of mutations (C → T or G → A) are relatively common, the nature of the genetic code also results in overrepresentation of certain amino acid substitutions. 
Polymorphisms are sequence variations that have a frequency of at least 1%. Usually, they do not result in a perceptible phenotype. Often they consist of single base-pair substitutions that do not alter the protein coding sequence because of the degenerate nature of the genetic code (synonymous polymorphism), although it is possible that some might alter mRNA stability, translation, or the amino acid sequence (nonsynonymous polymorphism) (Fig. 82-10). The detection of sequence variants poses a practical problem because it is often unclear whether a variant creates a mutation with functional consequences or a benign polymorphism. In this situation, the sequence alteration is described as a variant of unknown significance (VUS). 
Mutation Rates Mutations represent an important cause of genetic diversity as well as disease. Mutation rates are difficult to determine in humans because many mutations are silent and because testing is often not adequate to detect the phenotypic consequences. Mutation rates vary in different genes but are estimated to occur at a rate of ~10−10/bp per cell division. Germline mutation rates (as opposed to somatic mutation rates) are relevant in the transmission of genetic disease. Because the population of oocytes is established very early in development, only ~20 cell divisions are required for completed oogenesis, whereas spermatogenesis involves ~30 divisions by the time of puberty and 20 cell divisions each year thereafter. Consequently, the probability of acquiring new point mutations is much greater in the male germline than the female germline, in which rates of aneuploidy are increased (Chap. 83e). Thus, the incidence of new point mutations in spermatogonia increases with paternal age (e.g., achondroplasia, Marfan's syndrome, neurofibromatosis). It is estimated that about 1 in 10 sperm carries a new deleterious mutation. The rates for new mutations are calculated most readily for autosomal dominant and X-linked disorders and are ~10−5−10−6/locus per generation. Because most monogenic diseases are relatively rare, new mutations account for a significant fraction of cases. This is important in the context of genetic counseling, because a new mutation can be transmitted to the affected individual but does not necessarily imply that the parents are at risk of transmitting the disease to other children. An exception to this is when the new mutation occurs early in germline development, leading to gonadal mosaicism. 
FIGURE 82-10 A. Examples of mutations. The coding strand is shown with the encoded amino acid sequence. B. Chromatograms of sequence analyses after amplification of genomic DNA by polymerase chain reaction. 
Unequal Crossing-Over Normally, DNA recombination in germ cells occurs with remarkable fidelity to maintain the precise junction sites for the exchanged DNA sequences (Fig. 82-6). However, mispairing of homologous sequences leads to unequal crossover, with gene duplication on one of the chromosomes and gene deletion on the other chromosome. A significant fraction of growth hormone (GH) gene deletions, for example, involve unequal crossing-over (Chap. 402). The GH gene is a member of a large gene cluster that includes a GH variant gene as well as several structurally related chorionic somatomammotropin genes and pseudogenes (highly homologous but functionally inactive relatives of a normal gene). 
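The germline division counts cited in the mutation-rate discussion above (~20 divisions for completed oogenesis; ~30 divisions by puberty plus ~20 per year thereafter in spermatogenesis) imply that the opportunity for new point mutations grows roughly linearly with paternal age. A back-of-envelope sketch; the puberty age of 15 is an assumed round number, not from the text:

```python
def germline_divisions(sex, age, puberty_age=15):
    """Approximate cumulative germline cell divisions, per the text's figures.

    Oogenesis: ~20 divisions, completed early and independent of age.
    Spermatogenesis: ~30 divisions by puberty, then ~20 divisions per year.
    """
    if sex == "female":
        return 20
    return 30 + 20 * max(0, age - puberty_age)

for age in (20, 30, 40, 50):
    print(age, germline_divisions("male", age))

# With a per-division mutation rate of ~1e-10/bp, expected new mutations scale
# linearly with these counts, which is why the male germline contributes
# disproportionately to new point mutations with advancing paternal age.
```

By this arithmetic a 40-year-old male germline has undergone on the order of 500 divisions versus ~20 in the female germline, consistent with the paternal-age effect described for achondroplasia, Marfan's syndrome, and neurofibromatosis.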
Because such gene clusters contain multiple homologous DNA sequences arranged in tandem, they are particularly prone to undergo recombination and, consequently, gene duplication or deletion. On the other hand, duplication of the PMP22 gene because of unequal crossing-over results in increased gene dosage and type 1A Charcot-Marie-Tooth disease. Unequal crossing-over resulting in deletion of PMP22 causes a distinct neuropathy called hereditary neuropathy with liability to pressure palsies (Chap. 459). Glucocorticoid-remediable aldosteronism (GRA) is caused by a gene fusion or rearrangement involving the genes that encode aldosterone synthase (CYP11B2) and steroid 11β-hydroxylase (CYP11B1), normally arranged in tandem on chromosome 8q. These two genes are 95% identical, predisposing to gene duplication and deletion by unequal crossing-over. The rearranged gene product contains the regulatory regions of 11β-hydroxylase fused to the coding sequence of aldosterone synthase. Consequently, the latter enzyme is expressed in the adrenocorticotropic hormone (ACTH)–dependent zona fasciculata of the adrenal gland, resulting in overproduction of mineralocorticoids and hypertension (Chap. 406). Gene conversion refers to a nonreciprocal exchange of homologous genetic information. It has been used to explain how an internal portion of a gene is replaced by a homologous segment copied from another allele or locus; these genetic alterations may range from a few nucleotides to a few thousand nucleotides. As a result of gene conversion, it is possible for short DNA segments of two chromosomes to be identical, even though these sequences are distinct in the parents. A practical consequence of this phenomenon is that nucleotide substitutions can occur during gene conversion between related genes, often altering the function of the gene. In disease states, gene conversion often involves intergenic exchange of DNA between a gene and a related pseudogene. 
For example, the 21-hydroxylase gene (CYP21A2) is adjacent to a nonfunctional pseudogene (CYP21A1P). Many of the nucleotide substitutions that are found in the CYP21A2 gene in patients with congenital adrenal hyperplasia correspond to sequences that are present in the CYP21A1P pseudogene, suggesting gene conversion as one cause of mutagenesis. In addition, mitotic gene conversion has been suggested as a mechanism to explain revertant mosaicism, in which an inherited mutation is "corrected" in certain cells. For example, patients with autosomal recessive generalized atrophic benign epidermolysis bullosa have acquired reverse mutations in one of the two mutated COL17A1 alleles, leading to clinically unaffected patches of skin. 
Insertions and Deletions Although many instances of insertions and deletions occur as a consequence of unequal crossing-over, there is also evidence for internal duplication, inversion, or deletion of DNA sequences. The fact that certain deletions or insertions appear to occur repeatedly as independent events indicates that specific regions within the DNA sequence predispose to these errors. For example, certain regions of the DMD gene, which encodes dystrophin, appear to be hot spots for deletions and result in muscular dystrophy (Chap. 462e). Some regions within the human genome are rearrangement hot spots and lead to CNVs. 
Errors in DNA Repair Because mutations caused by defects in DNA repair accumulate as somatic cells divide, these types of mutations are particularly important in the context of neoplastic disorders (Chap. 102e). Several genetic disorders involving DNA repair enzymes underscore their importance. Patients with xeroderma pigmentosum have defects in DNA damage recognition or in the nucleotide excision and repair pathway (Chap. 105). Exposed skin is dry and pigmented and is extraordinarily sensitive to the mutagenic effects of ultraviolet irradiation. 
More than 10 different genes have been shown to cause the different forms of xeroderma pigmentosum. This finding is consistent with the earlier classification of this disease into different complementation groups, in which normal function is rescued by the fusion of cells derived from two different forms of xeroderma pigmentosum. Ataxia telangiectasia causes large telangiectatic lesions of the face, cerebellar ataxia, immunologic defects, and hypersensitivity to ionizing radiation (Chap. 450). The discovery of the ataxia telangiectasia mutated (ATM) gene reveals that it is homologous to genes involved in DNA repair and control of cell cycle checkpoints. Mutations in the ATM gene give rise to defects in meiosis as well as increased susceptibility to damage from ionizing radiation. Fanconi's anemia is also associated with an increased risk of multiple acquired genetic abnormalities. It is characterized by diverse congenital anomalies and a strong predisposition to develop aplastic anemia and acute myelogenous leukemia (Chap. 132). Cells from these patients are susceptible to chromosomal breaks caused by a defect in genetic recombination. At least 13 different complementation groups have been identified, and the loci and genes associated with Fanconi's anemia have been cloned. Hereditary nonpolyposis colon cancer (HNPCC, or Lynch's syndrome) is characterized by autosomal dominant transmission of colon cancer, young age (<50 years) of presentation, predisposition to lesions in the proximal large bowel, and associated malignancies such as uterine cancer and ovarian cancer. HNPCC is predominantly caused by mutations in one of several different mismatch repair (MMR) genes, including MutS homologue 2 (MSH2), MutL homologue 1 (MLH1), MSH6, PMS1, and PMS2 (Chap. 110). These proteins are involved in the detection of nucleotide mismatches and in the recognition of slipped-strand trinucleotide repeats. 
Germline mutations in these genes lead to microsatellite instability and a high mutation rate in colon cancer. Genetic screening tests for this disorder are now being used for families considered to be at risk (Chap. 84). Recognition of HNPCC allows early screening with colonoscopy and the implementation of prevention strategies using nonsteroidal anti-inflammatory drugs. 
Unstable DNA Sequences Trinucleotide repeats may be unstable and expand beyond a critical number. Mechanistically, the expansion is thought to be caused by unequal recombination and slipped mispairing. A premutation represents a small increase in trinucleotide copy number. In subsequent generations, the expanded repeat may increase further in length and result in an increasingly severe phenotype, a process called dynamic mutation (see below for discussion of anticipation). Trinucleotide expansion was first recognized as a cause of the fragile X syndrome, one of the most common causes of intellectual disability. Other disorders arising from a similar mechanism include Huntington's disease (Chap. 448), X-linked spinobulbar muscular atrophy (Chap. 452), and myotonic dystrophy (Chap. 462e). Malignant cells are also characterized by genetic instability, indicating a breakdown in mechanisms that regulate DNA repair and the cell cycle. 
Functional Consequences of Mutations Functionally, mutations can be broadly classified as gain-of-function and loss-of-function mutations. Gain-of-function mutations are typically dominant (i.e., they result in phenotypic alterations when a single allele is affected). Inactivating mutations are usually recessive, and an affected individual is homozygous or compound heterozygous (i.e., carrying two different mutant alleles of the same gene) for the disease-causing mutations. Alternatively, mutation in a single allele can result in haploinsufficiency, a situation in which one normal allele is not sufficient to maintain a normal phenotype. 
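The dynamic-mutation mechanism described above for trinucleotide repeats amounts, computationally, to tracking the length of the longest perfect repeat run in a sequence. A minimal sketch with made-up sequences; the 20- and 45-repeat tracts are purely illustrative, not real gene sequence:

```python
def longest_triplet_run(seq, unit):
    """Length (in repeat units) of the longest perfect run of `unit` in seq."""
    best = run = 0
    i = 0
    while i + len(unit) <= len(seq):
        if seq[i:i + len(unit)] == unit:
            run += 1
            best = max(best, run)
            i += len(unit)          # extend the current run in-register
        else:
            run = 0
            i += 1                  # slide one base so no frame is missed
    return best

# Illustrative tracts: a modest repeat length vs. an expanded one
normal   = "GCA" + "CAG" * 20 + "TTC"
expanded = "GCA" + "CAG" * 45 + "TTC"
print(longest_triplet_run(normal, "CAG"))    # 20
print(longest_triplet_run(expanded, "CAG"))  # 45
```

Diagnostic repeat sizing is done by PCR or Southern blotting rather than sequence scanning, but the same "count the longest run" logic underlies the normal/premutation/full-mutation thresholds used for disorders such as fragile X and Huntington's disease.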
Haploinsufficiency is a commonly observed mechanism in diseases associated with mutations in transcription factors (Table 82-2). Remarkably, the clinical features among patients with an identical mutation in a transcription factor often vary significantly. One mechanism underlying this variability is the influence of modifier genes. Haploinsufficiency can also affect the expression of rate-limiting enzymes. For example, haploinsufficiency in enzymes involved in heme synthesis can cause porphyrias (Chap. 430). An increase in dosage of a gene product may also result in disease, as illustrated by the duplication of the DAX1 gene in dosage-sensitive sex reversal (Chap. 410). Mutation in a single allele can also result in loss of function due to a dominant-negative effect. In this case, the mutated allele interferes with the function of the normal gene product by one of several different mechanisms: (1) a mutant protein may interfere with the function of a multimeric protein complex, as illustrated by mutations in the type 1 collagen genes (COL1A1, COL1A2) in osteogenesis imperfecta (Chap. 427); (2) a mutant protein may occupy binding sites on proteins or promoter response elements, as illustrated by thyroid hormone resistance, a disorder in which an inactivated thyroid hormone receptor β binds to target genes and functions as an antagonist of normal receptors (Chap. 405); or (3) a mutant protein can be cytotoxic, as in α1-antitrypsin deficiency (Chap. 314) or autosomal dominant neurohypophyseal diabetes insipidus (Chap. 404), in which the abnormally folded proteins are trapped within the endoplasmic reticulum and ultimately cause cellular damage. 
Genotype and Phenotype • Alleles, Genotypes, and Haplotypes An observed trait is referred to as a phenotype; the genetic information defining the phenotype is called the genotype. Alternative forms of a gene or a genetic marker are referred to as alleles. 
Alleles may be polymorphic variants of nucleic acids that have no apparent effect on gene expression or function. In other instances, these variants may have subtle effects on gene expression, thereby conferring adaptive advantages associated with genetic diversity. On the other hand, allelic variants may reflect mutations that clearly alter the function of a gene product. The common Glu6Val (E6V) sickle cell mutation in the β-globin gene and the ΔF508 deletion of phenylalanine (F) in the CFTR gene are examples of allelic variants of these genes that result in disease. Because each individual has two copies of each chromosome (one inherited from the mother and one inherited from the father), he or she can have only two alleles at a given locus. However, there can be many different alleles in the population. The normal or common allele is usually referred to as wild type. When alleles at a given locus are identical, the individual is homozygous. Inheriting identical copies of a mutant allele occurs in many autosomal recessive disorders, particularly in circumstances of consanguinity or isolated populations. If the alleles are different on the maternal and the paternal copy of the gene, the individual is heterozygous at this locus (Fig. 82-10). If two different mutant alleles are inherited at a given locus, the individual is said to be a compound heterozygote. Hemizygous is used to describe males with a mutation in an X chromosomal gene or a female with a loss of one X chromosomal locus. Genotypes describe the specific alleles at a particular locus. For example, there are three common alleles (E2, E3, E4) of the apolipoprotein E (APOE) gene. The genotype of an individual can therefore be described as APOE3/4 or APOE4/4 or any other variant. These designations indicate which alleles are present on the two chromosomes in the APOE gene at locus 19q13.2. 
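The allele and genotype definitions above can be expressed compactly: a genotype is an unordered pair of alleles at a locus, and a mating transmits one allele from each parent. A hypothetical APOE-style sketch; the allele names follow the text, but the helper functions are illustrative, not a clinical tool:

```python
from collections import Counter
from itertools import product

def zygosity(genotype):
    """Classify an unordered pair of alleles at one autosomal locus."""
    a, b = genotype
    return "homozygous" if a == b else "heterozygous"

def cross(mother, father):
    """Offspring genotype frequencies from one allele drawn per parent."""
    counts = Counter(tuple(sorted(pair)) for pair in product(mother, father))
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# An E3/E4 x E3/E4 mating at the APOE locus
print(zygosity(("E3", "E4")))            # heterozygous
print(cross(("E3", "E4"), ("E3", "E4")))
# {('E3', 'E3'): 0.25, ('E3', 'E4'): 0.5, ('E4', 'E4'): 0.25}
```

Sorting each allele pair makes the genotype unordered, so E3/E4 and E4/E3 offspring are counted as the same genotype, which yields the familiar 1:2:1 Mendelian ratio for a heterozygote-by-heterozygote cross.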
In other cases, the genotype might be assigned arbitrary numbers (e.g., 1/2) or letters (e.g., B/b) to distinguish different alleles. A haplotype refers to a group of alleles that are closely linked together at a genomic locus (Fig. 82-4). Haplotypes are useful for tracking the transmission of genomic segments within families and for detecting evidence of genetic recombination, if the crossover event occurs between the alleles (Fig. 82-6). As an example, various alleles at the human leukocyte antigen (HLA) locus on chromosome 6p are used to establish haplotypes associated with certain disease states. For example, 21-hydroxylase deficiency, complement deficiency, and hemochromatosis are each associated with specific HLA haplotypes. It is now recognized that these genes lie in close proximity to the HLA locus, which explains why HLA associations were identified even before the disease genes were cloned and localized. In other cases, specific HLA associations with diseases such as ankylosing spondylitis (HLA-B27) or type 1 diabetes mellitus (HLA-DR4) reflect the role of specific HLA allelic variants in susceptibility to these autoimmune diseases. The characterization of common SNP haplotypes in numerous populations from different parts of the world through the HapMap Project is providing a novel tool for association studies designed to detect genes involved in the pathogenesis of complex disorders (Table 82-1). The presence or absence of certain haplotypes may also become relevant for the customized choice of medical therapies (pharmacogenomics) or for preventive strategies. Genotype-phenotype correlation describes the association of a specific mutation and the resulting phenotype. The phenotype may differ depending on the location or type of the mutation in some genes. 
For example, in von Hippel–Lindau disease, an autosomal dominant multisystem disease that can include renal cell carcinoma, hemangioblastomas, and pheochromocytomas, among others, the phenotype varies greatly, and identification of the specific mutation can be clinically useful for predicting the phenotypic spectrum.

Allelic Heterogeneity

Allelic heterogeneity refers to the fact that different mutations in the same genetic locus can cause an identical or similar phenotype. For example, many different mutations of the β-globin locus can cause β thalassemia (Table 82-3) (Fig. 82-9). In essence, allelic heterogeneity reflects the fact that many different mutations are capable of altering protein structure and function. For this reason, maps of inactivating mutations in genes usually show a near-random distribution. Exceptions include (1) a founder effect, in which a particular mutation that does not affect reproductive capacity can be traced to a single individual; (2) “hot spots” for mutations, in which the nature of the DNA sequence predisposes to a recurring mutation; and (3) localization of mutations to certain domains that are particularly critical for protein function. Allelic heterogeneity creates a practical problem for genetic testing because one must often examine the entire genetic locus for mutations, which can differ in each patient. For example, there are currently 1963 reported mutations in the CFTR gene (Fig. 82-3). Mutational analysis may initially focus on a panel of mutations that are particularly frequent (often taking the ethnic background of the patient into account), but a negative result does not exclude the presence of a mutation elsewhere in the gene. One should also be aware that mutational analyses generally focus on the coding region of a gene without considering regulatory and intronic regions. Because disease-causing mutations may be located outside the coding regions, negative results need to be interpreted with caution.
The advent of more comprehensive sequencing technologies greatly facilitates concomitant mutational analyses of several genes after targeted enrichment, or even mutational analysis of the whole exome or genome. However, comprehensive sequencing can result in significant diagnostic challenges because the detection of a sequence alteration alone is not always sufficient to establish that it has a causal role.

Phenotypic Heterogeneity

Phenotypic heterogeneity occurs when more than one phenotype is caused by allelic mutations (e.g., different mutations in the same gene) (Table 82-3). For example, laminopathies are monogenic multisystem disorders that result from mutations in the LMNA gene, which encodes the nuclear lamins A and C. Twelve autosomal dominant and four autosomal recessive disorders are caused by mutations in the LMNA gene. They include several forms of lipodystrophies, Emery-Dreifuss muscular dystrophy, progeria syndromes, a form of neuronal Charcot-Marie-Tooth disease (type 2B1), and a group of overlapping syndromes. Remarkably, hierarchical cluster analysis has revealed that the phenotypes vary depending on the position of the mutation (genotype-phenotype correlation). Similarly, identical mutations in the FGFR2 gene can result in very distinct phenotypes: Crouzon’s syndrome (craniofacial synostosis) or Pfeiffer’s syndrome (acrocephalopolysyndactyly).

Locus or Nonallelic Heterogeneity and Phenocopies

Nonallelic or locus heterogeneity refers to the situation in which a similar disease phenotype results from mutations at different genetic loci (Table 82-3). This often occurs when more than one gene product produces different subunits of an interacting complex or when different genes are involved in the same genetic cascade or physiologic pathway. For example, osteogenesis imperfecta can arise from mutations in two different procollagen genes (COL1A1 or COL1A2) that are located on different chromosomes, and at least eight other genes (Chap. 427).
The effects of inactivating mutations in these two genes are similar because the protein products comprise different subunits of the helical collagen fiber. Similarly, muscular dystrophy syndromes can be caused by mutations in various genes, consistent with the fact that the disease can be transmitted in an X-linked (Duchenne or Becker), autosomal dominant (limb-girdle muscular dystrophy type 1), or autosomal recessive (limb-girdle muscular dystrophy type 2) manner (Chap. 462e). Mutations in the X-linked DMD gene, which encodes dystrophin, are the most common cause of muscular dystrophy. This feature reflects the large size of the gene as well as the fact that the phenotype is expressed in hemizygous males because they have only a single copy of the X chromosome. Dystrophin is associated with a large protein complex linked to the membrane-associated cytoskeleton in muscle. Mutations in several different components of this protein complex can also cause muscular dystrophy syndromes. Although the phenotypic features of some of these disorders are distinct, the phenotypic spectrum caused by mutations in different genes overlaps, thereby leading to nonallelic heterogeneity. It should be noted that mutations in dystrophin also cause allelic heterogeneity. For example, mutations in the DMD gene can cause either Duchenne’s or the less severe Becker’s muscular dystrophy, depending on the severity of the protein defect. Recognition of nonallelic heterogeneity is important for several reasons: (1) the ability to identify disease loci in linkage studies is reduced by including patients with similar phenotypes but different genetic disorders; (2) genetic testing is more complex because several different genes need to be considered along with the possibility of different mutations in each of the candidate genes; and (3) novel information is gained about how genes or proteins interact, providing unique insights into molecular physiology.
Phenocopies refer to circumstances in which nongenetic conditions mimic a genetic disorder. For example, features of toxin- or drug-induced neurologic syndromes can resemble those seen in Huntington’s disease, and vascular causes of dementia share phenotypic features with familial forms of Alzheimer’s dementia (Chap. 448). As in nonallelic heterogeneity, the presence of phenocopies has the potential to confound linkage studies and genetic testing. Patient history and subtle differences in phenotype can often provide clues that distinguish these disorders from related genetic conditions.

Variable Expressivity and Incomplete Penetrance

The same genetic mutation may be associated with a phenotypic spectrum in different affected individuals, thereby illustrating the phenomenon of variable expressivity. This may include different manifestations of a disorder variably involving different organs (e.g., multiple endocrine neoplasia [MEN]), the severity of the disorder (e.g., cystic fibrosis), or the age of disease onset (e.g., Alzheimer’s dementia). MEN 1 illustrates several of these features. In this autosomal dominant tumor syndrome, affected individuals carry an inherited inactivating germline mutation. After somatic inactivation of the alternate allele, they can develop tumors of the parathyroid gland, endocrine pancreas, and the pituitary gland (Chap. 408). However, the pattern of tumors in the different glands, the age at which tumors develop, and the types of hormones produced vary among affected individuals, even within a given family. In this example, the phenotypic variability arises, in part, because of the requirement for a second somatic mutation in the normal copy of the MEN1 gene, as well as the large array of different cell types that are susceptible to the effects of MEN1 gene mutations.
In part, variable expression reflects the influence of modifier genes, or genetic background, on the effects of a particular mutation. Even in identical twins, in whom the genetic constitution is essentially the same, one can occasionally see variable expression of a genetic disease. Interactions with the environment can also influence the course of a disease. For example, the manifestations and severity of hemochromatosis can be influenced by iron intake (Chap. 428), and the course of phenylketonuria is affected by exposure to phenylalanine in the diet (Chap. 434e). Other metabolic disorders, such as hyperlipidemias and porphyria, also fall into this category. Many mechanisms, including genetic effects and environmental influences, can therefore lead to variable expressivity. In genetic counseling, it is particularly important to recognize this variability, because one cannot always predict the course of disease, even when the mutation is known. Penetrance refers to the proportion of individuals with a mutant genotype who express the phenotype. If all carriers of a mutant allele express the phenotype, penetrance is complete, whereas it is said to be incomplete or reduced if some individuals do not exhibit features of the phenotype. Dominant conditions with incomplete penetrance are characterized by skipping of generations, with unaffected carriers transmitting the mutant gene. For example, hypertrophic obstructive cardiomyopathy (HCM) caused by mutations in the myosin-binding protein C gene is a dominant disorder with clinical features in only a subset of patients who carry the mutation (Chap. 283). Patients who have the mutation but no evidence of the disease can still transmit the disorder to subsequent generations. In many conditions with postnatal onset, the proportion of gene carriers who are affected varies with age. Thus, when describing penetrance, one has to specify age.
For example, for disorders such as Huntington’s disease or familial amyotrophic lateral sclerosis, which present later in life, the rate of penetrance is influenced by the age at which the clinical assessment is performed. Imprinting can also modify the penetrance of a disease. For example, in patients with Albright’s hereditary osteodystrophy, mutations in the Gsα subunit (GNAS1 gene) are expressed clinically only in individuals who inherit the mutation from their mother (Chap. 424).

Sex-Influenced Phenotypes

Certain mutations affect males and females quite differently. In some instances, this is because the gene resides on the X or Y sex chromosomes (X-linked disorders and Y-linked disorders). As a result, the phenotype of mutated X-linked genes will be expressed fully in males but variably in heterozygous females, depending on the degree of X-inactivation and the function of the gene. For example, most heterozygous female carriers of factor VIII deficiency (hemophilia A) are asymptomatic because sufficient factor VIII is produced to prevent a defect in coagulation (Chap. 141). On the other hand, some females heterozygous for the X-linked lipid storage defect caused by α-galactosidase A deficiency (Fabry’s disease) experience mild manifestations of painful neuropathy, as well as other features of the disease (Chap. 432e). Because only males have a Y chromosome, mutations in genes such as SRY, which causes male-to-female sex reversal, or DAZ (deleted in azoospermia), which causes abnormalities of spermatogenesis, are unique to males (Chap. 410). Other diseases are expressed in a sex-limited manner because of the differential function of the gene product in males and females. Activating mutations in the luteinizing hormone receptor cause dominant male-limited precocious puberty in boys (Chap. 411).
The phenotype is unique to males because activation of the receptor induces testosterone production in the testis, whereas it is functionally silent in the immature ovary. Biallelic inactivating mutations of the follicle-stimulating hormone (FSH) receptor cause primary ovarian failure in females because the follicles do not develop in the absence of FSH action. In contrast, affected males have a more subtle phenotype, because testosterone production is preserved (allowing sexual maturation) and spermatogenesis is only partially impaired (Chap. 411). In congenital adrenal hyperplasia, most commonly caused by 21-hydroxylase deficiency, cortisol production is impaired and ACTH stimulation of the adrenal gland leads to increased production of androgenic precursors (Chap. 406). In females, the increased androgen level causes ambiguous genitalia, which can be recognized at the time of birth. In males, the diagnosis may be made on the basis of adrenal insufficiency at birth, because the increased adrenal androgen level does not alter sexual differentiation, or later in childhood, because of the development of precocious puberty. Hemochromatosis is more common in males than in females, presumably because of differences in dietary iron intake and losses associated with menstruation and pregnancy in females (Chap. 428).

Chromosomal Disorders

Chromosomal or cytogenetic disorders are caused by numerical or structural aberrations in chromosomes. For a detailed discussion of disorders of chromosome number and structure, see Chap. 83e. Deviations in chromosome number are common causes of abortions, developmental disorders, and malformations. Contiguous gene syndromes (e.g., large deletions affecting several genes) have been useful for identifying the location of new disease-causing genes.
Because of the variable size of gene deletions in different patients, a systematic comparison of phenotypes and locations of deletion breakpoints allows positions of particular genes to be mapped within the critical genomic region.

Monogenic Mendelian Disorders

Monogenic human diseases are frequently referred to as Mendelian disorders because they obey the principles of genetic transmission originally set forth in Gregor Mendel’s classic work. The continuously updated OMIM catalogue lists several thousand of these disorders and provides information about the clinical phenotype, molecular basis, allelic variants, and pertinent animal models (Table 82-1). The mode of inheritance for a given phenotypic trait or disease is determined by pedigree analysis. All affected and unaffected individuals in the family are recorded in a pedigree using standard symbols (Fig. 82-11). The principles of allelic segregation, and the transmission of alleles from parents to children, are illustrated in Fig. 82-12. A trait determined by one dominant (A) and one recessive (a) allele can display three Mendelian modes of inheritance: autosomal dominant, autosomal recessive, and X-linked. About 65% of human monogenic disorders are autosomal dominant, 25% are autosomal recessive, and 5% are X-linked. Genetic testing is now available for many of these disorders and plays an increasingly important role in clinical medicine (Chap. 84).

Autosomal Dominant Disorders

These disorders assume particular relevance because mutations in a single allele are sufficient to cause the disease. In contrast to recessive disorders, in which disease pathogenesis is relatively straightforward because there is loss of gene function, dominant disorders can be caused by various disease mechanisms, many of which are unique to the function of the genetic pathway involved. In autosomal dominant disorders, individuals are affected in successive generations; the disease does not occur in the offspring of unaffected individuals.
Males and females are affected with equal frequency because the defective gene resides on one of the 22 autosomes (Fig. 82-13A). Autosomal dominant mutations alter one of the two alleles at a given locus. Because the alleles segregate randomly at meiosis, the probability that an offspring will be affected is 50%. Unless there is a new germline mutation, an affected individual has an affected parent. Children with a normal genotype do not transmit the disorder. Due to differences in penetrance or expressivity (see above), the clinical manifestations of autosomal dominant disorders may be variable. Because of these variations, it is sometimes challenging to determine the pattern of inheritance. It should be recognized, however, that some individuals acquire a mutated gene from an unaffected parent. De novo germline mutations occur more frequently during later cell divisions in gametogenesis, which explains why siblings are rarely affected. As noted before, new germline mutations occur more frequently in fathers of advanced age. For example, the average age of fathers with new germline mutations that cause Marfan’s syndrome is ~37 years, whereas fathers who transmit the disease by inheritance have an average age of ~30 years.

[Figure 82-11: Standard pedigree symbols, including heterozygous male and female and female carrier of an X-linked trait.]

Autosomal Recessive Disorders

In recessive disorders, the mutated alleles result in a complete or partial loss of function. They frequently involve enzymes in metabolic pathways, receptors, or proteins in signaling cascades. In an autosomal recessive disease, the affected individual, who can be of either sex, is a homozygote or compound heterozygote for a single-gene defect. With a few important exceptions, autosomal recessive diseases are rare and often occur in the context of parental consanguinity.
The relatively high frequency of certain recessive disorders, such as sickle cell anemia, cystic fibrosis, and thalassemia, is partially explained by a selective biologic advantage for the heterozygous state (see below). Although heterozygous carriers of a defective allele are usually clinically normal, they may display subtle differences in phenotype that only become apparent with more precise testing or in the context of certain environmental influences. In sickle cell anemia, for example, heterozygotes are normally asymptomatic. However, in situations of dehydration or diminished oxygen pressure, sickle cell crises can also occur in heterozygotes (Chap. 127). In most instances, an affected individual is the offspring of heterozygous parents. In this situation, there is a 25% chance that the offspring will have a normal genotype, a 50% probability of a heterozygous state, and a 25% risk of homozygosity for the recessive alleles (Figs. 82-10, 82-13B). In the case of one unaffected heterozygous and one affected homozygous parent, the probability of disease increases to 50% for each child. In this instance, the pedigree analysis mimics an autosomal dominant mode of inheritance (pseudodominance). In contrast to autosomal dominant disorders, new mutations in recessive alleles are rarely manifest because they usually result in an asymptomatic carrier state.

[Figure 82-12: Segregation of alleles in the offspring of parents with one dominant (A) and one recessive (a) allele; the distribution of the parental alleles to their offspring depends on the combination present in the parents. Filled symbols = affected individuals.]

[Figure 82-13: (A) Dominant, (B) recessive (including autosomal recessive with pseudodominance), (C) X-linked, and (D) mitochondrial (matrilineal) inheritance.]

X-Linked Disorders

Males have only one X chromosome; consequently, a daughter always inherits her father’s X chromosome in addition to one of her mother’s two X chromosomes.
A son inherits the Y chromosome from his father and one maternal X chromosome. Thus, the characteristic features of X-linked inheritance are (1) the absence of father-to-son transmission, and (2) the fact that all daughters of an affected male are obligate carriers of the mutant allele (Fig. 82-13C). The risk of developing disease due to a mutant X-chromosomal gene differs in the two sexes. Because males have only one X chromosome, they are hemizygous for the mutant allele; thus, they are more likely to develop the mutant phenotype, regardless of whether the mutation is dominant or recessive. A female may be either heterozygous or homozygous for the mutant allele, which may be dominant or recessive. The terms X-linked dominant or X-linked recessive are therefore only applicable to expression of the mutant phenotype in women. In addition, the expression of X-chromosomal genes is influenced by X chromosome inactivation.

Y-Linked Disorders

The Y chromosome has a relatively small number of genes. One such gene, the sex-determining region Y gene (SRY), which encodes the testis-determining factor (TDF), is crucial for normal male development. Normally there is infrequent exchange of sequences on the Y chromosome with the X chromosome. The SRY region is adjacent to the pseudoautosomal region, a chromosomal segment on the X and Y chromosomes with a high degree of homology. A crossing-over event during male meiosis occasionally involves the SRY region and the distal tip of the X chromosome. Translocations can result in XY females with the Y chromosome lacking the SRY gene or XX males harboring the SRY gene on one of the X chromosomes (Chap. 410). Point mutations in the SRY gene may also result in individuals with an XY genotype and an incomplete female phenotype. Most of these mutations occur de novo. Men with oligospermia/azoospermia frequently have microdeletions on the long arm of the Y chromosome that involve one or more of the azoospermia factor (AZF) genes.
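The segregation probabilities quoted in the preceding sections can be recovered by enumerating the equally likely allele combinations at meiosis. The sketch below is illustrative (the function name is invented), with "A" dominant and "a" recessive, and "Xm" marking an X chromosome carrying a mutation:

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Fraction of each offspring genotype for one locus: each parent
    transmits one of its alleles with equal probability."""
    combos = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
    return {g: n / len(combos) for g, n in Counter(combos).items()}

# Autosomal dominant: affected heterozygous (Aa) x unaffected (aa) parent
print(cross(("A", "a"), ("a", "a")))  # {'Aa': 0.5, 'aa': 0.5} -> 50% affected

# Autosomal recessive: two heterozygous carriers ('a' now the mutant allele)
print(cross(("A", "a"), ("A", "a")))  # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}

# Pseudodominance: affected homozygote (aa) x heterozygous carrier (Aa)
print(cross(("a", "a"), ("A", "a")))  # {'Aa': 0.5, 'aa': 0.5} -> 50% affected

# X-linked, affected father: daughters receive the paternal X, sons the Y
print(cross(("Xm",), ("X", "X")))  # {'XXm': 1.0} -> all daughters carriers
print(cross(("Y",), ("X", "X")))   # {'XY': 1.0} -> no father-to-son transmission
```

Enumerating all combinations (rather than sampling) gives the exact Punnett-square ratios described in the text.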
Exceptions to Simple Mendelian Inheritance Patterns

Mitochondrial Disorders

Mendelian inheritance refers to the transmission of genes encoded by DNA contained in the nuclear chromosomes. In addition, each mitochondrion contains several copies of a small circular chromosome (Chap. 85e). The mitochondrial DNA (mtDNA) is ~16.5 kb and encodes transfer and ribosomal RNAs and 13 core proteins that are components of the respiratory chain involved in oxidative phosphorylation and ATP generation. The mitochondrial genome does not recombine and is inherited through the maternal line because sperm does not contribute significant cytoplasmic components to the zygote. A noncoding region of the mitochondrial chromosome, referred to as the D-loop, is highly polymorphic. This property, together with the absence of mtDNA recombination, makes mtDNA a valuable tool for studies tracing human migration and evolution, and it is also used for specific forensic applications. Inherited mitochondrial disorders are transmitted in a matrilineal fashion; all children from an affected mother will inherit the disease, but it will not be transmitted from an affected father to his children (Fig. 82-13D). Alterations in mtDNA that involve enzymes required for oxidative phosphorylation lead to reduction of ATP supply, generation of free radicals, and induction of apoptosis. Several syndromic disorders arising from mutations in the mitochondrial genome are known in humans, and they affect both protein-coding and tRNA genes (Chap. 85e). The broad clinical spectrum often involves (cardio)myopathies and encephalopathies because of the high dependence of these tissues on oxidative phosphorylation. The age of onset and the clinical course are highly variable because of the unusual mechanisms of mtDNA transmission: mtDNA replicates independently of nuclear DNA, and during cell replication, the proportion of wild-type and mutant mitochondria can drift among different cells and tissues.
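This drift can be illustrated with a toy simulation (not a physiological model; the mitochondrial count and number of divisions are arbitrary): at each division, a daughter cell samples its mitochondria from the mother's current mutant fraction, so lineages starting from the same heteroplasmy level diverge over time.

```python
import random

def drift(mutant_fraction, n_mito=100, divisions=25, seed=None):
    """Follow one cell lineage: at each division the daughter cell samples
    its n_mito mitochondria from the mother's current mutant fraction."""
    rng = random.Random(seed)
    frac = mutant_fraction
    for _ in range(divisions):
        frac = sum(rng.random() < frac for _ in range(n_mito)) / n_mito
    return frac

# Two lineages starting from the same 50% heteroplasmy typically end up
# with different mutant loads, mirroring cell-to-cell and tissue-to-tissue
# variability; lineages at 0% or 100% cannot drift at all.
print(drift(0.5, seed=1), drift(0.5, seed=2))
```

Under this random-partitioning assumption, the mutant fraction behaves like genetic drift in a small population: it wanders between divisions and can eventually fix at 0 or 1 within a lineage.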
The resulting heterogeneity in the proportion of mitochondria with and without a mutation is referred to as heteroplasmy and underlies the phenotypic variability that is characteristic of mitochondrial diseases. Acquired somatic mutations in mitochondria are thought to be involved in several age-dependent degenerative disorders affecting predominantly muscle and the peripheral and central nervous system (e.g., Alzheimer’s and Parkinson’s diseases). Establishing that an mtDNA alteration is causal for a clinical phenotype is challenging because of the high degree of polymorphism in mtDNA and the phenotypic variability characteristic of these disorders. Certain pharmacologic treatments may have an impact on mitochondria and/or their function. For example, treatment with the antiretroviral compound azidothymidine (AZT) causes an acquired mitochondrial myopathy through depletion of muscular mtDNA.

Mosaicism

Mosaicism refers to the presence of two or more genetically distinct cell lines in the tissues of an individual. It results from a mutation that occurs during embryonic, fetal, or extrauterine development. The developmental stage at which the mutation arises will determine whether germ cells and/or somatic cells are involved. Chromosomal mosaicism results from nondisjunction at an early embryonic mitotic division, leading to the persistence of more than one cell line, as exemplified by some patients with Turner’s syndrome (Chap. 410). Somatic mosaicism is characterized by a patchy distribution of genetically altered somatic cells. The McCune-Albright syndrome, for example, is caused by activating mutations in the stimulatory G protein α (Gsα) that occur early in development (Chap. 424).
The clinical phenotype varies depending on the tissue distribution of the mutation; manifestations include ovarian cysts that secrete sex steroids and cause precocious puberty, polyostotic fibrous dysplasia, café-au-lait skin pigmentation, growth hormone–secreting pituitary adenomas, and hypersecreting autonomous thyroid nodules (Chap. 412).

X-Inactivation, Imprinting, and Uniparental Disomy

According to traditional Mendelian principles, the parental origin of a mutant gene is irrelevant for the expression of the phenotype. There are, however, important exceptions to this rule. X-inactivation prevents the expression of most genes on one of the two X chromosomes in every cell of a female. Gene inactivation through genomic imprinting occurs on selected chromosomal regions of autosomes and leads to inheritable preferential expression of one of the parental alleles. It is of pathophysiologic importance in disorders where the transmission of disease is dependent on the sex of the transmitting parent and, thus, plays an important role in the expression of certain genetic disorders. Two classic examples are the Prader-Willi syndrome and Angelman’s syndrome (Chap. 83e). Prader-Willi syndrome is characterized by diminished fetal activity, obesity, hypotonia, mental retardation, short stature, and hypogonadotropic hypogonadism. Deletions of the paternal copy of the Prader-Willi locus on the proximal long arm of chromosome 15 result in a contiguous gene syndrome involving missing paternal copies of the necdin and SNRPN genes, among others. In contrast, patients with Angelman’s syndrome, characterized by mental retardation, seizures, ataxia, and hypotonia, have deletions involving the maternal copy of this region on chromosome 15. These two syndromes may also result from uniparental disomy.
In this case, the syndromes are not caused by deletions on chromosome 15 but by the inheritance of either two maternal chromosomes (Prader-Willi syndrome) or two paternal chromosomes (Angelman’s syndrome). Lastly, the two distinct phenotypes can also be caused by an imprinting defect that impairs the resetting of the imprint during zygote development (defect in the father leads to Prader-Willi syndrome; defect in the mother leads to Angelman’s syndrome). Imprinting and the related phenomenon of allelic exclusion may be more common than currently documented, because it is difficult to examine levels of mRNA expression from the maternal and paternal alleles in specific tissues or in individual cells. Genomic imprinting, or uniparental disomy, is involved in the pathogenesis of several other disorders and malignancies (Chap. 83e). For example, hydatidiform moles contain a normal number of diploid chromosomes, but they are all of paternal origin. The opposite situation occurs in ovarian teratomata, with 46 chromosomes of maternal origin. Expression of the imprinted gene for insulin-like growth factor II (IGF-II) is involved in the pathogenesis of the cancer-predisposing Beckwith-Wiedemann syndrome (BWS) (Chap. 101e). These children show somatic overgrowth with organomegalies and hemihypertrophy, and they have an increased risk of embryonal malignancies such as Wilms’ tumor. Normally, only the paternally derived copy of the IGF-II gene is active and the maternal copy is inactive. Imprinting of the IGF-II gene is regulated by H19, which encodes an RNA transcript that is not translated into protein. Disruption or lack of H19 methylation leads to a relaxation of IGF-II imprinting and expression of both alleles. Alterations of the epigenome through gain and loss of DNA methylation, as well as altered histone modifications, play an important role in the pathogenesis of malignancies. 
Somatic Mutations

Cancer can be considered a genetic disease at the cellular level (Chap. 101e). Cancers are monoclonal in origin, indicating that they have arisen from a single precursor cell with one or several mutations in genes controlling growth (proliferation or apoptosis) and/or differentiation. These acquired somatic mutations are restricted to the tumor and its metastases and are not found in the surrounding normal tissue. The molecular alterations include dominant gain-of-function mutations in oncogenes, recessive loss-of-function mutations in tumor-suppressor genes and DNA repair genes, gene amplification, and chromosome rearrangements. Rarely, a single mutation in certain genes may be sufficient to transform a normal cell into a malignant cell. In most cancers, however, the development of a malignant phenotype requires several genetic alterations for the gradual progression from a normal cell to a cancerous cell, a phenomenon termed multistep carcinogenesis (Chaps. 101e and 102e). Genome-wide analyses of cancers using deep sequencing often reveal somatic rearrangements resulting in fusion genes and mutations in multiple genes. Comprehensive sequence analyses provide further insight into genetic heterogeneity within malignancies; these include intratumoral heterogeneity among the cells of the primary tumor, intermetastatic and intrametastatic heterogeneity, and interpatient differences. These analyses further support the notion of cancer as an ongoing process of clonal evolution, in which successive rounds of clonal selection within the primary tumor and metastatic lesions result in diverse genetic and epigenetic alterations that require targeted (personalized) therapies.
The heterogeneity of mutations within a tumor can also lead to resistance to targeted therapies, because cells with mutations that confer resistance to the therapy, even if they are a minor part of the tumor population, will be selected as the more sensitive cells are killed. Most human tumors express telomerase, an enzyme formed of a protein and an RNA component, which adds telomere repeats at the ends of chromosomes during replication. This mechanism impedes shortening of the telomeres, which is associated with senescence in normal cells, and thereby confers enhanced replicative capacity on cancer cells. Telomerase inhibitors provide a novel strategy for treating advanced human cancers. In many cancer syndromes, there is an inherited predisposition to tumor formation. In these instances, a germline mutation inactivating one allele of an autosomal tumor-suppressor gene is inherited in an autosomal dominant fashion. If the second allele is inactivated by a somatic mutation or by epigenetic silencing in a given cell, this will lead to neoplastic growth (Knudson two-hit model). Thus, the defective allele in the germline is transmitted in a dominant mode, although tumorigenesis results from a biallelic loss of the tumor-suppressor gene in an affected tissue. The classic example to illustrate this phenomenon is retinoblastoma, which can occur as a sporadic or hereditary tumor. In sporadic retinoblastoma, both copies of the retinoblastoma (RB) gene are inactivated through two somatic events. In hereditary retinoblastoma, one mutated or deleted RB allele is inherited in an autosomal dominant manner and the second allele is inactivated by a subsequent somatic mutation. This two-hit model applies to other inherited cancer syndromes such as MEN 1 (Chap. 408) and neurofibromatosis type 2 (Chap. 118).

Nucleotide Repeat Expansion Disorders

Several diseases are associated with an increase in the number of nucleotide repeats above a certain threshold (Table 82-4).
The repeats are sometimes located within the coding region of the genes, as in Huntington’s disease or the X-linked form of spinal and bulbar muscular atrophy (SBMA; Kennedy’s syndrome). In other instances, the repeats probably alter gene regulatory sequences. If an expansion is present, the DNA fragment is unstable and tends to expand further during cell division. The length of the nucleotide repeat often correlates with the severity of the disease. When repeat length increases from one generation to the next, disease manifestations may worsen or be observed at an earlier age; this phenomenon is referred to as anticipation. In Huntington’s disease, for example, there is a correlation between age of onset and length of the triplet codon expansion (Chap. 444e). Anticipation has also been documented in other diseases caused by dynamic mutations in trinucleotide repeats (Table 82-4). The repeat number may also vary in a tissue-specific manner. In myotonic dystrophy, the CTG repeat may be tenfold greater in muscle tissue than in lymphocytes (Chap. 462e).

Complex Genetic Disorders
The expression of many common diseases such as cardiovascular disease, hypertension, diabetes, asthma, psychiatric disorders, and certain cancers is determined by a combination of genetic background, environmental factors, and lifestyle. A trait is called polygenic if multiple genes contribute to the phenotype or multifactorial if multiple genes are assumed to interact with environmental factors. Genetic models for these complex traits need to account for genetic heterogeneity and interactions with other genes and the environment. Complex genetic traits may be influenced by modifier genes that are not linked to the main gene involved in the pathogenesis of the trait. This type of gene-gene interaction, or epistasis, plays an important role in polygenic traits that require the simultaneous presence of variations in multiple genes to result in a pathologic phenotype.
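The repeat-counting logic underlying these disorders is simple enough to sketch in code. The snippet below is an illustrative sketch, not a diagnostic tool: the function names are invented here, and the cutoff of 36 CAG repeats is only an approximate pathogenic threshold cited for Huntington’s disease; the disease-specific thresholds are those given in Table 82-4.

```python
import re

def longest_repeat_run(seq: str, unit: str) -> int:
    """Return the length (in repeat units) of the longest
    uninterrupted run of `unit` within `seq`."""
    runs = re.findall(f"(?:{unit})+", seq)
    return max((len(r) // len(unit) for r in runs), default=0)

def classify_cag(n_repeats: int, pathogenic_threshold: int = 36) -> str:
    """Illustrative two-way classification of a CAG-repeat allele.
    Real loci have additional intermediate/reduced-penetrance ranges."""
    return "expanded" if n_repeats >= pathogenic_threshold else "normal"

# A hypothetical allele carrying 42 uninterrupted CAG repeats:
seq = "GGC" + "CAG" * 42 + "TTA"
n = longest_repeat_run(seq, "CAG")
print(n, classify_cag(n))  # prints: 42 expanded
```

Because expanded alleles are unstable and may differ between tissues (as with the CTG repeat in myotonic dystrophy), a single repeat count from one tissue is only a snapshot, which is why clinical assays report ranges per tissue sampled.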
Type 2 diabetes mellitus provides a paradigm for considering a multifactorial disorder, because genetic, nutritional, and lifestyle factors are intimately interrelated in disease pathogenesis (Table 82-5) (Chap. 417). The identification of genetic variations and environmental factors that either predispose to or protect against disease is essential for predicting disease risk, designing preventive strategies, and developing novel therapeutic approaches. The study of rare monogenic diseases may provide insight into some of the genetic and molecular mechanisms important in the pathogenesis of complex diseases. For example, the identification of the genes causing monogenic forms of permanent neonatal diabetes mellitus or maturity-onset diabetes defined them as candidate genes in the pathogenesis of diabetes mellitus type 2 (Tables 82-2 and 82-5). Genome scans have identified numerous genes and loci that may be associated with susceptibility to development of diabetes mellitus in certain populations. Efforts to identify susceptibility genes require very large sample sizes, and positive results may depend on ethnicity, ascertainment criteria, and statistical analysis. Association studies analyzing the potential influence of (biologically functional) SNPs and SNP haplotypes on a particular phenotype are providing new insights into the genes involved in the pathogenesis of these common disorders. Large variants ([micro]deletions, duplications, and inversions) present in the human population also contribute to the pathogenesis of complex disorders, but their contributions remain poorly understood.

Linkage and Association Studies
There are two primary strategies for mapping genes that cause or increase susceptibility to human disease: (1) classic linkage can be performed based on a known genetic model or, when the model is unknown, by studying pairs of affected relatives; or (2) disease genes can be mapped using allelic association studies (Table 82-6).
GENETIC LINKAGE
Genetic linkage refers to the fact that genes are physically connected, or linked, to one another along the chromosomes. Two fundamental principles are essential for understanding the concept of linkage: (1) when two genes are close together on a chromosome, they are usually transmitted together, unless a recombination event separates them (Fig. 82-6); and (2) the odds of a crossover, or recombination event, between two linked genes are proportional to the distance that separates them. Thus, genes that are farther apart are more likely to undergo a recombination event than genes that are very close together. The detection of chromosomal loci that segregate with a disease by linkage can be used to identify the gene responsible for the disease (positional cloning) and to predict the odds of disease gene transmission in genetic counseling. Polymorphisms are essential for linkage studies because they provide a means to distinguish the maternal and paternal chromosomes in an individual. On average, 1 out of every 1000 bp varies from one person to the next. Although this degree of variation seems low (99.9% identical), it means that >3 million sequence differences exist between any two unrelated individuals, and the probability that the sequence at such loci will differ on the two homologous chromosomes is high (often >70–90%). These sequence variations include variable number of tandem repeats (VNTRs), short tandem repeats (STRs), and SNPs. Most STRs, also called polymorphic microsatellite markers, consist of di-, tri-, or tetranucleotide repeats that can be characterized readily using the polymerase chain reaction (PCR). Characterization of SNPs, using DNA chips or beads, permits comprehensive analyses of genetic variation, linkage, and association studies. Although these sequence variations often have no apparent functional consequences, they provide much of the basis for variation in genetic traits.
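The second principle above, that the probability of recombination grows with the distance separating two loci, is commonly modeled with Haldane’s map function, which assumes crossovers occur independently (no interference). A minimal sketch:

```python
import math

def haldane_recombination_fraction(map_distance_morgans: float) -> float:
    """Haldane's map function: the recombination fraction (theta) between
    two loci as a function of their genetic map distance in Morgans,
    assuming crossovers occur independently along the chromosome."""
    return 0.5 * (1.0 - math.exp(-2.0 * map_distance_morgans))

# Closely linked loci almost never recombine; distant loci approach the
# 50% ceiling of independent assortment.
for d in (0.01, 0.10, 0.50, 2.00):
    theta = haldane_recombination_fraction(d)
    print(f"{d:.2f} Morgans -> theta = {theta:.3f}")
```

The 0.5 ceiling is why linkage can only be detected between loci on the same chromosome and reasonably close together; loci far apart behave as if unlinked.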
[TABLE 82-6 Association studies: advantages and limitations. Abbreviation: GWAS, genome-wide association study. Advantages: suitable for identification of susceptibility genes in polygenic and multifactorial disorders; suitable for testing specific allelic variants of known candidate loci; facilitated by HapMap data, making GWAS more feasible. Limitations: requires a large sample size and a matched control population; false-positive results in the absence of a suitable control population; the candidate gene approach does not permit detection of novel genes and pathways.]

In order to identify a chromosomal locus that segregates with a disease, it is necessary to characterize polymorphic DNA markers from affected and unaffected individuals of one or several pedigrees. One can then assess whether certain marker alleles cosegregate with the disease. Markers that are closest to the disease gene are less likely to undergo recombination events and therefore receive a higher linkage score. Linkage is expressed as a lod (logarithm of odds) score: the logarithm (base 10) of the ratio of the likelihood that the disease and marker loci are linked to the likelihood that they are unlinked. Lod scores of +3 (1000:1) are generally accepted as supporting linkage, whereas a score of –2 is consistent with the absence of linkage.

ALLELIC ASSOCIATION, LINKAGE DISEQUILIBRIUM, AND HAPLOTYPES
Allelic association refers to a situation in which the frequency of an allele is significantly increased or decreased in individuals affected by a particular disease in comparison to controls. Linkage and association differ in several aspects. Genetic linkage is demonstrable in families or sibships. Association studies, on the other hand, compare a population of affected individuals with a control population. Association studies can be performed as case-control studies that include unrelated affected individuals and matched controls or as family-based studies that compare the frequencies of alleles transmitted or not transmitted to affected children.
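For the simplest case of fully informative meioses, the lod score described above can be computed directly: the likelihood of observing k recombinants among n meioses at recombination fraction theta is theta^k (1 − theta)^(n−k), which is compared against the unlinked case (theta = 0.5). A minimal sketch (the function name is ours):

```python
import math

def lod_score(recombinants: int, nonrecombinants: int, theta: float) -> float:
    """Two-point lod score for fully informative meioses: log10 of the
    likelihood ratio of linkage at recombination fraction theta versus
    free recombination (theta = 0.5)."""
    n = recombinants + nonrecombinants
    log_l_theta = (recombinants * math.log10(theta)
                   + nonrecombinants * math.log10(1.0 - theta))
    log_l_null = n * math.log10(0.5)
    return log_l_theta - log_l_null

# Example: 0 recombinants in 10 informative meioses, evaluated near
# complete linkage. The result is just over +3, i.e., odds of roughly
# 1000:1 in favor of linkage.
print(round(lod_score(0, 10, 0.001), 2))
```

In practice the lod score is maximized over theta and summed across pedigrees, which is why large families contribute disproportionately to linkage evidence.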
Allelic association studies are particularly useful for identifying susceptibility genes in complex diseases. When alleles at two loci occur more frequently in combination than would be predicted (based on known allele frequencies and recombination fractions), they are said to be in linkage disequilibrium. Evidence for linkage disequilibrium can be helpful in mapping disease genes because it suggests that the two loci are tightly linked. Detecting the genetic factors contributing to the pathogenesis of common complex disorders remains a great challenge. In many instances, these are low-penetrance alleles, i.e., variations that individually have only a subtle effect on disease development and that can be identified only by unbiased GWAS (Catalog of Published Genome-Wide Association Studies; Table 82-1) (Fig. 82-14). Most of these variants occur in noncoding or regulatory sequences and do not alter protein structure. The analysis of complex disorders is further complicated by ethnic differences in disease prevalence, differences in allele frequencies in known susceptibility genes among different populations, locus and allelic heterogeneity, gene-gene and gene-environment interactions, and the possibility of phenocopies. The data generated by the HapMap Project are greatly facilitating GWAS for the characterization of complex disorders. Adjacent SNPs are inherited together as blocks, and these blocks can be identified by genotyping selected marker SNPs, so-called Tag SNPs, thereby reducing cost and workload (Fig. 82-4). The availability of this information permits the characterization of a limited number of SNPs to identify the set of haplotypes present in an individual (e.g., in cases and controls). This, in turn, permits performing GWAS by searching for associations of certain haplotypes with a disease phenotype of interest, an essential step for unraveling the genetic factors contributing to complex disorders.
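The definition of linkage disequilibrium above translates directly into the classic D statistic and the derived r² measure used to select Tag SNPs. A minimal sketch with illustrative frequencies:

```python
def linkage_disequilibrium(p_ab: float, p_a: float, p_b: float):
    """D and r^2 for two biallelic loci, given the observed frequency
    p_ab of the haplotype carrying allele A and allele B together, and
    the marginal allele frequencies p_a and p_b.
    D = 0 means the alleles combine exactly as expected by chance."""
    d = p_ab - p_a * p_b
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, r2

# Illustrative numbers: alleles found together more often than their
# individual frequencies predict, i.e., positive disequilibrium.
d, r2 = linkage_disequilibrium(p_ab=0.30, p_a=0.40, p_b=0.50)
print(round(d, 3), round(r2, 3))
```

High r² between a Tag SNP and its neighbors is what allows a genotyped marker to stand in for the ungenotyped SNPs in its haplotype block.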
POPULATION GENETICS
In population genetics, the focus changes from alterations in an individual’s genome to the distribution pattern of different genotypes in the population. In a case where there are only two alleles, A and a, the frequency of the genotypes will be p² + 2pq + q² = 1, with p² corresponding to the frequency of AA, 2pq to the frequency of Aa, and q² to that of aa. When the frequency of an allele is known, the frequency of the genotype can be calculated. Alternatively, one can determine an allele frequency if the genotype frequency has been determined. Allele frequencies vary among ethnic groups and geographic regions. For example, heterozygous mutations in the CFTR gene are relatively common in populations of European origin but are rare in the African population. Allele frequencies may vary because certain allelic variants confer a selective advantage. For example, heterozygotes for the sickle cell mutation, which is particularly common in West Africa, are more resistant to malarial infection because the erythrocytes of heterozygotes provide a less favorable environment for Plasmodium parasites. Although homozygosity for the sickle cell mutation is associated with severe anemia and sickle crises (Chap. 127), heterozygotes have a higher probability of survival because of the reduced morbidity and mortality from malaria; this phenomenon has led to an increased frequency of the mutant allele. Recessive conditions are more prevalent in geographically isolated populations because of the more restricted gene pool.

APPROACH TO THE PATIENT:
For the practicing clinician, the family history remains an essential step in recognizing the possibility of a hereditary predisposition to disease.
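The genotype-frequency relationship given under population genetics above (p² + 2pq + q² = 1) lets one move between allele, carrier, and disease frequencies. A minimal sketch; the 1-in-2500 incidence in the example is an illustrative figure for an autosomal recessive disorder, not a value from the text:

```python
import math

def genotype_frequencies(q: float) -> dict:
    """Genotype frequencies under Hardy-Weinberg equilibrium for a
    biallelic locus with recessive-allele frequency q (p = 1 - q)."""
    p = 1.0 - q
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

def carrier_frequency_from_incidence(incidence: float) -> float:
    """Infer the heterozygote (carrier) frequency 2pq from the incidence
    of an autosomal recessive disease, using incidence = q^2."""
    q = math.sqrt(incidence)
    return 2.0 * (1.0 - q) * q

# Illustrative: a recessive disorder affecting 1 in 2500 newborns gives
# q = 0.02 and a carrier frequency of about 1 in 25.
print(round(carrier_frequency_from_incidence(1 / 2500), 4))  # 0.0392
```

The same arithmetic explains why heterozygous CFTR mutations can be common in a population in which the disease itself is comparatively rare.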
[FIGURE 82-14 Relationship between allele frequency and effect size in monogenic and polygenic disorders. Rare alleles of large effect cause Mendelian disease; low-frequency variants can have intermediate effects; typically, common variants have a low effect on complex disease, and only rarely do common variants have a high effect on complex disease; rare variants with small effects are difficult to identify. In classic Mendelian disorders, the allele frequency is typically low but has a high impact (single gene disorder). This contrasts with polygenic disorders that require the combination of multiple low impact alleles that are frequently quite common in the general population.]

When taking the history, it is useful to draw a detailed pedigree of the first-degree relatives (e.g., parents, siblings, and children), because they share 50% of genes with the patient. Standard symbols for pedigrees are depicted in Fig. 82-11. The family history should include information about ethnic background, age, health status, and deaths, including infants. Next, the physician should explore whether there is a family history of the same illness or of illnesses related to the current problem. An inquiry focused on commonly occurring disorders such as cancers, heart disease, and diabetes mellitus should follow. Because of the possibility of age-dependent expressivity and penetrance, the family history will need intermittent updating. If the findings suggest a genetic disorder, the clinician should assess whether some of the patient’s relatives may be at risk of carrying or transmitting the disease. In this circumstance, it is useful to confirm and extend the pedigree based on input from several family members. This information may form the basis for genetic counseling, carrier detection, early intervention, and disease prevention in relatives of the index patient (Chap. 84).
In instances where a diagnosis at the molecular level may be relevant, it is important to identify a laboratory that can perform the appropriate test. Genetic testing is available for a rapidly growing number of monogenic disorders through commercial laboratories. For uncommon disorders, the test may only be performed in a specialized research laboratory. Approved laboratories offering testing for inherited disorders can be identified in continuously updated online resources (e.g., GeneTests; Table 82-1). If genetic testing is considered, the patient and the family should be counseled about the potential implications of positive results, including psychological distress and the possibility of discrimination. The patient or caretakers should also be informed about the meaning of a negative result, technical limitations, and the possibility of false-negative and inconclusive results. For these reasons, genetic testing should only be performed after obtaining informed consent. Published ethical guidelines address the specific aspects that should be considered when testing children and adolescents. Genetic testing should usually be limited to situations in which the results may have an impact on medical management. Genomic medicine aims to enhance the quality of medical care through the use of genotypic analysis (DNA testing) to identify genetic predisposition to disease, to select more specific pharmacotherapy, and to design individualized medical care based on genotype. Genotype can be deduced by analysis of protein (e.g., hemoglobin, apoprotein E), mRNA, or DNA. However, technologic advances have made DNA analysis particularly useful because it can be readily applied. DNA testing is performed by mutational analysis or linkage studies in individuals at risk for a genetic disorder known to be present in a family. Mass screening programs require tests of high sensitivity and specificity to be cost-effective.
Prerequisites for the success of genetic screening programs include the following: that the disorder is potentially serious; that it can be influenced at a presymptomatic stage by changes in behavior, diet, and/or pharmaceutical manipulations; and that the screening does not result in any harm or discrimination. Screening in Jewish populations for the autosomal recessive neurodegenerative storage disease Tay-Sachs has reduced the number of affected individuals. In contrast, screening for sickle cell trait/disease in African Americans has led to unanticipated problems of discrimination by health insurers and employers. Mass screening programs harbor additional potential problems. For example, screening for the most common genetic alteration in cystic fibrosis, the ΔF508 mutation, which has a frequency of ~70% in northern Europe, is feasible and seems to be effective. One has to keep in mind, however, that there is pronounced allelic heterogeneity and that the disease can be caused by about 2000 other mutations. The search for these less common mutations would substantially increase costs but not the effectiveness of the screening program as a whole. Next-generation genome sequencing permits comprehensive and cost-effective mutational analyses after selective enrichment of candidate genes. For example, tests that sequence all the common genes causing hereditary deafness are already commercially available. Occupational screening programs aim to detect individuals with increased risk for certain professional activities (e.g., α1-antitrypsin deficiency and smoke or dust exposure). Integrating genomic data into electronic medical records is evolving and may provide significant decision support at the point of care, for example, by providing the clinician with genomic data and decision algorithms for the prescription of drugs that are subject to pharmacogenetic influences.
Mutational Analyses
DNA sequence analysis is now widely used as a diagnostic tool and has significantly enhanced diagnostic accuracy. It is used for determining carrier status and for prenatal testing in monogenic disorders (Chap. 84). Numerous techniques, discussed in previous editions of this chapter, are available for the detection of mutations. In a very broad sense, one can distinguish between techniques that allow for screening of known mutations (screening mode) and techniques that definitively characterize mutations. Analyses of large alterations in the genome are possible using classic methods such as cytogenetics, fluorescence in situ hybridization (FISH), and Southern blotting (Chap. 83e), as well as more sensitive novel techniques that search for multiple single exon deletions or duplications. Detection of more discrete sequence alterations relies heavily on the use of PCR, which allows rapid gene amplification and analysis. Moreover, PCR makes it possible to perform genetic testing and mutational analysis with small amounts of DNA extracted from leukocytes or even from single cells, buccal cells, or hair roots. DNA sequencing can be performed directly on PCR products or on fragments cloned into plasmid vectors amplified in bacterial host cells. Sequencing of all exons of the genome or selected chromosomes, or sequencing of numerous candidate genes in a single run, is now possible with next-generation sequencing platforms. The majority of traditional diagnostic methods were gel-based. Novel technologies for the analysis of mutations, genotyping, large-scale sequencing, and mRNA expression profiles are undergoing rapid evolution. DNA chip technologies allow hybridization of DNA or RNA to hundreds of thousands of probes simultaneously. Microarrays are being used clinically for mutational analysis of several human disease genes, as well as for the identification of viral or bacterial sequence variations.
With advances in high-throughput DNA sequencing technology, complete sequencing of the genome or an exome has entered the clinical realm. Although comprehensive sequencing of large genomic regions or multiple genes is already a reality, the subsequent bioinformatics analysis, assembly of sequence fragments, and comparative alignments remain a significant and commonly underestimated challenge. The discovery of incidental (or secondary) findings, which are unrelated to the indication for the sequencing analysis but indicative of other disorders of potential relevance for patient care, can pose a difficult ethical dilemma. Such analysis can lead to the detection of undiagnosed, medically actionable genetic conditions, but it can also reveal deleterious mutations that cannot be acted upon, and numerous sequence variants are of unknown significance. A general algorithm for the approach to mutational analysis is outlined in Fig. 82-15. The importance of a detailed clinical phenotype cannot be overemphasized. This is the step where one should also consider the possibility of genetic heterogeneity and phenocopies. If obvious candidate genes are suggested by the phenotype, they can be analyzed directly. After identification of a mutation, it is essential to demonstrate that it segregates with the phenotype. The functional characterization of novel mutations is labor intensive and may require analyses in vitro or in transgenic models in order to document the relevance of the genetic alteration. Prenatal diagnosis of numerous genetic diseases in instances with a high risk for certain disorders is now possible by direct DNA analysis. Amniocentesis involves the removal of a small amount of amniotic fluid, usually at 16 weeks of gestation. Cells can be collected and submitted for karyotype analyses, FISH, and mutational analysis of selected genes.
The main indications for amniocentesis include advanced maternal age (>35 years), an abnormal serum marker screening test (α-fetoprotein, β-human chorionic gonadotropin, pregnancy-associated plasma protein A, or unconjugated estriol), a family history of chromosomal abnormalities, or a Mendelian disorder amenable to genetic testing. Prenatal diagnosis can also be performed by chorionic villus sampling (CVS), in which a small amount of the chorion is removed by a transcervical or transabdominal biopsy. Chromosomes and DNA obtained from these cells can be submitted for cytogenetic and mutational analyses. CVS can be performed earlier in gestation (weeks 9–12) than amniocentesis, an aspect that may be of relevance when termination of pregnancy is a consideration. Later in pregnancy, beginning at about 18 weeks of gestation, percutaneous umbilical blood sampling (PUBS) permits collection of fetal blood for lymphocyte culture and analysis. Recently, the entire fetal genome has been determined prenatally from fetal DNA present in the mother’s plasma through deep sequencing and the counting of parental haplotypes, or by inferring it from DNA sequences obtained from blood samples from the mother, father, and umbilical cord. These approaches enable screening for clinically relevant and deleterious alleles inherited from the parents, as well as for de novo germline mutations, and they may have the potential to change the diagnosis of genetic disorders in the prenatal setting. In combination with in vitro fertilization (IVF) techniques, it is even possible to perform genetic diagnoses in a single cell removed from the four- to eight-cell embryo or to analyze the first polar body from an oocyte. Preimplantation or preconceptional diagnosis thereby avoids therapeutic abortions but is costly and labor intensive. It should be emphasized that excluding a specific disorder by any of these approaches is never equivalent to the assurance of having a normal child.
Mutations in certain cancer susceptibility genes such as BRCA1 and BRCA2 may identify individuals with an increased risk for the development of malignancies and result in risk-reducing interventions. The detection of mutations is an important diagnostic and prognostic tool in leukemias and lymphomas.

[FIGURE 82-15 Approach to genetic disease.]

The demonstration of the presence or absence of mutations and polymorphisms is also relevant for the rapidly evolving field of pharmacogenomics, including the identification of differences in drug treatment response or metabolism as a function of genetic background. For example, the thiopurine drugs 6-mercaptopurine and azathioprine are commonly used cytotoxic and immunosuppressive agents. They are metabolized by thiopurine methyltransferase (TPMT), an enzyme with variable activity associated with genetic polymorphisms in 10% of whites and complete deficiency in about 1 in 300 individuals. Patients with intermediate or deficient TPMT activity are at risk for excessive toxicity, including fatal myelosuppression. Characterization of these polymorphisms allows mercaptopurine doses to be modified based on TPMT genotype. Pharmacogenomics may increasingly permit individualized drug therapy, improve drug effectiveness, reduce adverse side effects, and provide cost-effective pharmaceutical care (Chap. 5). Determination of the association of genetic defects with disease, comprehensive data of an individual’s genome, and studies of genetic variation raise many ethical and legal issues. Genetic information is generally regarded as sensitive information that should not be readily accessible without explicit consent (genetic privacy). The disclosure of genetic information may risk possible discrimination by insurers or employers. The scientific components of the Human Genome Project have been paralleled by efforts to examine ethical, social, and legal implications.
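The TPMT example above is the prototype of genotype-guided dosing: classify the patient's metabolizer status, then scale the standard dose. The sketch below only illustrates that idea; the activity categories follow the text, but the dose fractions are hypothetical placeholders, not clinical dosing guidance.

```python
# Illustrative genotype-to-dose mapping in the style described for TPMT
# and thiopurines. The fractions below are hypothetical placeholders.
TPMT_DOSE_FRACTION = {
    "normal": 1.00,        # two functional TPMT alleles
    "intermediate": 0.50,  # one functional allele (hypothetical fraction)
    "deficient": 0.10,     # no functional alleles (hypothetical fraction)
}

def adjusted_dose(standard_dose_mg: float, tpmt_activity: str) -> float:
    """Scale a standard thiopurine dose by TPMT activity category."""
    try:
        return standard_dose_mg * TPMT_DOSE_FRACTION[tpmt_activity]
    except KeyError:
        raise ValueError(f"unknown TPMT activity category: {tpmt_activity!r}")

print(adjusted_dose(75.0, "intermediate"))  # 37.5
```

Clinical implementations embed such lookups in decision-support rules tied to the electronic medical record, which is the point-of-care integration the text describes.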
An important milestone emerging from these endeavors is the Genetic Information Nondiscrimination Act (GINA), signed into law in 2008, which aims to protect asymptomatic individuals against the misuse of genetic information for health insurance and employment. It does not, however, protect the symptomatic individual. Provisions of the U.S. Patient Protection and Affordable Care Act, effective in 2014, will fill this gap and prohibit exclusion from, or termination of, health insurance based on personal health status. Potential threats to the maintenance of genetic privacy include the emerging integration of genomic data into electronic medical records, compelled disclosures of health records, and direct-to-consumer genetic testing. It is widely accepted that identifying disease-causing genes can lead to improvements in diagnosis, treatment, and prevention. However, the information gleaned from genotypic results can have quite different impacts, depending on the availability of strategies to modify the course of disease (Chap. 84). For example, the identification of mutations that cause MEN 2 or hemochromatosis allows specific interventions for affected family members. On the other hand, the identification of an Alzheimer’s or Huntington’s disease gene does not currently alter therapy and outcomes. Most genetic disorders are likely to fall into an intermediate category where the opportunity for prevention or treatment is significant but limited (Chap. 84). However, the progress in this area is unpredictable, as underscored by the finding that angiotensin II receptor blockers may slow disease progression in Marfan’s syndrome. Genetic test results can generate anxiety in affected individuals and family members. Comprehensive sequence analyses are particularly challenging because most individuals can be expected to harbor several serious recessive gene mutations. The impact of genetic testing on health care costs is currently unclear.
It is likely to vary among disorders and depend on the availability of effective therapeutic modalities. A significant problem arises from the marketing of genetic testing directly to consumers by commercial companies. The validity of these tests has not been defined, and there are numerous concerns about the lack of appropriate regulatory oversight, the accuracy and confidentiality of genetic information, the availability of counseling, and the handling of these results. Many issues raised by the genome project are familiar, in principle, to medical practitioners. For example, an asymptomatic patient with increased low-density lipoprotein (LDL) cholesterol, high blood pressure, or a strong family history of early myocardial infarction is known to be at increased risk of coronary heart disease. In such cases, it is clear that the identification of risk factors and an appropriate intervention are beneficial. Likewise, patients with phenylketonuria, cystic fibrosis, or sickle cell anemia are often identified as having a genetic disease early in life. These precedents can be helpful for adapting policies that relate to genetic information. We can anticipate similar efforts, whether based on genotypes or other markers of genetic predisposition, to be applied to many disorders. One confounding aspect of the rapid expansion of information is that our ability to make clinical decisions often lags behind initial insights into genetic mechanisms of disease. For example, when genes that predispose to breast cancer such as BRCA are described, they generate tremendous public interest in the potential to predict disease, but many years of clinical research are still required to rigorously establish genotype and phenotype correlations. Genomics may contribute to improvements in global health by providing a better understanding of pathogens and diagnostics, and through contributions to drug development. 
There is, however, concern about the development of a “genomics divide” because of the costs associated with these developments and uncertainty as to whether these advances will be accessible to the populations of developing countries. The World Health Organization has summarized the current issues and inequities surrounding genomic medicine in a detailed report titled “Genomics and World Health.” Whether related to informed consent, participation in research, or the management of a genetic disorder that affects an individual or his or her family, there is a great need for more information about fundamental principles of genetics. The pervasive nature of the role of genetics in medicine makes it important for physicians and other health care professionals to become more informed about genetics and to provide advice and counseling in conjunction with trained genetic counselors (Chap. 84). The application of screening and prevention strategies will therefore require intensive patient and physician education, changes in health care financing, and legislation to protect patients’ rights.

Chapter 83e Chromosome Disorders
Nancy B. Spinner, Laura K. Conlin

CHROMOSOME DISORDERS
Alterations of the chromosomes (numerical and structural) occur in about 1% of the general population, in 8% of stillbirths, and in close to 50% of spontaneously aborted fetuses. The 3 × 10⁹ base pairs that encode the human genome are packaged into 23 pairs of chromosomes, which consist of discrete portions of DNA bound to several classes of regulatory proteins. Technical advances that led to the ability to analyze human chromosomes immediately translated into the revelation that human disorders can be caused by an abnormality of chromosome number. In 1959, the clinically recognizable disorder Down syndrome was demonstrated to result from having three copies of chromosome 21 (trisomy 21).
Very soon thereafter, in 1960, a small, structurally abnormal chromosome was recognized in the cells of some patients with chronic myelogenous leukemia (CML), and this abnormal chromosome is now known as the Philadelphia chromosome. Since these early discoveries, the techniques for analysis of human chromosomes, and DNA in general, have gone through several revolutions, and with each technical advancement, our understanding of the role of chromosomal abnormalities in human disease has expanded. While early studies in the 1950s and 1960s easily identified abnormalities of chromosome number (aneuploidy) and large structural alterations such as deletions (chromosomes with missing regions), duplications (extra copies of chromosome regions), or translocations (where portions of the chromosomes are rearranged), many other types of structural alterations could only be identified as techniques improved. The first important technical advance was the introduction of chromosome banding in the late 1960s, a technique that allowed for the staining of the chromosomes, so that each chromosome could be recognized by its pattern of alternating dark and light (or fluorescent and nonfluorescent) bands. Other technical innovations ranged from the introduction of fluorescence in situ hybridization in the 1980s to use of array-based and sequencing technologies in the early 2000s. Currently, we can appreciate that many types of chromosome abnormalities contribute to human disease including aneuploidy; structural alterations such as deletions and duplications, translocations, or inversions; uniparental disomy, where two copies of one chromosome (or a portion of a chromosome) are inherited from one parent; complex alterations such as isochromosomes, markers, and rings; and mosaicism for all of the aforementioned abnormalities. 
The first chromosome disorders identified had very striking and generally severe phenotypes, because the abnormalities involved large regions of the genome, but as methods have become more sensitive, it is now possible to recognize many more subtle phenotypes, often involving smaller genomic regions. Standard cytogenetic analysis refers to the examination of banded human chromosomes. Banded chromosome analysis allows for both the determination of the number and identity of chromosomes in the cell and recognition of abnormal banding patterns associated with a structural rearrangement. A stained band is defined as the part of a chromosome that is clearly distinguishable from its adjacent segments by appearing darker or lighter with one or more banding techniques. Cytogenetic analysis is most commonly carried out on cells in mitosis and therefore requires dividing cells. Actively growing cells are most often obtained from peripheral blood; however, only a small subset of the blood cells is actually used for cytogenetic analysis. Often, mitogens such as phytohemagglutinin (PHA) are used to specifically stimulate growth of T cells in a blood sample. Other sources of dividing cells include skin-derived fibroblasts, amniotic fluid or placental tissue (for prenatal diagnosis), or tumor tissue (for cancer diagnosis). After culturing, cells are treated with a mitotic spindle inhibitor, which prevents the separation of the chromatids during metaphase. Halting mitosis in metaphase is essential, because chromosomes are at their most condensed state during this stage of mitosis. The banding pattern of a metaphase chromosome is easily recognizable and is ideal for karyotyping. There are several different types of chromosome staining techniques, including R-banding, C-banding, and quinacrine staining, but the most commonly used is G-banding.
G-banding is accomplished by treatment of the chromosomes with a proteolytic enzyme, such as trypsin, which digests some of the proteins holding DNA in a three-dimensional structure, followed by staining with a dye (Giemsa) that binds DNA. The resulting patterns have both dark and light bands; in general, the light bands occur in regions on the chromosome in which genes are actively being transcribed, and dark bands are in regions of less active transcription. The banded human karyotype has now been standardized based on an internationally agreed upon system for designating not only individual chromosomes but also chromosome regions, providing a way in which structural rearrangements and variants can be described in terms of their composition. The normal human female karyotype is referred to as 46,XX (46 chromosomes, with 22 pairs of autosomes and two of the same type of sex chromosomes [two Xs], indicating this is a female); and the normal human male karyotype is referred to as 46,XY (46 chromosomes, with 22 pairs of autosomes and one of each type of sex chromosome [one X and one Y], indicating this is a male). The anatomy of a chromosome includes the central constriction, known as the centromere, which is critical for movement of the chromosomes during mitosis and meiosis; the two chromosome arms (p for the smaller or petite arm, and q for the longer arm); and the chromosome ends, which contain the telomeres. The telomeres are made up of a hexanucleotide repeat (TTAGGG)n, and unlike the centromere, they are not visible at the light microscope level. Telomeres are functionally important because they confer stability to the end of the chromosome. Broken chromosomes tend to fuse end to end, whereas a normal chromosome with an intact telomere structure is stable. To create the standard chromosome-banding map, each chromosome is divided into segments that are numbered, and then further subdivided. 
The precise band names are recorded in an international document so that each band has a distinct number. Figure 83e-1 shows an ideogram (chromosome map with bands) of the X chromosome and a G-banded X chromosome. This system provides a way for a chromosome abnormality to be written, with an indication of which band is deleted, duplicated, or rearranged.

FIGURE 83e-1 Ideogram of the X chromosome and a G-banded X chromosome. The labeling of the X ideogram shows the positioning of the p and q arms, the centromere, and the telomeres. The numbering of the bands is also demonstrated, indicating the broadest subbands (p1, p2, q1, q2) and the further subdivisions to the right. Numbering begins at the centromere and moves out along each arm toward the telomeres.

Molecular cytogenetics provides a link between chromosome and molecular analysis and overcomes some of the limitations of standard cytogenetics. Deletions smaller than several million base pairs are not routinely detectable by standard G-banding techniques, and chromosomal abnormalities with indistinct or novel banding patterns can be difficult or impossible to interpret. Cytogenetic analysis requires dividing cells, which are not always obtainable (e.g., from autopsy or tumor material that has already been fixed). Finally, growth selection or bias may occasionally cause the results of cytogenetic studies to be misleading because cells that proliferate in vitro may not be representative of the original population, as is often the case with tumor specimens. Fluorescence in situ hybridization (FISH) is a combined cytogenetic-molecular technique that solves many of the aforementioned problems. FISH permits determination of the number and location of specific DNA sequences in human cells. FISH can be performed on metaphase chromosomes, as with G-banding, but can also be performed on cells not actively progressing through mitosis.
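The standardized karyotype notation described above (e.g., 46,XX or 47,XY,+21) is machine-parsable in its simplest form. The following is a minimal sketch of such a parser; `parse_karyotype` is a hypothetical helper written for illustration, and it handles only simple comma-separated designations, not the full richness of the international (ISCN) nomenclature.

```python
def parse_karyotype(karyotype):
    """Parse a simple karyotype string such as '46,XX' or '47,XY,+21'.

    Returns (chromosome_count, sex_chromosomes, abnormality_terms).
    Illustrative only: real ISCN strings can include breakpoints,
    mosaicism, and other syntax not handled here.
    """
    fields = karyotype.split(",")
    count = int(fields[0])          # total chromosome number, e.g. 46
    sex = fields[1]                 # sex chromosome complement, e.g. 'XX'
    abnormalities = fields[2:]      # remaining terms, e.g. ['+21']
    return count, sex, abnormalities
```

For example, `parse_karyotype("47,XX,+21")` separates the chromosome count (47), the sex chromosomes (XX), and the trisomy 21 designation (+21) described in the text.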
FISH performed on nondividing cells is referred to as interphase or nuclear FISH (Fig. 83e-2). The FISH procedure relies on the complementarity between the two strands of the DNA double helix and uses a molecular probe, which can be a pool of sequences across an entire chromosome, a DNA sequence for a repetitive part of the genome (e.g., centromeres or telomeres), or a specific DNA sequence found only once in the genome (e.g., a disease-associated gene). The choice of probes for FISH studies is important and will vary with the information needed for the diagnosis of a particular disorder. The most common probes are locus-specific probes, which are used to determine whether a critical gene or region is absent (indicating a deletion), present in the normal number of copies, or present in an additional copy. FISH on metaphase chromosomes gives the additional information of the location of the extra copy, which is necessary to determine whether a structural rearrangement, such as a translocation, is present. FISH can also be performed with probes that bind to repeated sequences, such as DNA found in centromeres or telomeres, or with probes that bind to an entire chromosome (“painting” probes), to determine the chromosome composition of an abnormal chromosome. Interphase FISH studies can also help to identify structural alterations when probes are used that map to both sides of a translocation breakpoint. Each side of the breakpoint is labeled in a different color; when no translocation is present, the two probes appear to be overlapping, whereas when a translocation is present, the two probes appear separate from one another. These probe sets, called “break-apart” probes, are commonly used to detect recurrent translocations in cancer cells. Array-based methods were introduced into the clinical lab beginning in 2003 and quickly revolutionized the field of cytogenetics.
These techniques use arrays (collections of DNA segments from the entire genome) that can be interrogated with respect to copy number. With standard cytogenetics, the missing or extra pieces of DNA have to be big enough to see in the microscope on banded chromosomes (usually larger than 5 Mb). FISH requires a preselection of an informative molecular probe prior to analysis. In contrast, array-based techniques permit analysis of many regions of the genome in a single analysis, with greatly increased resolution over standard cytogenetics. Array-based techniques allow for scanning of the genome for small deletions or duplications quickly and accurately. The resolution of the test is a function of the number of probes or DNA sequences present on the array.

FIGURE 83e-2 G-banding, fluorescence in situ hybridization (FISH), and single nucleotide polymorphism (SNP) array demonstrate an abnormal chromosome 15. A. G-banding shows an abnormal chromosome 15, with unrecognizable material in place of the p arm in the chromosome on the right (top arrow). B. Metaphase FISH (only chromosome 15s are shown) using a probe from the 15q telomere region (red) and a control probe that maps outside of the duplicated region (green). C. Interphase FISH demonstrates three copies of the 15q tel probe (red) and two copies of the 15q control probe (green). D. Genome-wide SNP array demonstrates the increased copy number for a portion of 15q. Note that the G-banding alone indicates the abnormal chromosome 15, but the origin of the extra material can only be demonstrated by FISH or array. The FISH analysis requires additional information about possible genetic causes to select the correct probe. The array can exactly identify the origin of the extra material but by itself would not provide positional information.
Arrays may use probes of different sizes (ranging from 50 to 200,000 base pairs of DNA) and different probe densities depending on the requirements of the application. Low-resolution platforms can have hundreds of probes, targeted to known disease regions, whereas high-resolution platforms can have millions of probes spread across the entire genome. Depending on the size of the probes and the probe placement across the genome, array-based testing may be able to detect single exon deletions or duplications.

Comparative Genomic Hybridization (CGH) and Single Nucleotide Polymorphism (SNP) Analysis
CGH and SNP-based genotyping arrays can both be used for the analysis of genomic deletions and duplications. For both techniques, oligonucleotide probes are placed onto a slide or chip in a grid format. Each of these probes is specific for a particular genomic region. In array CGH, the amount of DNA from a patient is compared to that in a clinically normal control, or pool of controls, for each of the probes present on the array. DNA from a patient is fluorescently labeled with a dye of one color, and DNA from a control individual is labeled with another color. These DNA samples are then hybridized at the same time to the array. The resulting fluorescent signal will vary depending on whether both the control and patient DNA are present in equal amounts or if one has a different copy number than the other. SNP platforms use arrays targeting SNPs that are distributed across the genome. SNP arrays vary in density of markers and in the technology used for genotyping, depending on the manufacturer of the array. SNP arrays were initially designed to determine genotypes at a biallelic, polymorphic base (e.g., CC, CT, or TT) and have been increasingly used in genome-wide association studies to identify disease susceptibility genes. SNP arrays were subsequently adapted to identify genomic deletions and duplications (Fig. 83e-2).
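The patient-versus-control comparison at the heart of array CGH is usually summarized per probe as a log2 ratio of the two fluorescent signals: a ratio near 0 indicates equal copy number, a single-copy gain (3 copies vs. 2) gives roughly +0.58, and a single-copy loss (1 copy vs. 2) gives roughly −1. The sketch below illustrates that arithmetic; the function names and the ±0.3 cutoffs are illustrative assumptions, not the thresholds of any particular clinical platform.

```python
import math

def cgh_log2_ratios(patient_signal, control_signal):
    """Per-probe log2(patient/control) intensity ratio.

    ~0 for balanced regions, ~+0.58 for a one-copy gain (log2 of 3/2),
    ~-1 for a one-copy loss (log2 of 1/2).
    """
    return [math.log2(p / c) for p, c in zip(patient_signal, control_signal)]

def call_copy_number(log2_ratios, gain_cutoff=0.3, loss_cutoff=-0.3):
    """Classify each probe as gain/loss/normal using illustrative cutoffs."""
    calls = []
    for r in log2_ratios:
        if r > gain_cutoff:
            calls.append("gain")
        elif r < loss_cutoff:
            calls.append("loss")
        else:
            calls.append("normal")
    return calls
```

For instance, probes with patient:control signal pairs of 2:2, 3:2, and 1:2 would be called normal, gain, and loss, respectively. In practice, platforms smooth ratios across neighboring probes before calling a deletion or duplication, since a single outlier probe is rarely trusted.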
SNP arrays, in addition to identifying copy number changes, can also detect regions of the genome that have an excess of homozygous genotypes and absence of heterozygous genotypes (e.g., CC and TT genotypes only, with no CT genotypes). Absence of heterozygosity is sometimes associated with uniparental disomy (discussed later in this chapter) but is also observed when an individual’s parents are related to one another (identity by descent). Regions of homozygosity have been used to help identify genes in which homozygous mutations result in disease phenotypes in families with known consanguinity. Array-based techniques (which we will now refer to as cytogenomic analysis) have proven superior to chromosome analysis in the identification of clinically significant deletions or duplications. It is estimated that for a deletion or duplication to be visualized by standard cytogenetics it must be minimally between 5 and 10 million base pairs in size. In almost all cases, deletions and duplications of this size contain multiple genes, and these deletions and duplications are disease causing. However, utilization of array-based cytogenomic testing, which can routinely identify deletions and duplications smaller than 50,000 base pairs, reveals that clinically normal individuals all have some deletions and duplications. This presents a dilemma for the analyst to discern which smaller copy number variations (CNVs) are disease causing (pathogenic) and which are likely benign polymorphisms. Although initially burdensome, the cytogenomics community has been curating these CNVs for almost a decade, and databases have been created reporting CNVs routinely seen in clinically normal individuals and those routinely seen in individuals with clinical abnormalities. Nevertheless, each copy number variant that is identified in an individual undergoing genomic testing must be evaluated for gene content and overlap with CNVs in other patients and in controls. 
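Detecting the absence of heterozygosity described above amounts to scanning an ordered list of genotype calls for long runs of homozygous markers. The sketch below shows the idea; the function name, the two-character genotype encoding, and the minimum run length are illustrative assumptions, since clinical ROH calling also weighs marker spacing and the physical length of the region.

```python
def runs_of_homozygosity(genotypes, min_run=5):
    """Find runs of consecutive homozygous SNP calls (e.g. 'AA', 'BB').

    `genotypes` is a list of two-character calls in chromosomal order.
    Returns (start_index, end_index) pairs for runs of at least
    `min_run` consecutive homozygous markers.
    """
    runs, start = [], None
    for i, g in enumerate(genotypes):
        homozygous = g[0] == g[1]
        if homozygous and start is None:
            start = i                      # a run begins
        elif not homozygous and start is not None:
            if i - start >= min_run:       # run just ended; long enough?
                runs.append((start, i - 1))
            start = None
    # close a run that extends to the end of the list
    if start is not None and len(genotypes) - start >= min_run:
        runs.append((start, len(genotypes) - 1))
    return runs
```

A stretch of markers that are all AA or BB with no AB calls would be reported as a single run, the kind of region that prompts consideration of uniparental disomy or identity by descent.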
Array technologies are DNA based, unlike cytogenetic technologies, which are cell based. Although the resolution of gains and losses is greatly increased with array technology, this technique cannot identify structural changes. When DNA is extracted for array studies, chromosomal structure is lost because the DNA is fragmented for better hybridization to the slides. As an example, the array may be able to detect a duplication of a small region of a chromosome, but no information on the location of this extra material can be determined from this test. The location of this extra copy in the genome may be critical, as the chromosomal material may be involved in a translocation, insertion, marker, or other complex rearrangement. Depending on the chromosomal position of this extra material, the patient may have different clinical outcomes, and recurrence risks for the family can be significantly different. Often, combinations of array-based and cytogenetic-based techniques are required to fully characterize chromosomal abnormalities (see Table 83e-1 for a comparison of these technologies). Recent advances in genomic sequencing, known as next-generation sequencing (NGS), have vastly increased the speed and throughput of DNA sequence analysis. NGS is rapidly finding its way into the diagnostic lab for detection of clinically relevant intragenic mutations, and new bioinformatic tools for analysis of genomic deletions and duplications are being developed. It is anticipated that NGS will soon allow the complete analysis of a patient’s genome, with identification of intragenic mutations as well as chromosome abnormalities resulting in gain or loss of genetic material. Identification of completely balanced translocations is the most challenging application for NGS, but recent reports of success in this area suggest that it is only a matter of time before sequencing is used for all types of genomic analysis.
Cytogenetic analysis is most commonly used for (1) examination of the fetal chromosomes or genome during pregnancy (prenatal diagnosis) or in the event of a spontaneous miscarriage; (2) examination of chromosomes in the neonatal or pediatric population to look for an underlying diagnosis in the case of congenital or developmental anomalies, including short stature and abnormalities of sexual differentiation or progression; (3) chromosome analysis in adults who are facing fertility problems; or (4) examination of cancer cells to look for alterations that aid in establishing a diagnosis or contribute to the prognosis of a tumor (Table 83e-2). Prenatal diagnosis is carried out by analysis of samples obtained by four techniques: amniocentesis, chorionic villus sampling, fetal blood sampling, and analysis of cell-free DNA from maternal serum. Amniocentesis, which has been the most commonly used test to date, is usually performed between 15 and 17 weeks of gestational age and carries a small but significant risk for miscarriage. Amniocentesis can be performed as early as 12 weeks, but because there is a lower volume of fluid, the risks for fetal injury or miscarriage are greater. Chorionic villus sampling (CVS), or placental biopsy, is routinely carried out earlier than amniocentesis, between 10 and 12 weeks, but a reported increase in limb defects when the procedure is carried out earlier than 10 weeks has resulted in reduced use of this test in some centers. Fetal blood sampling (percutaneous umbilical blood sampling [PUBS]) is a riskier procedure that is carried out in the second or third trimester of pregnancy, usually to follow up on an unclear finding from an amniocentesis (such as mosaicism) or an ultrasound abnormality that was detected later in pregnancy.
One of the most far-reaching recent advances in prenatal diagnosis of chromosome and other genetic disorders is the use of cell-free fetal DNA that can be identified in maternal serum. The obvious advantage of using fetal DNA obtained from maternal serum is that the DNA can be obtained at minimal risk to the pregnancy, because it requires only a maternal blood sample rather than amniotic fluid, which is obtained by puncturing the uterine membranes and carries a risk of miscarriage or infection. Although cell-free fetal DNA screening, also called noninvasive prenatal screening, has started to be offered clinically, it requires confirmation in fetal tissue when an abnormal result is identified. Furthermore, ethical concerns have been raised, because it is feared that the ease of performing this test may encourage testing of individuals who are not truly prepared to deal with the choices that accompany diagnosis of a genetic disease, and that this ease may change the ethical implications of prenatal testing. Nevertheless, this is an active area of research, both in terms of the technology and its utilization and implications.

Common Indications
Common indications for prenatal diagnosis by cytogenetic or cytogenomic analysis are (1) advanced maternal age, (2) presence of an abnormality of the fetus on ultrasound examination, and (3) abnormalities in maternal serum screening that reveal an increased risk for chromosome abnormality. Maternal age is well known to be an important risk factor for having a fetus with trisomy. At a maternal age less than 25 years, 2% of all clinically recognized pregnancies are trisomic, but by a maternal age of 36 years, this figure increases to 10%, and by a maternal age of 42 years, to >33%.
Based on the risk of having a chromosomally abnormal fetus in comparison with the risk for an adverse event from amniocentesis or CVS, the recommendation is that women over the age of 35 consider prenatal testing if they want to know the chromosomal status of their fetus. The precise mechanism for the maternal age effect is not known, but it is believed to involve a breakdown in the process of chromosome segregation. A similar effect of paternal age on trisomy is not seen. This difference may reflect the fact that oocytes are generated early in ovary development in the female, whereas spermatogonia are generated continuously after puberty in the male. Abnormalities on prenatal ultrasound are the second most frequent indication for prenatal genetic screening. Ultrasound screening can reveal structural or functional anomalies in the fetus, which might be associated with chromosome or genomic disorders. Follow-up chromosome studies may therefore be recommended. Maternal serum screening results are the third most frequent indication for prenatal chromosome analysis. There have been several versions of maternal serum screening offered over the past few decades. Currently, the “quad” screen analyzes levels of α-fetoprotein (AFP), human chorionic gonadotropin (hCG), estriol, and inhibin-A. The values of these analytes are used to adjust the maternal age–predicted risk of a trisomy 21 or trisomy 18 fetus. Postnatal indications for cytogenetic or cytogenomic analysis in neonates or children are varied, and the list has been growing with the increasing ability to diagnose smaller genomic alterations via array-based techniques. Common indications include multiple congenital anomalies, suspicion of a known cytogenetic or cytogenomic syndrome, intellectual disability or developmental delay with or without accompanying dysmorphic features, autism, failure to thrive in infancy or short stature during childhood, and disorders of sexual development.
The ability to detect smaller genomic alterations with involvement of fewer genes, sometimes as few as a single gene, suggests that a wider range of phenotypes could be investigated by cytogenomic analysis. Reasons for chromosome testing in adults include recurrent miscarriages or infertility, where balanced chromosome rearrangements such as reciprocal translocations may occur. Additionally, some adults with anomalies who were not diagnosed when they were children are referred for cytogenetic analysis, often when other members of their family want to understand any potential genetic implications, as they plan their own families. Aneuploidy (extra or missing chromosomes) is the most common type of abnormality, occurring in 3/1000 newborns and at much higher frequency (about 35%) in spontaneously aborted fetuses. The only autosomal trisomies that are compatible with being live born in humans are trisomies 13, 18, and 21, although there are several chromosomes that can be trisomic in mosaic form. Trisomy 21 is associated with the relatively common disorder Down syndrome. Down syndrome has characteristic features including recognizable facial features, along with intellectual disability and abnormalities of multiple other organ systems including the heart. Both trisomy 13 and trisomy 18 are much more severe disorders than Down syndrome, with low frequency of patients surviving past 1 year of age. Trisomy 13 is characterized by low birth weight, postaxial polydactyly, microcephaly, ocular malformations such as anophthalmia or microphthalmia, cleft lip and palate, cardiac defects, and renal malformations. Trisomy 18 neonates have distinct facial characteristics at birth accompanied by an abnormal neurologic exam, underdeveloped genitalia, general lack of responsiveness, and structural birth defects such as congenital heart disease, esophageal atresia, and omphalocele. 
Mosaicism refers to the presence of two or more populations of cells with distinct chromosome constitutions: for example, an individual with a normal female karyotype in some cells (46,XX) and trisomy 21 in other cells (47,XX,+21). In general, individuals who are mosaic for a chromosomal abnormality have less severe phenotypes than individuals with that same finding in every cell. The severity and presentation of phenotypes are related to the mosaic levels and the tissue distribution of the abnormal cells. A number of trisomies have been reported in mosaic form, including mosaic trisomies for chromosomes 8, 9, 14, 17, and 22. A number of trisomies have also been reported in spontaneous abortions (SABs) that have not been seen in live-born individuals, including trisomy 16, which is the most common trisomy in SABs. Monosomy for human chromosomes is very rare, with the single exception of monosomy for the X chromosome, associated with Turner syndrome (45,X). Monosomy for the X chromosome occurs in 1% of all conceptions, yet 98% of these conceptions do not go to term and result in SABs. Trisomies for the sex chromosomes also occur, with 47,XXX (trisomy X or triple X syndrome), 47,XXY (Klinefelter syndrome), and 47,XYY all reported in individuals with relatively mild phenotypes (Chap. 410). Klinefelter syndrome is the most common clinically recognized sex chromosome abnormality, and clinical features include gynecomastia, azoospermia, small testes, and hypogonadism. The 47,XYY karyotype is most often found in boys with developmental delay and/or behavioral difficulties, but population-based studies have shown that intelligence for individuals with this karyotype is generally within the normal range, although slightly lower than that found in siblings.
Structural chromosome abnormalities include deletions, duplications, translocations, and inversions, as well as other types of abnormalities, each relatively rare but nonetheless contributing to clinical disease resulting from chromosome anomalies. These rare alterations include isochromosomes, ring chromosomes, dicentric chromosomes, and marker chromosomes (structurally abnormal chromosomes that cannot be identified based on cytogenetics alone). Both translocations and inversions can be completely balanced in some cases, such that there is no disruption of coding regions of the genome and a completely normal clinical phenotype; however, carriers are at risk for unbalanced forms of these rearrangements in their offspring. Reciprocal translocations are found in approximately 1/500–1/600 individuals in the general population and result from the exchange of chromosomal segments between at least two chromosomes. These usually occur between nonhomologous chromosomes and can be identified based on an altered banding pattern on G-banding. Balanced translocation carriers are at risk for abnormal chromosome segregation during meiosis and therefore have a higher risk for infertility, SAB, and live-born offspring with multiple congenital malformations. These phenotypes are observed when only one of the two rearranged chromosomes involved in a translocation is inherited from a parent, resulting in an unbalanced genotype (Fig. 83e-3). Sometimes the exchanged segments are so small that they cannot be appreciated by banding (cryptic translocation), and these are sometimes recognized when a phenotypically affected child with an unbalanced form is born. Parental chromosomes can then be studied by FISH to determine if the rearrangement is inherited from a parent with a balanced form of the translocation. The majority of reciprocal, apparently balanced translocations occur in phenotypically normal individuals.
The risk for a clinical abnormality when a new reciprocal translocation is identified (usually during prenatal diagnostic studies) is about 7%. Analysis of apparently balanced reciprocal translocations using arrays has demonstrated that translocations in clinically normal individuals are more likely to show no deletions or duplications at the breakpoint, whereas translocations in clinically affected individuals are more likely to have breakpoint-associated deletions or duplications. Most reciprocal translocations occur uniquely, at apparently random positions throughout the genome; however, a few recurrent translocations have been observed in multiple unrelated cases. These recurrent translocations include t(11;22), which results in Emanuel syndrome in the unbalanced form, and several translocations involving regions on 4p, 8p, and 12p. These recurrent translocations occur in regions of the genome that contain specific types of AT-rich repeats, or other repeat sequences, that are prone to rearrangement. A special category of translocations is the Robertsonian translocations, which involve the acrocentric chromosomes. An acrocentric chromosome has unique genetic material only on the long arm of the chromosome, whereas the short arm contains repetitive DNA. The acrocentric chromosomes are 13, 14, 15, 21, and 22. Robertsonian translocations occur when an entire long arm of an acrocentric chromosome is translocated onto the short arm of another acrocentric chromosome. Balanced carriers of a Robertsonian translocation have only 45 chromosomes, with one chromosome consisting of the two long arms of acrocentric chromosomes. Technically, this is an unbalanced translocation, as two short arms of the acrocentric chromosomes are missing; however, because the short arms are repetitive, there is no phenotypic consequence. Unbalanced Robertsonian carriers have 46 chromosomes but have three copies of the long arm of an acrocentric chromosome.
The most common Robertsonian translocation involves chromosomes 13 and 14. Unbalanced Robertsonian translocations involving chromosomes 13 and 21 result in trisomy 13 and Down syndrome, respectively. Approximately 4% of patients with Down syndrome have a translocation, and because recurrence risks are different for families of these individuals, all patients with clinically identified Down syndrome should have a karyotype to look for translocations.

FIGURE 83e-3 Segregation of a balanced translocation in a mother, with inheritance of an unbalanced form in her child. Note that the mother has two rearranged chromosomes, but her child received only one of these, resulting in extra copies of a region of the blue chromosome, with loss of some material from the red chromosome.

Inversions are another type of chromosome abnormality involving rearranged segments; two breaks occur within a chromosome, and the intervening chromosomal material is reinserted in an inverted orientation. As with reciprocal translocations, if a break occurs within a gene or a control region for a gene, a clinical phenotype may result, but often there are no consequences for the inversion carrier. However, there is risk for abnormalities in the offspring of carriers, as recombinant chromosomes may result after crossing over between a normal chromosome and an inverted chromosome during meiosis. Deletion refers to the loss of a chromosomal segment, which results in the presence of only a single copy of that region in an individual’s genome. A deletion can be at the end of a chromosome (terminal), or it can be within the chromosome (interstitial). Deletions that are visible at the microscopic level in standard cytogenetic analysis are generally greater than 5 Mb in size. Smaller deletions have been identified by FISH and by chromosomal microarray. The clinical consequences of a deletion depend on the number and function of genes in the deleted region.
Genes that cause a phenotype when a single copy is deleted are known as haploinsufficient genes (one copy is not sufficient), and it is estimated that less than 10% of genes are haploinsufficient. Genes associated with disease that are not haploinsufficient include genes for known recessive disorders, such as cystic fibrosis or Tay-Sachs disease. The first chromosome deletion syndromes were diagnosed clinically and were subsequently demonstrated to be caused by a chromosome deletion on cytogenetic analysis. Examples of these disorders include the Wolf-Hirschhorn syndrome, which is associated with deletions of a small region of the short arm of chromosome 4 (4p); the cri-du-chat syndrome, associated with deletion of a small region of the short arm of chromosome 5 (5p); Williams syndrome, which is associated with interstitial deletions of the long arm of chromosome 7 (7q11.23); and the DiGeorge/velocardiofacial syndromes, associated with interstitial deletions of the long arm of chromosome 22 (22q11.2). Initial cytogenetic studies were able to provide a rough localization of the deletions in different patients, but with the increased usage of arrays, precise mapping of the extent and gene content of these deletions has become much easier. In many cases, one or two genes that are critical for the phenotype associated with these deletions have been identified. In other cases, the phenotype stems from the deletion of multiple genes. The increased utilization of genomic testing by array, which can identify deletions that are much smaller than those detectable by standard cytogenetic analysis, has resulted in the discovery of several new cytogenomic disorders. These include the 1q21.1, 15q13.3, 16p11.2, and 17q21.31 microdeletion syndromes. Duplication of genomic regions is better tolerated than deletion, as evidenced by the viability of several autosomal trisomies (whole chromosome duplications) but no autosomal monosomies (whole chromosome deletions). 
There are several duplication syndromes in which the duplicated region of the genome is present as a supernumerary chromosome. Chromosome microarray analysis has made it straightforward to determine the origin of duplicated chromosome material (Fig. 83e-2). Recurrent syndromes associated with supernumerary chromosomes include the inverted duplication 15 (inv dup 15) syndrome, caused by the presence of a marker chromosome derived from chromosome 15 that carries two copies of proximal 15q, resulting in tetrasomy (four copies) of this region. The inv dup 15 syndrome has a distinct phenotype and is associated with hypotonia, developmental delay, intellectual disability, epilepsy, and autistic behavior. Another such syndrome is the cat eye syndrome, named for the “cat-eye-like” appearance of the pupil resulting from a coloboma of the iris. This syndrome results from a supernumerary chromosome derived from a portion of chromosome 22; the marker chromosomes can vary in size and are often mosaic. Consistent with expectations for a mosaic disorder, the phenotype of this syndrome is highly variable and includes renal malformations, urinary tract anomalies, congenital heart defects, anal atresia with fistula, imperforate anus, and mild to moderate intellectual disability. Another rare duplication syndrome is the Pallister-Killian syndrome (PKS), which illustrates the principle of tissue-specific mosaicism. Individuals with PKS have coarse facial features with pigmentary skin anomalies, localized alopecia, profound intellectual disability, and seizures. The disorder is caused by a supernumerary isochromosome for the short arm of chromosome 12 (isochromosome 12p). Isochromosomes consist of two copies of one chromosome arm (p or q), rather than one copy of each arm. This isochromosome is generally not seen in peripheral blood lymphocytes analyzed by G-banding, but it is detected in fibroblasts.
Array technology has been reported to detect the isochromosome in uncultured peripheral blood in some patients, and it has been hypothesized that a growth bias against cells carrying the isochromosome prevents their identification in cytogenetic studies. Numerical abnormalities, translocations, and deletions are the most common chromosome alterations observed in the diagnostic laboratory, but in addition to inversions and duplications, several other types of abnormal chromosomes have been reported, including ring chromosomes, in which the two ends of the chromosome fuse to form a circle, and insertions, in which a piece of one chromosome is inserted into another chromosome or elsewhere in the same chromosome. Uniparental disomy (UPD) is the inheritance of a pair of chromosomes (or part of a chromosome) from only one parent. This usually occurs as a result of nondisjunction during meiosis, producing a gamete that is missing a chromosome or carries an extra copy. The resulting fertilized egg then has either only one parental contribution for a given chromosome pair or a trisomy for that chromosome. If the monosomy or trisomy is not compatible with life, the embryo may undergo a “rescue” to normal copy number. If a monosomy is rescued, the single chromosome may be duplicated, resulting in a cell with two identical chromosomes (monosomy rescue) (Fig. 83e-4). In the case of trisomies, a subsequent nondisjunction can result in cells in which one of the extra chromosomes is lost (trisomy rescue) (Fig. 83e-4). For trisomy rescue, there is a one in three chance that the lost chromosome will be the sole chromosome from one parent, resulting in a cell with two chromosomes from the same parent.

FIGURE 83e-4 Mechanisms of formation of uniparental disomy. Panel A demonstrates nondisjunction in one parent (mother, represented in red), with trisomy in the zygote. A subsequent nondisjunction, with loss of the paternal chromosome (represented in blue), restores the diploid karyotype but leaves two copies of the maternal chromosome (maternal uniparental disomy [UPD]). Panel B demonstrates nondisjunction in one parent (mother, indicated by red oval), resulting in only one copy of this chromosome in the zygote. Subsequent nondisjunction duplicates the single chromosome, rescuing the monosomy but resulting in two copies of the paternal chromosome (represented in blue; paternal UPD).

UPD is sometimes associated with clinical abnormalities, and this can occur by two mechanisms. UPD can cause disease when there is an imprinted gene on the involved chromosome, resulting in altered gene expression. Imprinting is the chemical marking of the parental origin of a chromosome, and genes that are imprinted are expressed only from either the maternal or the paternal chromosome (Chap. 82). Imprinting therefore results in the differential expression of affected genes, based on parent of origin. Imprinting usually occurs through differential modification of the chromosome from one of the parents; methylation is one such epigenetic mechanism (others include histone acetylation, ubiquitylation, and phosphorylation). Imprinted chromosomes that are associated with phenotypes include paternal UPD6 (associated with neonatal diabetes), maternal UPD7 and UPD11 (associated with Russell-Silver syndrome), paternal UPD11 (associated with Beckwith-Wiedemann syndrome), paternal UPD14, maternal UPD15 (Angelman syndrome), and paternal UPD15 (Prader-Willi syndrome). UPD can also result in disease if the two copies from the same parent are the same chromosome (uniparental isodisomy) and the chromosome carries a pathogenic mutation associated with a recessive disorder. Two copies of the deleterious allele would result in the associated disease, even though only one parent is a disease carrier. Chromosome changes can occur during meiosis or mitosis and can occur at any point across the lifespan.
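The one-in-three chance cited for trisomy rescue can be checked with a short simulation. This is an illustrative sketch only: the chromosome labels are invented, and the model assumes the rescuing event removes one of the three chromosomes uniformly at random.

```python
import random

def trisomy_rescue_is_upd(rng: random.Random) -> bool:
    # Model the trisomic cell after maternal nondisjunction: two maternal
    # chromosomes ("M1", "M2") and one paternal chromosome ("P").
    chromosomes = ["M1", "M2", "P"]
    lost = rng.choice(chromosomes)  # rescue: one copy lost at random
    remaining = [c for c in chromosomes if c != lost]
    # UPD results when the paternal chromosome is the one lost,
    # leaving two maternal copies.
    return all(c.startswith("M") for c in remaining)

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
upd = sum(trisomy_rescue_is_upd(rng) for _ in range(trials))
print(upd / trials)  # close to 1/3
```

Losing the single paternal chromosome is one of three equally likely outcomes, which is why roughly a third of trisomy rescues produce uniparental disomy.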
Mosaicism for a developmental disorder is one consequence of mitotic chromosome abnormalities; another is cancer, when the chromosome change confers a growth or proliferation advantage on the cell. The types of chromosome abnormalities seen in cancer are similar to those seen in developmental disorders (e.g., aneuploidy, deletion, duplication, translocation, isochromosomes, rings, inversion). Tumor cells often have multiple chromosome changes, some of which occur early in the development of a tumor and may contribute to its selective advantage, whereas others are secondary effects of the deregulation that characterizes many tumors. Chromosome changes in cancer have been studied extensively and have been shown to provide important diagnostic, classification, and prognostic information. The identification of cancer type–specific translocation breakpoints has led to the identification of a number of cancer-associated genes. For example, the small abnormal chromosome found to be associated with chronic myelogenous leukemia (CML) in 1960 was shown to be the result of a translocation between chromosomes 9 and 22 once techniques for analysis of banded chromosomes were introduced; subsequently, the translocation breakpoint was cloned to reveal the c-abl oncogene on chromosome 9. This translocation produces a fusion protein, which has been targeted for treatment of CML. For detailed discussion of cancer genetics, see Chap. 101e.

Chapter 84 The Practice of Genetics in Clinical Medicine
Susan M. Domchek, J. Larry Jameson, Susan Miesfeldt

APPLICATIONS OF MOLECULAR GENETICS IN CLINICAL MEDICINE
Genetic testing for inherited abnormalities associated with disease risk is increasingly used in the practice of clinical medicine. Germline alterations include chromosomal abnormalities (Chap. 83e), specific gene mutations with autosomal dominant or recessive patterns of transmission (Chap.
82), and single nucleotide polymorphisms that confer small relative risks of disease. Germline alterations are responsible for disorders beyond classic Mendelian conditions, including genetic susceptibility to common adult-onset diseases such as asthma, hypertension, diabetes mellitus, macular degeneration, and many forms of cancer. For many of these diseases, there is a complex interplay of genes (often multiple) and environmental factors that affects lifetime risk, age of onset, disease severity, and treatment options. The expansion of knowledge related to genetics is changing our understanding of pathophysiology and influencing our classification of diseases. Awareness of genetic etiology can have an impact on clinical management, including prevention, screening, and treatment of a range of diseases. Primary care physicians are relied upon to help patients navigate testing and treatment options. Consequently, they must understand the genetic basis for a large number of genetically influenced diseases, incorporate personal and family history to determine the risk for a specific mutation, and be positioned to provide counseling. Even if patients are seen by genetic specialists who assess genetic risk and coordinate testing, primary care providers should provide information to their patients regarding the indications, limitations, risks, and benefits of genetic counseling and testing. They must also be prepared to offer risk-based management following genetic risk assessment. Given the pace of advances in genetics, this is an increasingly difficult task. The field of clinical genetics is rapidly moving from single-gene testing to multigene panel testing, with techniques such as whole-exome and whole-genome sequencing on the horizon, increasing the complexity of test selection and interpretation, as well as patient education and medical decision making. Adult-onset hereditary diseases follow multiple patterns of inheritance. Some are autosomal dominant conditions.
These include many common cancer susceptibility syndromes, such as hereditary breast and ovarian cancer (due to germline BRCA1 and BRCA2 mutations) and Lynch syndrome (caused by germline mutations in the mismatch repair genes MLH1, MSH2, MSH6, and PMS2). In both of these examples, inherited mutations are associated with a high penetrance (lifetime risk) of cancer, although risk is not 100%. In other conditions, although transmission is autosomal dominant, penetrance is lower, making the disorders more difficult to recognize. For example, germline mutations in CHEK2 increase the risk of breast cancer, but with a moderate lifetime risk in the range of 20–40%, as opposed to 50–70% for mutations in BRCA1 or BRCA2. Other adult-onset hereditary diseases are transmitted in an autosomal recessive fashion, in which two mutant alleles are necessary to cause disease. Examples include hemochromatosis and MYH-associated colon cancer. Autosomal recessive disorders more commonly have pediatric onset, as with the lysosomal storage diseases and cystic fibrosis. The genetic risk for many adult-onset disorders is multifactorial. Risk can be conferred by genetic factors at a number of loci, which individually have very small effects (usually with relative risks of <1.5). These risk loci (generally single nucleotide polymorphisms [SNPs]) combine with other genes and environmental factors in ways that are not well understood. SNP panels are available to assess risk of disease, but the optimal way of using this information in the clinical setting remains uncertain. Many diseases have multiple patterns of inheritance, adding to the complexity of evaluating patients and families for these conditions. For example, colon cancer can be associated with a single germline mutation in a mismatch repair gene (Lynch syndrome, autosomal dominant), biallelic mutations in MYH (autosomal recessive), or multiple SNPs (polygenic).
Many more individuals will have SNP risk alleles than germline mutations in high-penetrance genes, but the cumulative lifetime risk of colon cancer related to the former is modest, whereas the risk related to the latter is significant. Personal and family histories provide important insights into the possible mode of inheritance. When two or more first-degree relatives are affected with asthma, cardiovascular disease, type 2 diabetes, breast cancer, colon cancer, or melanoma, the relative risk for disease among close relatives ranges from two- to fivefold, underscoring the importance of family history for these prevalent disorders. In most situations, the key to assessing the inherited risk for common adult-onset diseases is the collection and interpretation of a detailed personal and family medical history in conjunction with a directed physical examination. Family history should be recorded in the form of a pedigree. Pedigrees should convey health-related data on first- and second-degree relatives. When such pedigrees suggest inherited disease, they should be expanded to include additional family members. The determination of risk for an asymptomatic individual will vary depending on the size of the pedigree, the number of unaffected relatives, the types of diagnoses, and the ages of disease onset. For example, a woman with two first-degree relatives with breast cancer is at greater risk for a specific Mendelian disorder if she has a total of 3 female first-degree relatives (with only 1 unaffected) than if she has a total of 10 female first-degree relatives (with 7 unaffected). Factors such as adoption and limited family structure (e.g., few women in a family) should be taken into consideration in the interpretation of a pedigree.
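The pedigree-size point can be made quantitative with a toy binomial calculation. The assumptions here are not from the text: an illustrative population lifetime breast cancer risk of about 12% and independence between relatives, simplifications that ignore age structure and shared environment.

```python
from math import comb

def prob_at_least_k(n: int, k: int, p: float) -> float:
    """Binomial tail: chance that >= k of n relatives are affected by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative only: assume a ~12% population lifetime risk of breast cancer
# and independence between relatives (both simplifications).
p = 0.12
print(prob_at_least_k(3, 2, p))   # 2 affected among 3 first-degree relatives
print(prob_at_least_k(10, 2, p))  # 2 affected among 10 first-degree relatives
```

Under these assumptions, two affected among three first-degree relatives occurs by chance only a few percent of the time, whereas two affected among ten is common, which is why the same cluster in a smaller sibship is more suggestive of a Mendelian disorder.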
Additional considerations include young age of disease onset (e.g., a 30-year-old nonsmoking woman with a myocardial infarction), unusual diseases (e.g., male breast cancer or medullary thyroid cancer), and the finding of multiple potentially related diseases in an individual (e.g., a woman with a history of both colon and endometrial cancer). Some adult-onset diseases are more prevalent in certain ethnic groups. For instance, 2.5% of individuals of Ashkenazi Jewish ancestry carry one of three founder mutations in BRCA1 and BRCA2. Factor V Leiden mutations are much more common in Caucasians than in Africans or Asians. Additional variables that should be documented are nonhereditary risk factors among those with disease (such as cigarette smoking and myocardial infarction; asbestos exposure and lung disease; and mantle radiation and breast cancer). Significant associated environmental exposures or lifestyle factors decrease the likelihood of a specific genetic disorder. In contrast, the absence of nonhereditary risk factors typically associated with a disease raises concern about a genetic association. A personal or family history of deep vein thrombosis in the absence of known environmental or medical risk factors, for example, suggests a hereditary thrombotic disorder. The physical examination may also provide important clues about the risk for a specific inherited disorder. A patient presenting with xanthomas at a young age should prompt consideration of familial hypercholesterolemia. The presence of trichilemmomas in a woman with breast cancer raises concern for Cowden syndrome, associated with PTEN mutations. Recall of family history is often inaccurate, especially when the history is remote and families lose contact or separate geographically. It can be helpful to ask patients to fill out family history forms before or after their visits, because this provides them with an opportunity to contact relatives.
Ideally, this information should be embedded in electronic health records and updated intermittently. Attempts should be made to confirm the illnesses reported in the family history before making important and, in certain circumstances, irreversible management decisions. This process is often labor intensive and ideally involves interviewing additional family members or reviewing medical records, autopsy reports, and death certificates.

FIGURE 84-1 A 36-year-old woman (arrow) seeks consultation because of her family history of cancer. The patient expresses concern that the multiple cancers in her relatives imply an inherited predisposition to develop cancer. The family history is recorded, and records of the patient’s relatives confirm the reported diagnoses.

Although many inherited disorders will be suggested by the clustering of relatives with the same or related conditions, it is important to note that disease penetrance is incomplete for most genetic disorders. As a result, the pedigree obtained in such families may not exhibit a clear Mendelian inheritance pattern, because not all family members carrying the disease-associated alleles will manifest clinical evidence of the condition. Furthermore, genes associated with some of these disorders often exhibit variable disease expression. For example, the breast cancer–associated gene BRCA2 can predispose to several different malignancies in the same family, including cancers of the breast, ovary, pancreas, skin, and prostate. For common diseases such as breast cancer, some family members without the susceptibility allele (or genotype) may develop breast cancer (the phenotype) sporadically. Such phenocopies represent another confounding variable in pedigree analysis.
Some of the aforementioned features of the family history are illustrated in Fig. 84-1. In this example, the proband, a 36-year-old woman (IV-1), has a strong history of breast and ovarian cancer on the paternal side of her family. The early age of onset and the co-occurrence of breast and ovarian cancer in this family suggest the possibility of an inherited mutation in BRCA1 or BRCA2. It is unclear, however, without genetic testing, whether her father harbors such a mutation and transmitted it to her. After appropriate genetic counseling of the proband and her family, the most informative and cost-effective approach to DNA analysis in this family is to test the cancer-affected 42-year-old living cousin for the presence of a BRCA1 or BRCA2 mutation. If a mutation is found, then it is possible to test for this particular alteration in other family members, if they so desire. In the example shown, if the proband’s father has a BRCA1 mutation, there is a 50:50 probability that the mutation was transmitted to her, and genetic testing can be used to establish the absence or presence of this alteration. In this same example, if a mutation is not detected in the cancer-affected cousin, testing would not be indicated for cancer-unaffected relatives. A critical first step before initiating genetic testing is to ensure that the correct clinical diagnosis has been made, whether it is based on family history, characteristic physical findings, pathology, or biochemical testing. Such careful clinical assessment can define the phenotype. In the traditional model of genetic testing, testing is directed initially toward the most probable genes (determined by the phenotype), which prevents unnecessary testing. Many disorders exhibit locus heterogeneity, which refers to the fact that mutations in different genes can cause phenotypically similar disorders. For example, osteogenesis imperfecta (Chap. 427), long QT syndrome (Chap. 277), muscular dystrophy (Chap.
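The 50:50 transmission logic in this example can be written as a tiny helper. This is hypothetical code, not a clinical tool: it ignores penetrance, de novo mutation, and any information from relatives’ affected or unaffected status, and simply halves the carrier probability at each meiosis from a known carrier.

```python
def carrier_probability(meioses_from_known_carrier: int) -> float:
    """Prior probability of carrying an autosomal dominant mutation,
    given descent from a known carrier through this many meioses.

    Illustrative only: ignores penetrance, de novo mutation, and any
    information from affected/unaffected status along the path."""
    return 0.5 ** meioses_from_known_carrier

# In a pedigree like Fig. 84-1: if the cousin's mutation is assumed to be
# inherited through a shared grandparent who is therefore an obligate
# carrier, the proband's father is one meiosis away and the proband two.
print(carrier_probability(1))  # father: 0.5
print(carrier_probability(2))  # proband: 0.25
```

Genetic testing then replaces these Mendelian priors with a definitive result for each tested individual.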
462e), and hereditary predisposition to breast (Chap. 108) or colon (Chap. 110) cancer can each be caused by mutations in a number of distinct genes. The patterns of disease transmission, disease risk, clinical course, and treatment may differ significantly depending on the specific gene affected. Historically, the choice of which gene to test has been determined by unique clinical and family history features and the relative prevalence of candidate genetic disorders. However, rapid changes in genetic testing techniques, as discussed below, may impact this paradigm. It is now technically and financially feasible to sequence many genes (or even the whole exome) at one time. The incorporation of multiplex testing for germline mutations is rapidly evolving. Genetic testing is regulated and performed in much the same way as other specialized laboratory tests. In the United States, genetic testing laboratories are Clinical Laboratory Improvement Amendments (CLIA) approved to ensure that they meet quality and proficiency standards. A useful information source for various genetic tests is www.genetests.org. It should be noted that many tests need to be ordered through specialized laboratories. Genetic testing is performed largely by DNA sequence analysis for mutations, although genotype can also be deduced through the study of RNA or protein (e.g., apolipoprotein E, hemoglobin S, and immunohistochemistry). For example, universal screening for Lynch syndrome via immunohistochemical analysis of colorectal cancers for absence of expression of mismatch repair proteins is under way at multiple hospitals throughout the United States. The determination of DNA sequence alterations relies heavily on the use of polymerase chain reaction (PCR), which allows rapid amplification and analysis of the gene of interest. 
In addition, PCR enables genetic testing on minimal amounts of DNA extracted from a wide range of tissue sources, including leukocytes, mucosal epithelial cells (obtained via saliva or buccal swabs), and archival tissues. Amplified DNA can be analyzed directly by DNA sequencing, or it can be hybridized to DNA chips or blots to detect the presence of normal and altered DNA sequences. Direct DNA sequencing is frequently used for determination of hereditary disease susceptibility and for prenatal diagnosis. Analyses of large alterations of the genome are possible using cytogenetics, fluorescence in situ hybridization (FISH), Southern blotting, or multiplex ligation-dependent probe amplification (MLPA) (Chap. 83e). Massively parallel sequencing (also called next-generation sequencing) is significantly altering the approach to genetic testing for adult-onset hereditary susceptibility disorders. This technology encompasses several high-throughput approaches to DNA sequencing, all of which can reliably sequence many genes at one time. Technically, this involves the use of amplified DNA templates in a flow cell, a very different process from traditional Sanger sequencing, which is time-consuming and expensive. Multiplex panels for inherited susceptibility are commercially available and include testing of a number of genes that have been associated with the condition of interest. For example, panels are available for Brugada syndrome, hypertrophic cardiomyopathy, and Charcot-Marie-Tooth neuropathy. For many syndromes, this type of panel testing may make sense. However, in other situations, the utility of panel testing is less certain. Currently available breast cancer susceptibility panels contain six genes or more. Many of the genes included in the larger panels are associated with only a modest risk of breast cancer, and the clinical application is uncertain.
An additional problem of sequencing many genes (rather than the genes for which there is most suspicion) is the identification of one or more variants of uncertain significance (VUS), discussed below. Whole-exome sequencing (WES) is also now commercially available, although it is largely used in individuals with syndromes unexplained by traditional genetic testing. As cost declines, WES may be more widely used. Whole-genome sequencing is also commercially available. Although it may be quite feasible to sequence the entire genome, there are many issues in doing so, including the daunting task of analyzing the vast amount of data generated. Other issues include: (1) the optimal way in which to obtain informed consent; (2) interpretation of frequent sequence variants of uncertain significance; (3) interpretation of alterations in genes with unclear relevance to specific human pathology; and (4) management of unexpected but clinically significant genetic findings. Testing strategies are evolving as a result of these new genetic testing platforms. As the cost of multigene panels and WES continues to fall, and as interpretation of such test results improves, there may be a shift from sequential testing of single genes (or a few genes) to multigene testing. For example, at present, a 30-year-old woman with breast cancer but no family history of cancer and no syndromic features would undergo BRCA1/2 testing. If negative, she would subsequently be offered TP53 testing. Notably, a reasonable number of individuals offered TP53 testing for Li-Fraumeni syndrome decline because mutations are associated with extremely high cancer risks (including childhood cancers) in multiple organs and there are no proven interventions to mitigate risk. Without features consistent with Cowden syndrome, the woman would not routinely be offered PTEN testing or testing for CHEK2, ATM, BRIP1, BARD1, NBN, and PALB2.
However, it is now possible to analyze all of the aforementioned genes synchronously, for a nominally higher cost than BRCA1/2 testing alone. Concerns about such panels include appropriate consent strategies related to unexpected findings, VUS, and the unclear clinical utility of testing moderate-penetrance genes. Thus, changes from the traditional model of single-gene genetic testing should be made with caution (Fig. 84-2). Limitations to the accuracy and interpretation of genetic testing exist. In addition to technical errors, genetic tests are sometimes designed to detect only the most common mutations. In addition, genetic testing has evolved over time. For example, commercially available comprehensive testing for large genomic rearrangements in BRCA1 and BRCA2 was not possible until 2006. Therefore, a negative result must be qualified by the possibility that the individual may have a mutation that was not included in the test. In addition, a negative result does not mean that there is not a mutation in some other gene that causes a similar inherited disorder. A negative result, unless there is a known mutation in the family, is typically classified as uninformative. VUS are another limitation of genetic testing. A VUS (also termed an unclassified variant) is a sequence variation in a gene for which the effect of the alteration on the function of the protein is not known. Many of these variants are single nucleotide substitutions (also called missense mutations) that result in a single amino acid change. Although many VUSs will ultimately be reclassified as benign polymorphisms, some will prove to be functionally important. As more genes are sequenced (for example, in a multiplex panel or through WES), the percentage of individuals found to have a VUS increases significantly. The finding of a VUS is difficult for patients and providers alike and complicates decisions regarding medical management.
Clinical utility is an important consideration because genetic testing for susceptibility to chronic diseases is increasingly integrated into the practice of medicine. In some situations, there is clear clinical utility to genetic testing, with significant evidence-based changes in medical management decisions based on results. However, in many cases, the discovery of disease-associated genes has outpaced studies that assess how such information should be used in the clinical management of the patient and family. This is particularly true for moderate- and low-penetrance gene mutations. Therefore, predictive genetic testing should be approached with caution and offered only to patients who have been adequately counseled and have provided informed consent. Predictive genetic testing falls into two distinct categories. Presymptomatic testing applies to diseases where a specific genetic alteration is associated with a near 100% likelihood of developing disease. In contrast, predisposition testing predicts a risk for disease that is less than 100%. For example, presymptomatic testing is available for those at risk for Huntington’s disease, whereas predisposition testing is considered for those at risk for hereditary colon cancer. It is important to note that for the majority of adult-onset disorders, testing is only predictive. Test results cannot reveal with confidence whether, when, or how the disease will manifest itself. For example, not everyone with the apolipoprotein E4 allele will develop Alzheimer’s disease, and individuals without this genetic marker can still develop the disorder. The optimal testing strategy for a family is to initiate testing in an affected family member first. Identification of a mutation can direct the testing of other at-risk family members (whether symptomatic or not).
In the absence of additional familial or environmental risk factors, individuals who test negative for the mutation found in the affected family member can be informed that they are at general population risk for that particular disease. Furthermore, they can be reassured that they are not at risk for passing the mutation on to their children. On the other hand, asymptomatic family members who test positive for the known mutation must be informed that they are at increased risk for disease development and for transmitting the alteration to their children. Pretest counseling and education are important, as is an assessment of the patient’s ability to understand and cope with test results. Genetic testing has implications for entire families, and thus individuals interested in pursuing genetic testing must consider how test results might impact their relationships with relatives, partners, spouses, and children. In families with a known genetic mutation, those who test positive must consider the impact of their carrier status on their present and future lifestyles; those who test negative may manifest survivor guilt.

FIGURE 84-2 Approach to genetic testing.

Parents who are found to have a disease-associated mutation often express considerable anxiety and despair as they address the issue of risk to their children. In addition, some individuals consider options such as preimplantation genetic diagnosis in their reproductive decision making. When a condition does not manifest until adulthood, clinicians and parents are faced with the question of whether at-risk children should be offered genetic testing and, if so, at what age. Although the matter is debated, several professional organizations have cautioned that genetic testing for adult-onset disorders should not be offered to children. Many of these conditions have no known interventions in childhood to prevent disease; consequently, such information can pose significant psychosocial risk to the child.
In addition, there is concern that testing during childhood violates a child’s right to make an informed decision regarding testing upon reaching adulthood. On the other hand, testing should be offered in childhood for disorders that may manifest early in life, especially when management options are available. For example, children with multiple endocrine neoplasia type 2 (MEN 2) may develop medullary thyroid cancer early in life and should be considered for prophylactic thyroidectomy (Chap. 408). Similarly, children with familial adenomatous polyposis (FAP) due to a mutation in APC may develop polyps in their teens, with progression to invasive cancer in their twenties; therefore, colonoscopy screening is started between the ages of 10 and 15 years (Chap. 110). Informed consent for genetic testing begins with education and counseling. The patient should understand the risks, benefits, and limitations of genetic testing, as well as the potential implications of test results. Informed consent should include a written document, drafted clearly and concisely in a language and format that is understandable to the patient. Because molecular genetic testing of an asymptomatic individual often allows prediction of future risk, the patient should understand all potential long-term medical, psychological, and social implications of testing. There have long been concerns about the potential for genetic discrimination. The Genetic Information Nondiscrimination Act (GINA), passed in 2008, provides some protections against discrimination in employment and health insurance. It is important to explore with patients the potential impact of genetic test results on future health as well as on disability and life insurance coverage. Patients should understand that alternatives remain available if they decide not to pursue genetic testing, including the option of delaying testing to a later date.
The option of DNA banking should be presented so that samples are readily available for future use by family members, if needed. Depending on the nature of the genetic disorder, posttest interventions may include (1) cautious surveillance and awareness; (2) specific medical interventions such as enhanced screening, chemoprevention, or risk-reducing surgery; (3) risk avoidance; and (4) referral to support services. For example, patients with known deleterious mutations in BRCA1 or BRCA2 are strongly encouraged to undergo risk-reducing salpingo-oophorectomy and are offered intensive breast cancer screening as well as the option of risk-reducing mastectomy. In addition, such women may wish to take chemoprevention with tamoxifen, raloxifene, or exemestane. Those with more limited medical management and prevention options, such as patients with Huntington's disease, should be offered continued follow-up and supportive services, including physical and occupational therapy and social services or support groups as indicated. Specific interventions will change as research continues to enhance our understanding of the medical management of these genetic conditions and more is learned about the functions of the gene products involved. Individuals who test negative for a mutation in a disease-associated gene identified in an affected family member must be reminded that they may still be at risk for the disease. This is of particular importance for common diseases such as diabetes mellitus, cancer, and coronary artery disease. For example, a woman who finds that she does not carry the disease-associated mutation in BRCA2 previously discovered in the family should be reminded that she still requires the same breast cancer screening recommended for the general population. Genetic counseling should be distinguished from genetic testing and screening, although genetic counselors are often involved in issues related to testing.
Genetic counseling refers to a communication process that deals with the human problems associated with the occurrence, or risk of occurrence, of a genetic disorder in a family. Genetic risk assessment is complex and often involves elements of uncertainty. Counseling, therefore, includes genetic education as well as psychosocial counseling. Genetic counseling can be useful in a wide range of situations (Table 84-1). The role of the genetic counselor includes the following:
1. Gather and document a detailed family history.
2. Educate patients about general genetic principles related to disease risk, both for themselves and for others in the family.
3. Assess and enhance the patient's ability to cope with the genetic information offered.
4. Discuss how nongenetic factors may relate to the ultimate expression of disease.
5. Address medical management issues.
6. Assist in determining the role of genetic testing for the individual and the family.
7. Ensure that the patient is aware of the indications, process, risks, benefits, and limitations of the various genetic testing options.
8. Assist the patient, family, and referring physician in the interpretation of the test results.
9. Refer the patient and other at-risk family members for additional medical and support services, if necessary.
Genetic counseling is generally offered in a nondirective manner, wherein patients learn to understand how their values factor into a particular medical decision. Nondirective counseling is particularly appropriate when there are no data demonstrating a clear benefit associated with a particular intervention or when an intervention is considered experimental. For example, nondirective genetic counseling is used when a person is deciding whether to undergo genetic testing for Huntington's disease. At this time, there is no clear benefit (in terms of medical outcome) to an at-risk individual undergoing genetic testing for this disease because its course cannot be altered by therapeutic interventions.
However, testing can have an important impact on the individual's approach to advance care planning and his or her interpersonal relationships and plans for childbearing. Therefore, the decision to pursue testing rests on the individual's belief system and values. On the other hand, a more directive approach is appropriate when a condition can be treated. In a family with FAP, colon cancer screening and prophylactic colectomy should be recommended for known APC mutation carriers. The counselor and clinician following this family must ensure that the at-risk family members have access to the resources necessary to adhere to these recommendations. Genetic education is central to an individual's ability to make an informed decision regarding testing options and treatment. An adequate knowledge of patterns of inheritance will allow patients to understand the probability of disease risk for themselves and other family members. It is also important to impart the concepts of disease penetrance and expression. For most complex adult-onset genetic disorders, asymptomatic patients should be advised that a positive test result does not always translate into future disease development. In addition, the role of nongenetic factors, such as environmental exposures and lifestyle, must be discussed in the context of multifactorial disease risk and disease prevention. Finally, patients should understand the natural history of the disease as well as the potential options for intervention, including screening, prevention, and, in certain circumstances, pharmacologic treatment or prophylactic surgery. Specific treatments are available for a number of genetic disorders.
Strategies for the development of therapeutic interventions have a long history in childhood metabolic diseases; however, these principles have been applied in the diagnosis and management of adult-onset diseases as well (Table 84-2). Hereditary hemochromatosis is usually caused by mutations in HFE (although other genes have been less commonly associated) and manifests as a syndrome of iron overload, which can lead to liver disease, skin pigmentation, diabetes mellitus, arthropathy, impotence in males, and cardiac dysfunction (Chap. 428). When identified early, the disorder can be managed effectively with therapeutic phlebotomy. Therefore, when the diagnosis of hemochromatosis has been made in a proband, it is important to counsel and offer testing to other family members in order to minimize the impact of the disorder. Preventive measures and therapeutic interventions are not restricted to metabolic disorders. Identification of familial forms of long QT syndrome, associated with ventricular arrhythmias, allows early electrocardiographic testing and the use of prophylactic antiarrhythmic therapy, overdrive pacemakers, or defibrillators. Individuals with familial hypertrophic cardiomyopathy can be screened by ultrasound, treated with beta blockers or other drugs, and counseled about the importance of avoiding strenuous exercise and dehydration. Those with Marfan's syndrome can be treated with beta blockers or angiotensin II receptor blockers and monitored for the development of aortic aneurysms. The field of pharmacogenetics identifies genes that alter drug metabolism or confer susceptibility to toxic drug reactions. Pharmacogenetics seeks to individualize drug therapy in an attempt to improve treatment outcomes and reduce toxicity.
Examples include thiopurine methyltransferase (TPMT) deficiency, dihydropyrimidine dehydrogenase deficiency, malignant hyperthermia, and glucose-6-phosphate dehydrogenase (G6PD) deficiency. Despite successes in this area, it is not always clear how to incorporate pharmacogenetics into clinical care. For example, although there is an association between CYP2C9 and VKORC1 genotypes and warfarin dosing, there is no evidence that incorporating genotyping into clinical practice improves patient outcomes. The identification of germline abnormalities that increase the risk of specific types of cancer is rapidly changing clinical management. Identifying family members with mutations that predispose to FAP or Lynch syndrome leads to recommendations of early cancer screening and prophylactic surgery, as well as consideration of chemoprevention and attention to healthy lifestyle habits. Similar principles apply to familial forms of melanoma as well as cancers of the breast, ovary, and thyroid. In addition to increased screening and prophylactic surgery, the identification of germline mutations associated with cancer may also lead to the development of targeted therapeutics, for example, the ongoing development of PARP inhibitors for those with BRCA-associated cancers. Although the role of genetic testing in the clinical setting continues to evolve, such testing holds the promise of allowing early and more targeted interventions that can reduce morbidity and mortality. Rapid technologic advances are changing the ways in which genetic testing is performed. As genetic testing becomes less expensive and technically easier to perform, it is anticipated that its use will expand. This will present challenges, but also opportunities.
It is critical that physicians and other health care professionals keep current with advances in genetic medicine in order to facilitate appropriate referral for genetic counseling and judicious use of genetic testing, as well as to provide state-of-the-art, evidence-based care for affected or at-risk patients and their relatives.
Chapter 84 The Practice of Genetics in Clinical Medicine
Chapter 85e Mitochondrial DNA and Heritable Traits and Diseases
Karl Skorecki, Doron Behar
Mitochondria are cytoplasmic organelles whose major function is to generate ATP by the process of oxidative phosphorylation under aerobic conditions. This process is mediated by the respiratory electron transport chain (ETC) multiprotein enzyme complexes I–V and the two electron carriers, coenzyme Q (CoQ) and cytochrome c. Other cellular processes to which mitochondria make a major contribution include apoptosis (programmed cell death) and additional cell type–specific functions (Table 85e-1). The efficiency of the mitochondrial ETC in ATP production is a major determinant of overall body energy balance and thermogenesis. In addition, mitochondria are the predominant source of reactive oxygen species (ROS), whose rate of production also relates to the coupling of ATP production to oxygen consumption. Given the centrality of oxidative phosphorylation to the normal activities of almost all cells, it is not surprising that mitochondrial dysfunction can affect almost any organ system (Fig. 85e-1). Thus, physicians in many disciplines might encounter patients with mitochondrial diseases and should be aware of their existence and characteristics. The integrated activity of an estimated 1500 gene products is required for normal mitochondrial biogenesis, function, and integrity. Almost all of these are encoded by nuclear genes and thus follow the rules and patterns of nuclear genomic inheritance (Chap. 84).
These nuclear-encoded proteins are synthesized in the cell cytoplasm and imported to their location of activity within the mitochondria through a complex biochemical process. In addition, the mitochondria contain their own small genome, consisting of numerous copies (polyploidy) per mitochondrion of a circular, double-stranded mitochondrial DNA (mtDNA) molecule comprising 16,569 nucleotides. This mtDNA sequence (also known as the "mitogenome") might represent the remnants of the endosymbiotic prokaryotes from which mitochondria are thought to have originated. The mtDNA sequence contains a total of 37 genes, of which 13 encode mitochondrial protein components of the ETC (Fig. 85e-2). The remaining 22 tRNA- and 2 rRNA-encoding genes are dedicated to the process of translating the 13 mtDNA-encoded proteins. This dual nuclear and mitochondrial genetic control of mitochondrial function results in unique and diagnostically challenging patterns of inheritance. The current chapter focuses on heritable traits and diseases related to the mtDNA component of this dual genetic control of mitochondrial function. The reader is referred to Chaps. 84 and 462e for consideration of mitochondrial disease originating from mutations in the nuclear genome. The latter include (1) disorders due to mutations in nuclear genes directly encoding structural components or assembly factors of the oxidative phosphorylation complexes; (2) disorders due to mutations in nuclear genes encoding proteins indirectly related to oxidative phosphorylation; and (3) disorders of mtDNA copy number in affected tissues without mutations or rearrangements in the mtDNA. As a result of its circular structure and extranuclear location, the replication and transcription mechanisms of mtDNA differ from the corresponding mechanisms in the nuclear genome, whose nucleosomal packaging and structure are more complex.
Because each cell contains many copies of mtDNA, and because the number of mitochondria can vary during the lifetime of each cell, mtDNA copy number is not directly coordinated with the cell cycle. Thus, vast differences in mtDNA copy number are observed between different cell types and tissues and during the lifetime of a cell. Another important feature of the mtDNA replication process is a reduced stringency of proofreading and replication error correction, leading to a greater degree of sequence variation compared to the nuclear genome. Some of these sequence variants are silent polymorphisms that do not have the potential for a phenotypic or pathogenic effect, whereas others may be considered pathogenic mutations. With respect to transcription, initiation can occur on both strands and proceeds through the production of an intronless polycistronic precursor RNA, which is then processed to produce the 13 individual mRNA and 24 individual tRNA and rRNA products. The 37 mtDNA genes comprise fully 93% of the 16,569 nucleotides of the mtDNA in what is known as the coding region. The control region consists of ~1.1 kilobases (kb) of noncoding DNA, which is thought to have an important role in replication and transcription initiation. In contrast to homologous pair recombination that takes place in the nucleus, mtDNA molecules do not undergo recombination, such that mutational events represent the only source of mtDNA genetic diversification. Moreover, with very rare exceptions, it is only the maternal DNA that is transmitted to the offspring. The fertilized oocyte degrades mtDNA carried from the sperm in a complex process involving the ubiquitin proteasome system. Thus, although mothers transmit their mtDNA to both their sons and daughters, only the daughters are able to transmit the inherited mtDNA to future generations. Accordingly, mtDNA sequence variation and associated phenotypic traits and diseases are inherited exclusively along maternal lines. 
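The gene counts and genome proportions quoted above can be checked with a few lines of arithmetic. The following is a purely illustrative sketch: the 37-gene breakdown and the 16,569-nucleotide length come from the text, while the control-region length is approximated at 1.1 kb as stated.

```python
# Illustrative sanity check of the mtDNA figures quoted in the text.
# The control-region length (~1.1 kb) is an approximation from the text.

MTDNA_LENGTH = 16_569      # nucleotides in the circular mtDNA molecule
PROTEIN_GENES = 13         # ETC protein components encoded by mtDNA
TRNA_GENES = 22            # tRNA-encoding genes
RRNA_GENES = 2             # rRNA-encoding genes
CONTROL_REGION = 1_100     # ~1.1 kb of noncoding DNA (approximate)

total_genes = PROTEIN_GENES + TRNA_GENES + RRNA_GENES
coding_fraction = (MTDNA_LENGTH - CONTROL_REGION) / MTDNA_LENGTH

print(total_genes)                 # 37 genes in total
print(round(coding_fraction, 2))   # 0.93, matching the "93% coding" figure
```

The arithmetic confirms that the stated numbers are internally consistent: 13 + 22 + 2 = 37 genes, and removing an ~1.1-kb control region from 16,569 nucleotides leaves roughly 93% of the molecule as coding sequence.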
As noted below, because of the complex relationship between mtDNA mutations and disease expression, this maternal inheritance is sometimes difficult to recognize at the clinical or pedigree level. However, evidence of paternal transmission can almost certainly rule out an mtDNA genetic origin of phenotypic variation or disease; conversely, a disease affecting both sexes without evidence of paternal transmission strongly suggests a heritable mtDNA disorder (Fig. 85e-2).
FIGURE 85e-2 Maternal inheritance of mitochondrial DNA (mtDNA) disorders and heritable traits. Affected women (filled circles) transmit the trait to their children. Affected men (filled squares) do not transmit the trait to any of their offspring.
MULTIPLE COPY NUMBER (POLYPLOIDY), HIGH MUTATION RATE, HETEROPLASMY, AND MITOTIC SEGREGATION
Each aerobic cell in the body has multiple mitochondria, often numbering many hundreds or more in cells with extensive energy production requirements. Furthermore, the number of copies of mtDNA within each mitochondrion varies from several to hundreds; this is true of somatic as well as germ cells, including oocytes in females. In the case of somatic cells, this means that the impact of most newly acquired somatic mutations is likely to be very small in terms of total cellular or organ system function; however, because of the manyfold higher mutation rate during mtDNA replication, numerous different mutations may accumulate with aging of the organism. It has been proposed that the total cumulative burden of acquired somatic mtDNA mutations with age may result in an overall perturbation of mitochondrial function, contributing to age-related reduction in the efficiency of oxidative phosphorylation and increased production of damaging ROS. The accumulation of such acquired somatic mtDNA mutations with aging may contribute to age-related diseases, such as metabolic syndrome and diabetes, cancer, and neurodegenerative and cardiovascular disease, in any given individual. However, somatic mutations are not carried forward to the next generation, and the hereditary impact of mtDNA mutagenesis requires separate consideration of events in the female germline.
The multiple mtDNA copy number within each cell, including the maternal germ cells, results in the phenomenon of heteroplasmy, in contrast to the much greater uniformity (homoplasmy) of the somatic nuclear DNA sequence. Heteroplasmy for a given mtDNA sequence variant or mutation arises as a result of the coexistence within a cell, tissue, or individual of mtDNA molecules bearing more than one version of that sequence variant (Fig. 85e-3). The importance of the heteroplasmy phenomenon to the understanding of mtDNA-related mitochondrial diseases is critical. The coexistence of mutant and nonmutant mtDNA and the variation of the mutant load among individuals from the same maternal sibship, and across organs and tissues within the same individual, play a pivotal role in the manifestation and severity of disease and are crucial to understanding the complexity of inheritance of mtDNA disorders.
At the level of the oocyte, the percentage of mtDNA molecules bearing each version of the polymorphic sequence variant or mutation depends on stochastic events related to partitioning of mtDNA molecules during the process of oogenesis itself. Thus, oocytes differ from each other in the degree of heteroplasmy for that sequence variant or mutation. In turn, the heteroplasmic state is carried forward to the zygote and to the organism as a whole, to varying degrees, depending on mitotic segregation of mtDNA molecules during organ system development and maintenance. For this reason, in vitro fertilization followed by preimplantation genetic diagnosis (PGD) is not as predictive of the genetic health of the offspring in the case of mtDNA mutations as in the case of the nuclear genome. Similarly, the impact of somatic mtDNA mutations acquired during development and subsequently also shows an enormous spectrum of variability. Mitotic segregation refers to the unequal distribution of wild-type and mutant versions of mtDNA molecules during all cell divisions that occur during prenatal development and subsequently throughout the lifetime of an individual. The phenotypic effect or disease impact will thus be a function not only of the inherent disruptive effect (pathogenicity) of the mutation on the mtDNA gene products (coding region mutations) or on the integrity of the mtDNA molecule (control region mutations), but also of its distribution among the multiple copies of mtDNA in the various mitochondria, cells, and tissues of the affected individual. Thus, one consequence can be the generation of a bottleneck due to the marked decline in given sets of mtDNA variants consequent to such mitotic segregation. Heterogeneity arises from differences in the degree of heteroplasmy among oocytes of the affected female, together with subsequent mitotic segregation of the pathogenic mutation during tissue and organ development and throughout the lifetime of the individual offspring. The actual expression of disease might then depend on a threshold percentage of mitochondria whose function is disrupted by mtDNA mutations. This in turn confounds hereditary transmission patterns and hence genetic diagnosis of pathogenic heteroplasmic mutations.
FIGURE 85e-1 Dual genetic control and multiple organ system manifestations of mitochondrial disease. (Reproduced with permission from DR Johns: Mitochondrial DNA and disease. N Engl J Med 333:638, 1995.)
Generally, if the proportion of mutant mtDNA is less than 60%, the individual is unlikely to be affected, whereas proportions exceeding 90% cause clinical disease. In contrast to classic mtDNA diseases, most of which begin in childhood and are the result of heteroplasmic mutations as noted above, during the course of human evolution certain mtDNA sequence variants have drifted to a state of homoplasmy, wherein all of the mtDNA molecules in the organism contain the new sequence variant. This arises due to a "bottleneck" effect followed by genetic drift during the very process of oogenesis itself (Fig. 85e-3). In other words, during certain stages of oogenesis, the mtDNA copy number becomes so substantially reduced that the particular mtDNA species bearing the novel or derived sequence variant may become the increasingly predominant, and eventually exclusive, version of the mtDNA for that particular nucleotide site. All of the offspring of a woman bearing an mtDNA sequence variant or mutation that has become homoplasmic will also be homoplasmic for that variant and will transmit the sequence variant forward in subsequent generations.
FIGURE 85e-3 Heteroplasmy and the mitochondrial genetic bottleneck. During the production of primary oocytes, a selected number of mitochondrial DNA (mtDNA) molecules are transferred into each oocyte. Oocyte maturation is associated with the rapid replication of this mtDNA population. This restriction-amplification event can lead to a random shift of mtDNA mutational load between generations and is responsible for the variable levels of mutated mtDNA observed in affected offspring from mothers with pathogenic mtDNA mutations. Mitochondria that contain mutated mtDNA are shown in red, and those with normal mtDNA are shown in green. (Reproduced with permission from R Taylor, D Turnbull: Mitochondrial DNA mutations in human disease. Nat Rev Genetics 6:389, 2005.)
Considerations of reproductive fitness limit the evolutionary or population emergence of homoplasmic mutations that are lethal or cause severe disease in infancy or childhood. Thus, with a number of notable exceptions (e.g., mtDNA mutations causing Leber's hereditary optic neuropathy; see below), most homoplasmic mutations are considered to be neutral markers of human evolution, which are useful and interesting in the population genetics analysis of shared maternal ancestry but which have little significance in human phenotypic variation or disease predisposition. More important is the understanding that this accumulation of homoplasmic mutations occurs at a genetic locus that is transmitted only through the female germline and that lacks recombination. In turn, this enables reconstruction of the sequential topology and radiating phylogeny of mutations accumulated through the course of human evolution since the time of the most recent common mtDNA ancestor of all contemporary mtDNA sequences, some 200,000 years ago. The term haplogroup is usually used to define major branching points in the human mtDNA phylogeny, nested one within the other, which often demonstrate striking continental geographic ancestral partitioning.
At the level of the complete mtDNA sequence, the term haplotype is usually used to describe the sum of mutations observed for a given mtDNA sequence as compared to a reference sequence, such that all haplotypes falling within a given haplogroup share the total sum of mutations that have accumulated since the most recent common ancestor and the bifurcation point they mark. The remaining observed variants are private to each haplotype. Consequently, the human mtDNA sequence is an almost perfect molecular prototype for a nonrecombining locus, and its variation has been extensively used in phylogenetic studies. Moreover, the mtDNA mutation rate is considerably higher than the rate observed for the nuclear genome, especially in the control region, which contains the displacement loop (D-loop), in turn comprising two adjacent hypervariable regions (HVR-I and HVR-II). Together with the absence of recombination, this amplifies drift to high frequencies of novel haplotypes. As a result, mtDNA haplotypes are more highly partitioned across geographically defined populations than sequence variants in other parts of the genome. Despite extensive research, it has not been well established that such haplotype-based partitioning has a significant influence on human health conditions. However, mtDNA-based phylogenetic analysis can be used both as a quality assurance tool and as a filter in distinguishing neutral mtDNA variants comprising the human mtDNA phylogeny from potentially deleterious mutations.
The true prevalence of mtDNA disease is difficult to estimate because of the phenotypic heterogeneity that occurs as a function of heteroplasmy, the challenge of detecting and assessing heteroplasmy in different affected tissues, and the other unique features of mtDNA function and inheritance described above. Many individuals are estimated to harbor a mutation with the potential to cause disease, but mtDNA disease is estimated to actually affect up to approximately 1 in 8500 individuals. The true disease burden relating to mtDNA sequence variation will only be known when the following capabilities become available: (1) the ability to distinguish a phenotype-modifying or pathogenic mutation from a neutral variant; (2) accurate assessment of heteroplasmy that can be determined with fidelity; and (3) a systems biology approach (Chap. 87e) to determine the network of epistatic interactions of mtDNA sequence variations with mutations in the nuclear genome.
Given the vital roles of mitochondria in all nucleated cells, it is not surprising that mtDNA mutations can affect numerous tissues with pleiotropic effects. More than 200 different disease-causing, mostly heteroplasmic mtDNA mutations have been described affecting ETC function. Figure 85e-4 provides a partial mtDNA map of some of the better characterized of these disorders. A number of clinical clues can increase the index of suspicion for a heteroplasmic mtDNA mutation as an etiology of a heritable trait or disease, including (1) familial clustering with absence of paternal transmission; (2) adherence to one of the classic syndromes (see below) or paradigmatic combinations of disease phenotypes involving several organ systems that normally do not fit together within a single nuclear genomic mutation category; (3) a complex of laboratory and pathologic abnormalities that reflect disruption in cellular energetics (e.g., lactic acidosis and neurodegenerative and myodegenerative symptoms with the finding of ragged red fibers, reflecting the accumulation of abnormal mitochondria under the muscle sarcolemmal membrane); or (4) a mosaic pattern reflecting a heteroplasmic state. Heteroplasmy can sometimes be elegantly demonstrated at the tissue level using histochemical staining for enzymes in the oxidative phosphorylation pathway, with a mosaic pattern indicating heterogeneity of the genotype for the coding region for the mtDNA-encoded enzyme. Complex II, CoQ, and cytochrome c are exclusively encoded by nuclear DNA. In contrast, complexes I, III, IV, and V contain at least some subunits encoded by mtDNA. Just 3 of the 13 subunits of the ETC complex IV enzyme, cytochrome c oxidase, are encoded by mtDNA, and, therefore, this enzyme has the lowest threshold for dysfunction when a threshold level of mutated mtDNA is reached.
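The threshold behavior of heteroplasmic mutant loads described above (levels below roughly 60% usually silent, levels above roughly 90% usually producing clinical disease) can be sketched as a simple classifier. This is purely illustrative: the function name and the handling of the intermediate range are assumptions, and real thresholds vary with the specific mutation, the tissue, and its energy requirements.

```python
# Illustrative sketch of the heteroplasmy threshold effect described in the
# text. The ~60% and ~90% cutoffs are the rough figures quoted above; actual
# thresholds differ by mutation and by tissue energy demand.

def classify_mutant_load(mutant_fraction: float) -> str:
    """Classify a heteroplasmic mutant mtDNA load given as a fraction 0.0-1.0."""
    if not 0.0 <= mutant_fraction <= 1.0:
        raise ValueError("mutant_fraction must be between 0 and 1")
    if mutant_fraction < 0.60:
        return "unlikely to be affected"
    if mutant_fraction <= 0.90:
        # Assumption: the text leaves the intermediate zone unspecified,
        # consistent with variable expression in this range.
        return "variable expression possible"
    return "clinical disease expected"

print(classify_mutant_load(0.30))   # unlikely to be affected
print(classify_mutant_load(0.95))   # clinical disease expected
```

The sharp cutoffs here stand in for what is biologically a continuous, tissue-dependent relationship between mutant load and ETC dysfunction.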
Histochemical staining for cytochrome c oxidase activity in tissues of patients affected with heteroplasmic inherited mtDNA mutations (or with the somatic accumulation of mtDNA mutations; see below) can show a mosaic pattern of reduced histochemical staining in comparison with histochemical staining for the complex II enzyme, succinate dehydrogenase (Fig. 85e-5).
FIGURE 85e-4 Mutations in the human mitochondrial genome known to cause disease. Disorders that are frequently or prominently associated with mutations in a particular gene are shown in boldface. Diseases due to mutations that impair mitochondrial protein synthesis are shown in blue. Diseases due to mutations in protein-coding genes are shown in red. ECM, encephalomyopathy; FBSN, familial bilateral striatal necrosis; LHON, Leber's hereditary optic neuropathy; LS, Leigh syndrome; MELAS, mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes; MERRF, myoclonic epilepsy with ragged red fibers; MILS, maternally inherited Leigh syndrome; NARP, neuropathy, ataxia, and retinitis pigmentosa; PEO, progressive external ophthalmoplegia; PPK, palmoplantar keratoderma; SIDS, sudden infant death syndrome. (Reproduced with permission from S DiMauro, E Schon: Mitochondrial respiratory-chain diseases. N Engl J Med 348:2656, 2003.)
Heteroplasmy can also be detected at the genetic level through direct Sanger-type mtDNA genotyping under special conditions, although clinically significant low levels of heteroplasmy can escape detection in genomic samples extracted from whole blood using conventional genotyping and sequencing techniques. Emerging next-generation sequencing (NGS) techniques, with their rapid penetration and recognition as useful clinical diagnostic tools, are expected to dramatically improve the clinical genetic diagnostic evaluation of mitochondrial diseases at the level of both the nuclear genome and the mtDNA. In the context of the larger nuclear genome, the ability of NGS techniques to dramatically increase the speed at which DNA can be sequenced, at a fraction of the cost of conventional Sanger-type sequencing technology, is particularly beneficial. Low sequencing costs and short turnaround times expedite "first-tier" screening of panels of hundreds of previously known or suspected mitochondrial disease genes, or screening of the entire exome or genome in an attempt to identify novel genes and mutations affecting different patients or families. In the context of the mtDNA, NGS approaches hold particular promise for rapid and reliable detection of heteroplasmy in different affected tissues. Although Sanger sequencing allows for complete coverage of the mtDNA, it is limited by a lack of deep coverage and low sensitivity for heteroplasmy detection when the heteroplasmy level is much less than 50%. In contrast, NGS technology is an excellent tool for rapidly and accurately obtaining both a patient's predominant mtDNA sequence and lower frequency heteroplasmic variants. This is enabled by deep coverage of the genome through multiple independent sequence reads.
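As a rough illustration of why deep coverage matters, the heteroplasmy level at a single mtDNA position can be estimated as the fraction of aligned reads carrying the variant allele. The function below is a hypothetical sketch, not a real variant-calling pipeline; production tools must also model sequencing error, strand bias, and alignment artifacts.

```python
# Hypothetical sketch: estimating heteroplasmy at one mtDNA position from
# aligned read counts. Real pipelines must additionally handle sequencing
# error, strand bias, and nuclear mtDNA pseudogene (NUMT) misalignment.

def heteroplasmy_level(variant_reads: int, total_reads: int) -> float:
    """Fraction of reads supporting the variant allele at a site."""
    if total_reads <= 0 or variant_reads < 0 or variant_reads > total_reads:
        raise ValueError("invalid read counts")
    return variant_reads / total_reads

# At 1000x coverage, a 5% heteroplasmic variant is still supported by ~50
# independent reads, whereas a Sanger trace at this level is hard to call.
print(heteroplasmy_level(50, 1000))   # 0.05
```

This is the sense in which multiple independent reads enable detection of low-level heteroplasmy: depth converts a small allele fraction into a countable number of supporting observations.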
Accordingly, recent studies making use of NGS techniques have demonstrated sequence accuracy equivalent to that of Sanger-type sequencing but have also uncovered heretofore unappreciated heteroplasmy rates of 10–50% and detection of single-nucleotide heteroplasmy down to levels of <10%. Clinically, the most striking overall characteristic of mitochondrial genetic disease is the phenotypic heterogeneity associated with mtDNA mutations. This extends to intrafamilial phenotypic heterogeneity for the same mtDNA pathogenic mutation and, conversely, to the overlap of phenotypic disease manifestations among distinct mutations. Thus, although fairly consistent and well-defined "classic" syndromes have been attributed to specific mutations, "nonclassic" combinations of disease phenotypes ranging from isolated myopathy to extensive multisystem disease are often encountered, rendering genotype-phenotype correlation challenging. In both classic and nonclassic mtDNA disorders, there is often a clustering of some combination of abnormalities affecting the neurologic system (including optic nerve atrophy, pigment retinopathy, and sensorineural hearing loss), cardiac and skeletal muscle (including extraocular muscles), and endocrine and metabolic systems (including diabetes mellitus). Additional organ systems that may be affected include the hematopoietic, renal, hepatic, and gastrointestinal systems, although these are more frequently involved in infants and children. Disease-causing mtDNA coding region mutations can affect either one of the 13 protein-encoding genes or one of the 24 protein synthetic genes. Clinical manifestations do not readily distinguish these two categories, although lactic acidosis and muscle pathologic findings tend to be more prominent in the latter.
In all cases, either defective ATP production due to disturbances in the ETC or enhanced generation of ROS has been invoked as the mediating biochemical mechanism between mtDNA mutation and disease manifestation. The clinical presentation of adult patients with mtDNA disease can be divided into three categories: (1) clinical features suggestive of mitochondrial disease (Table 85e-2), but not a well-defined classic syndrome; (2) classic mtDNA syndromes; and (3) clinical presentation confined to one organ system (e.g., isolated sensorineural deafness, cardiomyopathy, or diabetes mellitus). Table 85e-3 provides a summary of eight illustrative classic mtDNA syndromes or disorders that affect adult patients and highlights some of the most interesting features of mtDNA disease in terms of molecular pathogenesis, inheritance, and clinical presentation. The first five of these syndromes result from heritable point mutations in either protein-encoding or protein synthetic mtDNA genes; the other three result from rearrangements or deletions that usually do not involve the germline.

TABLE 85e-2 Clinical Features Suggestive of Mitochondrial Disease
Neurologic: stroke, epilepsy, migraine headache, peripheral neuropathy, cranial neuropathy (optic atrophy, sensorineural deafness, dysphagia, dysphasia)
Skeletal myopathy: ophthalmoplegia, exercise intolerance, myalgia
Cardiac: conduction block, cardiomyopathy
Respiratory: hypoventilation, aspiration pneumonitis
Endocrine: diabetes mellitus, premature ovarian failure, hypothyroidism, hypoparathyroidism
Ophthalmologic: cataracts, pigment retinopathy; neurologic and myopathic (optic atrophy, ophthalmoplegia)

FIGURE 85e-5 Cytochrome c oxidase (COX) deficiency in mitochondrial DNA (mtDNA)–associated disease. Transverse tissue sections that have been stained for COX and succinate dehydrogenase (SDH) activities sequentially, with COX-positive cells shown in brown and COX-deficient cells shown in blue. A. Skeletal muscle from a patient with a heteroplasmic mitochondrial tRNA point mutation. The section shows a typical "mosaic" pattern of COX activity, with many muscle fibers harboring levels of mutated mtDNA that are above the crucial threshold to produce a functional enzyme complex. B. Cardiac tissue (left ventricle) from a patient with a homoplasmic tRNA mutation that causes hypertrophic cardiomyopathy, which demonstrates an absence of COX in most cells. C. A section of cerebellum from a patient with an mtDNA rearrangement that highlights the presence of COX-deficient neurons. D, E. Tissues that show COX deficiency due to clonal expansion of somatic mtDNA mutations within single cells, a phenomenon that is seen in both postmitotic cells (D; extraocular muscles) and rapidly dividing cells (E; colonic crypt) in aging humans. (Reproduced with permission from R Taylor, D Turnbull: Mitochondrial DNA mutations in human disease. Nat Rev Genetics 6:389, 2005.)

Leber's hereditary optic neuropathy (LHON) is a common cause of maternally inherited visual failure. LHON typically presents during young adulthood with subacute painless loss of vision in one eye, with symptoms developing in the other eye 6–12 weeks after the initial onset. In some instances, cerebellar ataxia, peripheral neuropathy, and cardiac conduction defects are observed. In >95% of cases, LHON is due to one of three homoplasmic point mutations of mtDNA that affect genes encoding different subunits of complex I of the mitochondrial ETC; however, not all individuals who inherit a primary LHON mtDNA mutation develop optic neuropathy, and males are four to five times more likely than females to be affected, indicating that additional environmental (e.g., tobacco exposure) or genetic factors are important in the etiology of the disorder. Both the nuclear and mitochondrial genomic backgrounds modify disease penetrance. Indeed, a region of the X chromosome containing a high-risk haplotype for LHON was recently identified, supporting the formulation that nuclear genes act as modifiers and affording an explanation for the male prevalence of LHON. This haplotype can be used in predictive genomic testing and prenatal screening for this disease. In contrast to the other classic mtDNA disorders, it is of interest that patients with this syndrome are often homoplasmic for the disease-causing mutation. The somewhat later onset in young adulthood and the modifying effect of protective background nuclear genomic haplotypes may have enabled homoplasmic pathogenic mutations to escape evolutionary censoring.

Mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) is a multisystem disorder with a typical onset between 2 and 10 years of age. Following normal early psychomotor development, the most common initial symptoms are seizures, recurrent headaches, anorexia, and recurrent vomiting. Exercise intolerance or proximal limb weakness can be the initial manifestation, followed by generalized tonic-clonic seizures. Short stature is common. Seizures are often associated with stroke-like episodes of transient hemiparesis or cortical blindness that may produce altered consciousness and may recur. The cumulative residual effects of the stroke-like episodes gradually impair motor abilities, vision, and cognition, often by adolescence or young adulthood. Sensorineural hearing loss adds to the progressive decline of these individuals. A plethora of less common symptoms has been described, including myoclonus, ataxia, episodic coma, optic atrophy, cardiomyopathy, pigmentary retinopathy, ophthalmoplegia, diabetes mellitus, hirsutism, gastrointestinal dysmotility, and nephropathy. The typical age of death ranges from 10 to 35 years, but some individuals live into their sixth decade.
Intercurrent infections or intestinal obstructions are often the terminal events. Laboratory investigation commonly demonstrates elevated lactate concentrations at rest, with an excessive increase after moderate exercise. Brain imaging during stroke-like episodes shows areas of increased T2 signal, typically involving the posterior cerebrum and not conforming to the distribution of major arteries. The electrocardiogram (ECG) may show evidence of cardiomyopathy, preexcitation, or incomplete heart block. Electromyography and nerve conduction studies are consistent with a myopathic process, but axonal and sensory neuropathy may coexist. Muscle biopsy typically shows ragged red fibers with the modified Gomori trichrome stain or "ragged blue fibers" resulting from the hyperintense reaction with histochemical staining for succinate dehydrogenase. The diagnosis of MELAS is based on a combination of clinical findings and molecular genetic testing. Mutations in the mtDNA gene MT-TL1, encoding tRNA(Leu), are causative. The most common mutation, present in approximately 80% of individuals with typical clinical findings, is an A-to-G transition at nucleotide 3243 (m.3243A>G). Mutations can usually be detected in mtDNA from leukocytes in individuals with typical MELAS; however, the occurrence of heteroplasmy can result in varying tissue distribution of mutated mtDNA. In the absence of specific treatment, the various manifestations of MELAS are treated according to standard modalities for prevention, surveillance, and treatment. Myoclonic epilepsy with ragged red fibers (MERRF) is a multisystem disorder characterized by myoclonus, seizures, ataxia, and myopathy with ragged red fibers. Hearing loss, exercise intolerance, neuropathy, and short stature are often present. Almost all MERRF patients have a mutation in the mtDNA tRNA(Lys) gene; the m.8344A>G mutation in this gene is responsible for 80–90% of MERRF cases.
Neuropathy, ataxia, and retinitis pigmentosa (NARP) is characterized by moderate diffuse cerebral and cerebellar atrophy and symmetric lesions of the basal ganglia on magnetic resonance imaging (MRI). A heteroplasmic m.8993T>G mutation in the ATPase 6 subunit gene has been identified as causative. Ragged red fibers are not observed on muscle biopsy. When >95% of mtDNA molecules are mutant, a more severe clinical, neuroradiologic, and neuropathologic picture (Leigh syndrome) emerges. Point mutations in the mtDNA gene encoding the 12S rRNA result in heritable nonsyndromic hearing loss. One such mutation causes heritable ototoxic susceptibility to aminoglycoside antibiotics, which opens a pathway for a simple pharmacogenetic test in the appropriate clinical settings. Kearns-Sayre syndrome (KSS), sporadic progressive external ophthalmoplegia (PEO), and Pearson syndrome are three disease phenotypes caused by large-scale mtDNA rearrangements, including partial deletions or partial duplications. The majority of single large-scale rearrangements of mtDNA are thought to result from clonal amplification of a single sporadic mutational event occurring in the maternal oocyte or during early embryonic development. Because germline involvement is rare, most cases are sporadic rather than inherited. KSS is characterized by the triad of onset before age 20, chronic progressive external ophthalmoplegia, and pigmentary retinopathy. Cerebellar syndrome, heart block, increased cerebrospinal fluid protein content, diabetes mellitus, and short stature are also part of the syndrome. Single deletions/duplications can also result in milder phenotypes such as PEO, characterized by late-onset progressive external ophthalmoplegia, proximal myopathy, and exercise intolerance. In both KSS and PEO, diabetes mellitus and hearing loss are frequent accompaniments.
Pearson syndrome is also characterized by diabetes mellitus from pancreatic insufficiency, together with pancytopenia and lactic acidosis, caused by the large-scale sporadic deletion of several mtDNA genes. Two important dilemmas in classic mtDNA disease have benefited from recent important research insights. The first relates to the predominance of neuronal, muscular, renal, hepatic, and pancreatic manifestations in these syndromes. This observation has appropriately been mostly attributed to the high energy utilization of the involved tissues and organ systems and, hence, their greater dependency on mitochondrial ETC integrity and health. However, because mutations are stochastic events, mitochondrial mutations should occur in any organ during embryogenesis and development. Recently, additional explanations have been suggested based on studies of the common m.3243A>G transition. The proportion of this mutation in peripheral blood cells was shown to decrease exponentially with age. A selective process acting at the stem cell level with a strong bias against the mutated form would have its greatest effect in reducing the mutant mtDNA only in highly proliferating cells, such as those derived from the hematopoietic system. Tissues and organs with lower cell turnover, such as those involved in mtDNA disease, would not benefit from this effect and, thus, would be the most affected. The other dilemma arises from the observation that only a subset of mtDNA mutations accounts for the majority of familial mtDNA diseases.
The random occurrence of mutations in the mtDNA sequence should yield a more uniform distribution of disease-causing mutations. However, recent studies in which one severe and one mild point mutation were introduced into the female germline of experimental animals demonstrated selective elimination during oogenesis of the severe mutation and selective retention of the milder mutation, with the emergence of mitochondrial disease in offspring after multiple generations. Thus, oogenesis itself can act as an "evolutionary" filter for mtDNA disease. The clinical presentation of a classic syndrome, a grouping of disease manifestations in multiple organ systems, or an unexplained isolated presentation of one of the disease features of a classic mtDNA syndrome should prompt a systematic clinical investigation, as outlined in Fig. 85e-6. Indeed, mitochondrial disease should be considered in the differential diagnosis of any progressive multisystem disorder. Despite the centrality of disrupted oxidative phosphorylation, an elevated blood lactate level is neither specific nor sensitive, because there are many causes of blood lactic acidosis, and many patients with mtDNA defects presenting in adulthood have normal blood lactate. An elevated cerebrospinal fluid lactate is a more specific test for mitochondrial disease if there is central nervous system involvement. The serum creatine kinase may be elevated but is often normal, even in the presence of a proximal myopathy. Urinary organic and amino acids may also be abnormal, reflecting metabolic and kidney proximal tubule dysfunction. Every patient with seizures or cognitive decline should have an electroencephalogram. A brain computed tomography (CT) scan may show calcified basal ganglia or bilateral hypodense regions with cortical atrophy. MRI is indicated in patients with brainstem signs or stroke-like episodes.
FIGURE 85e-6 Clinical and laboratory investigation of a suspected mitochondrial DNA (mtDNA) disorder. The workup includes blood (creatine kinase, liver functions, glucose, lactate); urine (organic and amino acids); CSF (glucose, protein, lactate); cardiac studies (x-ray, ECG, ECHO); EEG, EMG, and nerve conduction studies; brain CT/MRI; and PCR/RFLP analysis of blood for known mutations in the specific point mutation syndromes (e.g., MELAS, MERRF, and LHON). CSF, cerebrospinal fluid; CT, computed tomography; ECG, electrocardiogram; ECHO, echocardiography; EEG, electroencephalogram; EMG, electromyogram; LHON, Leber's hereditary optic neuropathy; MELAS, mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes; MERRF, myoclonic epilepsy with ragged red fibers; MRI, magnetic resonance imaging; PCR, polymerase chain reaction; RFLP, restriction fragment length polymorphism.

For some mitochondrial diseases, it is possible to obtain an accurate diagnosis with a simple molecular genetic screen. For example, 95% of patients with LHON harbor one of three mtDNA point mutations (m.11778A>G, m.3460A>G, or m.14484T>C). These patients have very high levels of mutated mtDNA in peripheral blood cells, and it is therefore appropriate to send a blood sample for molecular genetic analysis by polymerase chain reaction (PCR) or restriction fragment length polymorphism (RFLP) analysis. The same is true for most MERRF patients, who harbor a point mutation in the lysine tRNA gene at position 8344. In contrast, patients with the m.3243A>G MELAS mutation often have low levels of mutated mtDNA in blood. If clinical suspicion is strong enough to warrant peripheral blood testing, then patients with a negative result should be investigated further by performing a skeletal muscle biopsy. Muscle biopsy histochemical analysis is the cornerstone for investigation of patients with suspected mitochondrial disease. Histochemical analysis may show subsarcolemmal accumulation of mitochondria with the appearance of ragged red fibers.
Electron microscopy might show abnormal mitochondria with paracrystalline inclusions. Muscle histochemistry may show cytochrome c oxidase (COX)–deficient fibers, which indicate mitochondrial dysfunction (Fig. 85e-5). Respiratory chain complex assays may also show reduced enzyme function. Either of these two abnormalities confirms the presence of a mitochondrial disease, to be followed by an in-depth molecular genetic analysis. Recent evidence has provided important insights into the importance of nuclear-mtDNA genomic cross-talk and has provided a descriptive framework for classifying and understanding disorders that emanate from perturbations in this cross-talk. Although these are not strictly considered mtDNA genetic disorders, their manifestations overlap those highlighted above (Fig. 85e-7). The relationship among the degree of heteroplasmy, the tissue distribution of the mutant mtDNA, and the disease phenotype simplifies inference of a clear causative relationship between a heteroplasmic mutation and disease. With the exception of certain mutations (e.g., those causing most cases of LHON), drift to homoplasmy of such mutations would normally be precluded by the severity of impaired oxidative phosphorylation and the consequent reduction in reproductive fitness. Therefore, sequence variants that have reached homoplasmy should be neutral in terms of human evolution and, hence, useful only for tracing human evolution, demography, and migration, as described above. One important exception involves the homoplasmic population-level variants that designate mtDNA haplogroup J and their interaction with the mtDNA mutations causing LHON. The reduced disease predilection observed on this background suggests that one or more of the ancient sequence variants designating mtDNA haplogroup J attenuates predisposition to degenerative disease in the face of other risk factors.
FIGURE 85e-7 Disorders associated with perturbations in nuclear-mitochondrial genomic cross-talk. Clinical features and genes associated with multiple mitochondrial DNA (mtDNA) deletions, mtDNA depletion, and mitochondrial neurogastrointestinal encephalomyopathy syndromes; genes shown include succinyl-CoA synthase (SUCLA2, SUCLG1). ANT, adenine nucleotide translocators; adPEO, autosomal dominant progressive external ophthalmoplegia; arPEO, autosomal recessive progressive external ophthalmoplegia; IOSCA, infantile-onset spinocerebellar ataxia; SCAE, spinocerebellar ataxia and epilepsy. (Reproduced with permission from A Spinazzola, M Zeviani: Disorders from perturbations of nuclear-mitochondrial intergenomic cross-talk. J Intern Med 265:174, 2009.)

Whether additional epistatic interactions between population-level mtDNA haplotypes and common health conditions will be found remains to be determined. If such influences do exist, then they are more likely to be relevant to health conditions in the postreproductive age groups, wherein evolutionary filters would not have had the opportunity to censor deleterious effects and interactions and wherein the effects of oxidative stress may play a role. Although much has been written about possible associations of population-level common mtDNA variants with human health and disease phenotypes or adaptation to different environmental influences (e.g., climate), a word of caution is in order. Many studies that purport to show such associations with phenotypes such as longevity, athletic performance, and metabolic and neurodegenerative disease are limited by small sample sizes, possible genotyping inaccuracies, and the possibility of population stratification or ethnic ancestry bias. Because mtDNA haplogroups are so prominently partitioned along phylogeographic lines, it is difficult to rule out the possibility that a haplogroup for which an association has been found is simply a marker for differences in
populations with a societal or environmental difference or with different allele frequencies at other genomic loci that are actually causally related to the heritable trait or disease of interest. The difficulty in generating cellular or animal models to test the functional influence of homoplasmic sequence variants (a result of mtDNA polyploidy) further compounds the challenge. The most likely formulation is that the risk conferred by different mtDNA haplogroup–defining homoplasmic mutations for common diseases depends on the concomitant nuclear genomic background, together with environmental influences. Progress in minimizing potentially misleading associations in mtDNA heritable trait and disease studies should include ensuring an adequate sample size drawn from a large recruitment base, using carefully matched controls and population structure determination, and performing analyses that take into account epistatic interactions with other genomic loci and environmental factors. Studies on aging humans and animals have shown a potentially important correlation of age with the accumulation of heterogeneous mtDNA mutations, especially in those organ systems that undergo the most prominent age-related degenerative tissue phenotypes. Sequencing of PCR-amplified single mtDNA molecules has demonstrated an average of two to three point mutations per molecule in elderly subjects when compared with younger ones. Point mutations observed include those responsible for known heritable heteroplasmic mtDNA disorders, such as the m.8344A>G and m.3243A>G mutations responsible for the MERRF and MELAS syndromes, respectively. However, the cumulative burden of these acquired somatic point mutations with age was observed to remain well below the threshold expected for phenotypic expression (<2%).
Point mutations at other sites not normally involved in inherited mtDNA disorders have also been shown to accumulate to much higher levels in some tissues of elderly individuals, with the description of tissue-specific "hot spots" for mtDNA point mutations. Along the same lines, an age-associated and tissue-specific accumulation of mtDNA deletions has been observed, including deletions involved in known heritable mtDNA disorders as well as others. The accumulation of functional mtDNA deletions in a given tissue is expected to be associated with mitochondrial dysfunction, as reflected in an age-associated patchy and reduced COX activity on histochemical staining, especially in skeletal and cardiac muscle and brain. A particularly well-studied and potentially important example is the accumulation of mtDNA deletions and COX deficiency observed in neurons of the substantia nigra in Parkinson's disease patients. The progressive accumulation of ROS has been proposed as the key factor connecting mtDNA mutations with aging and age-related disease pathogenesis (Fig. 85e-8). As noted above, ROS are a by-product of oxidative phosphorylation and are removed by detoxifying antioxidants into less harmful moieties; however, exaggerated production of ROS or impaired removal results in their accumulation. One of the main targets for ROS-mediated injury is DNA, and mtDNA is particularly vulnerable because of its lack of protective histones and its less efficient injury repair systems compared with nuclear DNA. In turn, accumulation of mtDNA mutations results in inefficient oxidative phosphorylation, with the potential for excessive production of ROS, generating a "vicious cycle" of cumulative mtDNA damage. Indeed, measurements of the oxidative stress biomarker 8-hydroxy-2′-deoxyguanosine have documented age-dependent increases in mtDNA oxidative damage at a rate exceeding that of nuclear DNA.
It should be noted that mtDNA mutation can potentially occur in postmitotic cells as well, because mtDNA replication is not synchronized with the cell cycle. Two other proposed links between mtDNA mutation and aging, besides ROS-mediated tissue injury, are perturbations in the efficiency of oxidative phosphorylation, with disturbed cellular aerobic function, and perturbations in apoptotic pathways, whose execution steps involve mitochondrial activity.

FIGURE 85e-8 Multiple pathways of mitochondrial DNA (mtDNA) damage and aging. Multiple factors may impinge on the integrity of mitochondria and lead to loss of cell function, apoptosis, and aging. The classic pathway is indicated with blue arrows; the generation of reactive oxygen species (ROS; superoxide anion, hydrogen peroxide, and hydroxyl radicals), as a by-product of mitochondrial oxidative phosphorylation, results in damage to mitochondrial macromolecules, including the mtDNA, with the latter leading to deleterious mutations. When these factors damage the mitochondrial energy-generating apparatus beyond a functional threshold, proteins are released from the mitochondria that activate the caspase pathway, leading to apoptosis, cell death, and aging. (Reproduced with permission from L Loeb et al: The mitochondrial theory of aging and its relationship to reactive oxygen species damage and somatic mtDNA mutations. Proc Natl Acad Sci USA 102:18769, 2005.)

Genetic intervention studies in animal models have sought to clarify the potential causative relationship between acquired somatic mtDNA mutation and the aging phenotype, and the role of ROS in particular. Replication of the mitochondrial genome is mediated by the activity of the nuclear-encoded polymerase gamma gene. A transgenic homozygous mouse knock-in mutation of this gene renders the polymerase enzyme deficient in proofreading and results in a three- to fivefold increase in the mtDNA mutation rate.
Such mice develop a premature aging phenotype that includes subcutaneous lipoatrophy, alopecia, kyphosis, and weight loss, with premature death. Although the finding of increased mtDNA mutation and mitochondrial dysfunction with age has been solidly established, the causative role and specific contribution of mitochondrial ROS to aging and age-related disease in humans have yet to be proved. Similarly, although many tumors display higher levels of heterogeneous mtDNA mutations, a causal relationship to tumorigenesis has not been proved. Besides the age-dependent accumulation of heterogeneous point mutations and deletions in somatic cells, a quite different effect of nonheritable, acquired mtDNA mutation has been described affecting tissue stem cells. In particular, disease phenotypes attributed to acquired mtDNA mutation have been observed in sporadic and apparently nonfamilial cases involving a single individual or even a single tissue, usually skeletal muscle. The presentation consists of decreased exercise tolerance and myalgias, sometimes progressing to rhabdomyolysis. As in the case of the sporadic, heteroplasmic, large-scale deletion classic syndromes of chronic PEO, Pearson syndrome, and KSS, the absence of a maternal inheritance pattern, together with the finding of limited tissue distribution, suggests a molecular pathogenic mechanism emanating from mutations arising de novo in muscle stem cells after germline differentiation (somatic mutations that are not sporadic and that occur in tissue-specific stem cells during fetal development or in the postnatal maintenance or postinjury repair stage). Such mutations would be expected to be propagated only within the progeny of that stem cell and to affect a particular tissue within a given individual, without evidence of heritability. No specific curative treatment for mtDNA disorders is currently available; therefore, the management of mitochondrial disease is largely supportive.
Management issues may include early diagnosis and treatment of diabetes mellitus, cardiac pacing, ptosis correction, and intraocular lens replacement for cataracts. Less specific interventions for other disorders involve combined treatment strategies, including dietary intervention and removal of toxic metabolites. Cofactors and vitamin supplements are widely used in the treatment of diseases of mitochondrial oxidative phosphorylation, although there is little evidence, apart from anecdotal reports, to support their use. Such treatments include administration of artificial electron acceptors, such as vitamin K3, vitamin C, and ubiquinone (coenzyme Q10); administration of cofactors (coenzymes), including riboflavin, carnitine, and creatine; and use of oxygen radical scavengers, such as vitamin E, copper, selenium, ubiquinone, and idebenone. Drugs that can interfere with mitochondrial function, such as the anesthetic agent propofol, barbiturates, and high doses of valproate, should be avoided. Supplementation with the nitric oxide synthase substrate L-arginine has been advocated as a vasodilator treatment during stroke-like episodes. The physician should also be familiar with environmental interactions, such as the strong and consistent association between visual loss in LHON and smoking; a clinical penetrance of 93% was found in men who smoked. Asymptomatic carriers of an LHON mtDNA mutation should therefore be strongly advised not to smoke and to moderate their alcohol intake. Although not a cure, these interventions might stave off the devastating clinical manifestations of the LHON mutation. Another example is strict avoidance of aminoglycosides in the familial syndrome of ototoxic susceptibility to aminoglycosides in the presence of the m.1555A>G mutation of the mtDNA 12S rRNA–encoding gene.
GENETIC COUNSELING, PRENATAL DIAGNOSIS, AND PREIMPLANTATION GENETIC DIAGNOSIS IN MTDNA DISORDERS

The provision of accurate genetic counseling and reproductive options to families with mtDNA mutations is challenging due to the unique genetic features of mtDNA inheritance that distinguish it from Mendelian genetics. mtDNA defects are transmitted by maternal inheritance. De novo mtDNA mutations are often large deletions, affect one family member, and usually represent no significant risk to other members of the family. In contrast, mtDNA point mutations or duplications can be transmitted down the maternal line. Accordingly, the father of an affected individual has no risk of harboring the disease-causing mutation, and a male cannot transmit the mtDNA mutation to his offspring. In contrast, the mother of an affected individual usually harbors the same mutation but might be completely asymptomatic. This wide phenotypic variability is primarily related to the phenomenon of heteroplasmy and the mutation load carried by different members of the same family. Consequently, a symptomatic or asymptomatic female harboring a disease-causing mutation in a heteroplasmic state will transmit variable amounts of the mutant mtDNA molecules to her offspring. The offspring will be symptomatic or asymptomatic primarily according to the mutant load transmitted via the oocyte and, to some extent, subsequent mitotic segregation during development. Interactions with the mtDNA haplotype background or the nuclear genome (as in the case of LHON) serve as an additional important determinant of disease penetrance. Because the severity of the disease phenotype associated with a heteroplasmic mutation load is a function of the stochastic differential segregation and copy number of mutant mtDNA during the oogenesis bottleneck and, subsequently, during tissue and organ development in the offspring, it is rarely predictable with any degree of accuracy.
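The unpredictability produced by the oogenesis bottleneck can be illustrated with a toy simulation. This is a deliberately simplified model, not a clinical tool: the bottleneck is reduced to a single round of random sampling, and the number of segregating units (20 here) is an arbitrary illustrative choice, since the true effective bottleneck size in humans is itself debated:

```python
import random

def offspring_mutant_loads(maternal_load: float, bottleneck_units: int,
                           n_offspring: int, seed: int = 0) -> list:
    """Model the oogenesis bottleneck as one round of random sampling:
    each oocyte draws `bottleneck_units` mtDNA segregating units from a
    maternal pool in which a fraction `maternal_load` is mutant, so the
    offspring's mutant load is the sampled mutant fraction."""
    rng = random.Random(seed)
    loads = []
    for _ in range(n_offspring):
        mutant = sum(rng.random() < maternal_load
                     for _ in range(bottleneck_units))
        loads.append(mutant / bottleneck_units)
    return loads

# A mother with a 30% mutant load can bear children whose loads range
# from nearly mutation-free to well above a typical phenotypic threshold.
loads = offspring_mutant_loads(maternal_load=0.30, bottleneck_units=20,
                               n_offspring=1000)
```

With these illustrative numbers, the mean offspring load stays near the maternal 30%, but individual draws routinely scatter over a wide range, mirroring the clinical observation that sibling mutant loads, and hence phenotypes, can vary widely within one family.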
For this reason, prenatal diagnosis (PND) and PGD techniques that have evolved into integral and well-accepted standards of practice are severely hampered in the case of mtDNA-related diseases. The value of PND and PGD is limited, partly due to the absence of data on the rules that govern the segregation of wild-type and mutant mtDNA species (heteroplasmy) among tissues in the developing embryo. Three factors are required to ensure the reliability of PND and PGD: (1) a close correlation between the mutant load and the disease severity, (2) a uniform distribution of mutant load among tissues, and (3) no major change in mutant load with time. These criteria appear to be fulfilled for the NARP m.8993T>G mutation but do not seem to apply to other mtDNA disorders. In fact, the level of mutant mtDNA in a chorionic villous or amniotic fluid sample may be very different from the level in the fetus, and it would be difficult to deduce whether the mutational load in the prenatal samples provides clinically useful information regarding the postnatal and adult state. Because the treatment options for patients with mitochondrial disease are rather limited, preventive interventions that eliminate the likelihood of transmission of affected mtDNA to offspring are desirable. The limited utility of PND and PGD for reliably diagnosing and predicting mitochondrial disorders at the preimplantation stage has prompted the search for alternative preventive approaches. One possible approach to “diluting” or even entirely eliminating the mutant mtDNA is applicable only in the earliest embryonic state and in effect represents a form of germline preventive therapy (Fig. 85e-9). 
This possibility has been explored by using alternative assisted reproduction techniques such as ooplasmic transfer (OT), metaphase chromosome transfer (CT), pronuclear transfer (PNT), and germinal vesicle transfer (GVT) in animal models and, to an extent, in humans. OT is a technique wherein a certain volume (5–15%) of healthy donor oocyte cytoplasm with normal mitochondria is injected into the patient oocyte containing mutated mitochondria. The reasoning behind OT is to supplement the patient’s oocyte with uncompromised cytoplasmic factors such as mtDNA, mRNA, proteins, and other molecules by injecting cytoplasm from healthy oocytes. In PNT, following fertilization, the pronuclei of a patient’s zygote are removed together with a small amount of surrounding cytoplasm (a “karyoplast”). The karyoplast is transferred to the perivitelline space of a donated zygote that has already been enucleated and is then fused with the enucleated zygote by electric pulses or inactivated Sendai virus (HVJ). The reconstructed zygote contains a nucleus from the patient (patient nuclear DNA) and cytoplasm from the donor. Thus, the majority of the patient’s mtDNA is replaced with mtDNA from the donor oocyte. In CT, the meiosis II stage of oocyte maturation provides an opportunity to reconstruct oocytes with different nuclear and cytoplasmic components before fertilization takes place. Oocytes reconstructed by metaphase chromosome transfer are then fertilized to produce embryos with the desired mtDNA haplotypes. In GVT, compromised cytoplasm is replaced with healthy cytoplasm by transferring the germinal vesicle before chromosome segregation begins. These approaches have not yet met with widely reported clinical success, yet there is room for optimism. As noted above, analysis of heteroplasmy and inheritance patterns indicates that even a small increase in copies of nonmutant mtDNA can exceed the threshold required to ameliorate serious clinical disease. 
All of the approaches described above show promise in achieving this goal and thus reducing the burden of clinical mtDNA disease in the future. 

FIGURE 85e-9 Possible approaches for prevention of mitochondrial DNA (mtDNA) disease. A. No intervention: offspring’s mutant mtDNA load will vary greatly. B. Oocyte donation: currently permitted in some constituencies but limited by the availability of oocyte donors. C. Preimplantation genetic diagnosis: available for some mtDNA diseases (reliable in determining background nuclear genomic haplotype risk). D. Nuclear transfer: research stage, including initial studies in nonhuman primates. Red represents mutant mtDNA, pink and white represent successively higher proportions of normal mtDNA. Blue represents genetic material from an unrelated donor. (Adapted with permission from J Poulton et al: Preventing transmission of maternally inherited mitochondrial DNA diseases. Br Med J 338:b94, 2009.) 

Chapter 86e The Human Microbiome 
Jeffrey I. Gordon, Rob Knight 

The technologies that allowed us to decipher the human genome have revolutionized our ability to delineate the composition and functions of the microbial communities that colonize our bodies and make up our microbiota. Each body habitat, including the skin, nose, mouth, airways, gastrointestinal tract, and vagina, harbors a distinctive community of microbes. Efforts to understand our microbiota and its collection of microbial genes (our microbiome) are changing our views of “self” and deepening our understanding of many normal physiologic, metabolic, and immunologic features and their interpersonal and intrapersonal variations. In addition, this area of research is beginning to provide new insights into diseases not previously known to have microbial “contributors” and is suggesting new strategies for treatment and prevention. Key terms used in the discussion of the human microbiome are defined in Table 86e-1. 

We are holobionts—collections of human and microbial cells that function together in an elaborate symbiosis. The aggregate number of microbial cells in our microbiota exceeds the number of human cells in our adult bodies by up to 10-fold, and each healthy adult is estimated to harbor 10⁵–10⁶ microbial genes, in contrast to ~20,000 Homo sapiens genes. Members of our microbiota can function as mutualists (i.e., both host and microbe benefit from each other’s presence), as commensals (one partner benefits; the other is seemingly unaffected), and as potential or overt pathogens (one partner benefits; the other is harmed). Many clinicians view pathogens as individual microbial species or strains that can elicit disease in susceptible hosts. An emerging, more ecologic view is that pathogens do not function in isolation; rather, their invasion, emergence, and effects on the host reflect interactions with other members of a microbiota. An even more expansive view is that multiple organisms in a community conspire to produce pathogenic effects in certain host and environmental contexts (a pathologic community). 

TABLE 86e-1 Key Terms Used in the Discussion of the Human Microbiome 

Culture-independent analysis: A type of analysis in which the culture of microbes is not required; rather, information is extracted directly from environmental samples 
Diversity (alpha and beta): Alpha diversity measures the effective number of species (kinds of organisms) at the level of individual habitats, sites, or samples. Beta diversity measures differences in the number of kinds of organisms across habitats, sites, or samples. 
Domains of life: The three major branches of life on Earth: the Eukarya (including humans), the Bacteria, and the Archaea 
Dysbiosis: Any deleterious condition arising from a structural and/or functional aberration in one or more of the host organism’s microbial communities 
Gnotobiotics: The rearing of animals under sterile (germ-free) conditions. These animals can subsequently be colonized at various stages of the life cycle with defined collections of microbes. 
Holobiont: The biologic entity consisting of a host and all its internal and external symbionts, their gene repertoires, and their functions 
Human microbiome: In ecology, biome refers to a habitat and the organisms in it. In this sense, the human microbiome would be defined as the collection of microorganisms associated with the human body. However, the term microbiome is also used to refer to the collective genomes and genes present in members of a given microbiota (see “Microbiota,” below), and the human metagenome is the sum of the human genome and microbial genes (microbiome). A core human microbiome is defined as everything shared in a given body habitat among all or the vast majority of human microbiomes. A core microbiome may include a common set of genomes and genes encoding various protein families and/or metabolic capabilities. Microbial genes that are variably represented in different humans may contribute to distinctive physiologic/metabolic phenotypes. 
Metagenomics: An emerging field encompassing culture-independent studies of the structures and functions of microbial communities as well as the interactions of these communities with the habitats they occupy. Metagenomics includes (1) shotgun sequencing of microbial DNA isolated directly from a given environment and (2) high-throughput screening of expression libraries constructed from cloned community DNA to identify specific functions such as antibiotic resistance (functional metagenomics). DNA-level analyses provide the foundation for profiling of mRNAs and proteins produced by a microbiome (metatranscriptomics and metaproteomics) and for identification of a community’s metabolic network (metametabolomics). 
Microbial source tracking: A collection of methods for assessing the environments of origin for microbes. One method, SourceTracker, uses a Bayesian approach to identify each bacterial taxon’s origins and estimates the proportions of each community made up by bacteria originating from different environments. 
Microbiota: A microbial community—including Bacteria, Archaea, Eukarya, and viruses—that occupies a given habitat 
Pan-genome: The group of genes found in genomes that make up a given microbial phylotype, including both core genes found in all genomes and variably represented genes found in a subset of genomes within the phylotype 
Phylogenetic analysis: Characterization of the evolutionary relationships between organisms and their gene products 
Phylogenetic tree: A “tree” in which organisms are shown according to their relationships to hypothetical common ancestors. When built from molecular sequences, the branch lengths are proportional to the amount of evolutionary change separating each ancestor–descendant pair. 
Phylotype: A phylogenetic group of microbes, currently defined by a threshold percentage identity shared among their small-subunit rRNA genes (e.g., ≥97% for a species-level phylotype) 
Principal coordinates analysis: An ordination method for visualizing multivariate data based on the similarity/dissimilarity of the measured entities (e.g., visualization of bacterial communities based on their UniFrac distances; see “UniFrac,” below) 
Random Forests analysis/machine learning: Machine learning is a collection of approaches that allow a computer to learn without being explicitly programmed. Random Forests is a machine-learning method for classification and regression that uses multiple decision trees during a training step. 
Rarefaction: A procedure in which subsampling is used to assess whether all the diversity present in a given sample or set of samples has been observed at a given sampling depth and to extrapolate how much additional sampling would be needed to observe all the diversity 
Resilience: A community’s ability to return to its initial state after a perturbation 
Shotgun sequencing: A method for sequencing large DNA regions or collections of regions by fragmenting DNA and sequencing the resulting smaller sections 
Succession (primary and secondary): Succession (in an ecologic context) refers to changes in the structure of a community through time. Primary succession describes the sequence of colonizations and extinctions that occur in a new habitat. Secondary succession refers to changes in community structure after a disturbance. 
UniFrac: A measure of the phylogenetic dissimilarity between two communities, calculated as the unshared proportion of the phylogenetic tree containing all the organisms present in either community 

The ability to characterize microbial communities without culturing their component members has spawned the field of metagenomics (Table 86e-1). Metagenomics reflects a confluence of experimental and computational advances in the genome sciences as well as a more ecologic understanding of medical microbiology, according to which the functions of a given microbe and its impact on human biology depend on the context of other microbes in the same community. Traditional microbiology relies on culturing individual microbes, but metagenomics skips this step, instead sequencing DNA isolated directly from a given microbial community. 
The resulting datasets facilitate follow-up functional studies, such as the profiling of RNA and protein products expressed from the microbiome or the characterization of a microbial community’s metabolic activities. Metagenomics provides insight into how microbial communities vary in several situations critical to human health. One such situation is how microbial communities are assembled following birth and how they operate over time, including responses of established communities to various perturbations. Another is how microbial communities normally vary between different anatomic sites within an individual and between different groups of people representing different ages, physiologic states, lifestyles, geographies, and gender. Yet another is how microbial communities vary in disease; whether such variations are consistent among individuals grouped according to current criteria for a disease or its subtypes; whether the microbiota or microbiome provides new ways of classifying disease states; and, importantly, whether the structural and functional configurations of microbial communities are a cause or a consequence of disease. Analysis of our microbiomes also addresses one of the most fundamental questions in genetics: How does environment select our genes and directly influence their function? Each human encounters a unique environment during the course of his or her lifetime. Part of this personally experienced environment is incorporated into the genes and capabilities of our microbial communities. The microbiome therefore expands our conceptualization of “human” genetic potential from a single set of genes “fixed” at birth to a microbiome with additional genes and capabilities acquired via a process influenced by our family and life experiences, including modifiable lifestyle choices such as diet. 
This view recognizes a previously underappreciated dimension of human evolution that occurs at the level of our microbiomes and inspires us to determine how—and how fast—this microbial evolution effects changes in our human biology. For example, Westernization is associated with loss of bacterial species diversity (richness) in the microbiota, and this loss may be associated with the suite of Western diseases. The study of our microbiomes also raises important questions about personal identity, how we define the origins of health disparities, and privacy. Further, it offers the possibility of entirely new approaches to disease prevention and treatment, including regenerative medicine, which involves administration of microbial species (probiotics) to individuals harboring communities that have not developed into a mature, fully functional state or that have been perturbed in ways that can be restored by the addition of species that fill unoccupied “jobs” (niches). This chapter provides a general overview of how human microbial communities are analyzed; reviews ecologic principles that guide our understanding of microbial communities in health and disease; summarizes recent studies that establish correlations and, in some cases, causal relationships between our microbiota/microbiomes and various diseases; and discusses challenges faced in the translation of these findings to new therapeutic interventions. Life on Earth has been classified into three domains: Bacteria, Archaea, and Eukarya. The habitats of the surface-exposed human body harbor members of each domain plus their viruses. In large part, microbial diversity has not been characterized by culture-based approaches, partly because we do not know how to re-create the metabolic milieu fashioned by these communities in their native habitats and partly because a few organisms tend to outgrow the others. 
Culture-independent methods readily identify which organisms are present in a microbiota and their relative abundance. The gene widely used to identify microbes and their evolutionary relationships encodes the major RNA component of the small subunit (SSU) of ribosomes. Within each domain of life, the SSU gene is highly conserved, allowing the SSU gene sequences present in different organisms in that domain to be accurately aligned and regions of nucleotide sequence variation to be identified. Pairwise comparisons of SSU ribosomal RNA (rRNA) genes from different microbes allow construction of a phylogenetic tree that represents an evolutionary map on which previously unknown organisms can be assigned a position. This approach, known as molecular phylogenetics, permits characterization of each organism on the basis of its evolutionary distance from other organisms. Different phylogenetic types (phylotypes) can be viewed as comprising branches on an evolutionary tree. Characterization of Bacteria Because members of the Bacteria dominate our microbiota, most studies defining our various body habitat–associated microbial communities have sequenced the bacterial SSU gene that encodes 16S rRNA. This gene has a mosaic structure, with highly conserved domains flanking more variable regions. The most straightforward way to identify bacterial taxonomic groups (taxa) in a given community is to sequence polymerase chain reaction (PCR) products (amplicons) generated from the 16S rRNA genes present in that community. PCR primers directed at the conserved regions of the gene yield PCR amplicons encompassing one or more of that gene’s nine variable regions. PCR primer design is critical: differential annealing with primer pairs designed to amplify different variable regions can lead to over- or underrepresentation of specific taxa, and different regions within the 16S rRNA gene can have different patterns of evolution. 
Therefore, caution must be exercised in comparisons of the relative abundance of taxa in samples characterized in different studies, as methodologic differences can lead to larger perceived differences in the inferred taxonomy than actually exist. A key innovation is multiplex sequencing. Amplicons from each microbial-community DNA sample are tagged by incorporation of a unique oligonucleotide barcode into the PCR primer. Amplicons harboring these sample-specific barcodes can then be pooled together so that multiple samples representing multiple communities can be sequenced simultaneously (Fig. 86e-1). One important choice is the tradeoff between the number of samples that can be processed simultaneously and the number of sequences generated per sample. Interpersonal differences in the bacterial components of the microbiota are typically large, as are differences between communities occupying different body habitats in the same individual (see below); thus fewer than 1000 16S rRNA reads are characteristically required to discriminate community type. However, the identification of systematic differences in microbiota composition that correlate with physiologic status or disease state is confounded by the substantial interpersonal variation that occurs normally. Sequencing of bacterial 16S rRNA genes creates a challenge for medical microbiology: how to define the taxonomic groups present in a community in a systematic and informative manner, so that one community can be compared with and contrasted to another. Within each domain of life, microbes are classified in a hierarchy beginning with phylum (the broadest group) followed by class, order, family, genus, and species. To determine taxonomy, 16S rRNA sequences are aligned on the basis of their sequence similarity—a process known as picking operational taxonomic units (OTUs). 
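The two processing steps just described — demultiplexing pooled barcoded amplicons back to their source samples, and binning reads into species-level OTUs at ≥97% identity — can be sketched in code. This is a hypothetical toy, not a production tool: the function names and 8-nt barcodes are ours, barcode matching is exact (real pipelines use error-correcting barcodes), and OTU picking is a greedy pass over pre-aligned, equal-length reads.

```python
def demultiplex(reads, barcodes, barcode_len=8):
    """Assign pooled reads back to their source communities by barcode
    prefix. Exact matching only; real pipelines use error-correcting
    barcodes. Reads with unrecognized barcodes are discarded."""
    by_sample = {sample: [] for sample in barcodes.values()}
    for read in reads:
        sample = barcodes.get(read[:barcode_len])
        if sample is not None:
            by_sample[sample].append(read[barcode_len:])  # trim the barcode
    return by_sample

def percent_identity(a, b):
    """Fraction of matching positions in two equal-length aligned reads."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pick_otus(reads, threshold=0.97):
    """Greedy OTU picking: a read joins the first OTU whose seed it matches
    at >= threshold identity; otherwise it seeds a new OTU. Assumes
    pre-aligned, equal-length reads (a simplification)."""
    seeds, otus = [], []
    for read in reads:
        for i, seed in enumerate(seeds):
            if percent_identity(read, seed) >= threshold:
                otus[i].append(read)
                break
        else:  # no seed matched: start a new OTU
            seeds.append(read)
            otus.append([read])
    return otus

# Hypothetical barcodes for three pooled communities
barcodes = {"AACCGGTT": "gut", "TTGGCCAA": "skin", "ACACACAC": "oral"}
reads = ["AACCGGTT" + "ACGTACGT", "TTGGCCAA" + "GGGGCCCC", "NNNNNNNN" + "AAAA"]
binned = demultiplex(reads, barcodes)  # the read with the unknown tag is dropped
```

With 100-nt toy reads, a read differing from an OTU seed at 2 of 100 positions (98% identity) joins that seed's OTU, while one differing at 10 positions (90%) falls below the threshold and seeds a new OTU.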
Grouping of 16S rRNA sequences from a given variable region into “bins” that share ≥97% nucleotide sequence identity (97%ID OTUs) is a commonly accepted, albeit arbitrary, way to define a species. Looking beyond the 16S rRNA gene, we find that different isolates (strains) of a given bacterial species have overlapping but not identical sets of genes in their genomes. The aggregate set of genes identified in all isolates (strains) of a given species-level phylotype represents its pan-genome. Most species are represented by multiple strains, sometimes with markedly different functions (for example, enteropathogenic versus commensal Escherichia coli). Differences in genome content among strains of a given species reflect differences in community membership as well as differences in the selective pressures these strains experience within and between habitats. Horizontal gene transfer among members of a microbiota—mediated by phage, plasmids, and other mechanisms—is a major contributor to this strain-level variation. Strain-level diversity can be important in any consideration of how microbial communities differ between individuals and how these communities accommodate perturbations. For example, the great bacterial strain-level diversity that exists in the gut is thought to be one of the features that allows this microbiota, which occupies a constantly perfused ecosystem exposed to the complex and varying set of substances we ingest, to adapt to changing circumstances rather than depending on one strain to occupy a given niche important for proper community functioning. In ecologic studies of different environments, such as grasslands, forests, and reefs, increased diversity within a community increases its capacity to respond to disturbances and to restore itself (i.e., its resilience); the same is likely true of microbial ecosystems. When characterizing the mechanisms by which a given species produces an effect or effects on humans, it is important to consider the strain being tested; strain-level diversity has an impact on discovery and development efforts aimed at identifying next-generation probiotics that can be used therapeutically to promote health or treat disease. 

FIGURE 86e-1 Pipeline for culture-independent studies of a microbiota. (A) DNA is extracted directly from a sampled human body habitat–associated microbial community. The precise location of the community and relevant patient clinical data are collected. Polymerase chain reaction (PCR) is used to amplify portions of small-subunit (SSU) rRNA genes (e.g., the genes encoding bacterial 16S rRNA) containing one or more variable regions. Primers with sample-specific, error-correcting barcodes are designed to recognize the more conserved regions of the 16S rRNA gene that flank the targeted variable region(s). (B) Barcoded amplicons from multiple samples (communities 1–3) are pooled and sequenced in batch in a highly parallel next-generation DNA sequencer. (C) The resulting reads are then processed, with barcodes denoting which sample the sequence came from. After barcode sequences are removed in silico, reads are aligned and grouped according to a specified level of shared identity; e.g., sequences that share ≥97% nucleotide sequence identity are regarded as representing a species. Once reads are binned into operational taxonomic units (OTUs) in this fashion, they are placed on a phylogenetic tree of all known bacteria and their phylogeny is inferred. (D) Communities can be compared to one another by either taxon-based methods, in which phylogeny is not considered and the number of shared taxa is simply scored, or phylogenetic methods, in which community similarity is considered in light of the evolutionary relationships of community members. The UniFrac metric is commonly used for phylogeny-based comparisons. In stylized examples (i), (ii), and (iii), communities with varying degrees of similarity are shown. Each circle represents an OTU colored on the basis of its community of origin and placed on a master phylogenetic tree that includes all lineages from all communities. Branches (horizontal lines) are colored with each community that contains members from that branch. The three examples vary in the amount of branch length shared between the OTUs from each community. In (i), there is no shared branch length, and thus the three communities have a similarity score of 0. In (ii), the communities are identical, and a similarity score of 1 is assigned. In (iii), there is an intermediate level of similarity: communities represented in red and green share more branch length and thus have a higher similarity score than red vs. blue or green vs. blue. The amount of shared branch length in each pairwise community comparison provides a distance matrix. (E) The results of taxon- or phylogeny-based distance matrices can be displayed by principal coordinates analysis (PCoA), which plots each community spatially such that the largest component of variance is captured on the x-axis (PC1) and the second largest component of variance is displayed on the y-axis (PC2). In the example shown, the three communities in example (iii) from panel D are compared. Note that for shotgun sequencing of whole-community DNA (microbiome analysis), reads are compared with genes that are present in the genomes of sequenced cultured microbes and/or with genes that have been annotated by hierarchical functional classification schemes in various databases, such as the Kyoto Encyclopedia of Genes and Genomes (KEGG). Communities can then be compared on the basis of the distribution of functional groups in their microbiomes—an approach analogous to taxon-based methods for 16S rRNA–based comparisons—and the results plotted with PCoA. 

Identification of Archaeal and Eukaryotic Members Surveys based on SSU rRNA gene sequencing have largely focused on Bacteria, yet the census of “who’s there” in human body habitat–associated communities must also include the other two domains of life: Archaea and Eukarya. Differences in the sequences of archaeal and bacterial 16S rRNA genes, first recognized by Carl Woese in 1977, allowed these two domains of life to be distinguished. The representation of Archaea in human microbial communities is less well defined than that of Bacteria, in part due to the difficulty in optimizing the design of PCR primers that specifically target conserved regions of archaeal (versus bacterial) 16S rRNA genes. Identifying archaeal members is important to our understanding of the functional properties of the microbiota. For example, a major challenge faced by microbial communities when breaking down polysaccharides (the most abundant biologic polymers on Earth) is the maintenance of redox balance in the setting of maximal energy production. Many microbial species have branched fermentation pathways that allow them to dispose of reducing equivalents (e.g., by the production of H2, which is energetically efficient). 
However, there is a caveat: the hydrogen must be removed or it will inhibit reoxidation of pyridine nucleotides. Therefore, hydrogen-consuming (hydrogenotrophic) species are key to maximizing the energy-extracting capacity of primary fermenters. In the human gut, hydrogenotrophs include a phylogenetically diverse group of bacterial acetogens, a more limited group of sulfate-reducing bacteria that generate hydrogen sulfide, and methane-producing archaeal organisms (methanogens) that can represent up to 10% of the anaerobes present in the feces of some humans. However, the degree of archaeal diversity in the gut microbiota of healthy individuals appears to be low. Culture-independent surveys of eukaryotic diversity are also confounded by challenges related to the design of PCR primers that target the eukaryotic SSU gene (18S rRNA) as well as the internal transcribed spacer regions of rRNA operons. Metagenomic studies of healthy human adults living in countries with distinct cultural traditions and disparate geographic features and locations have revealed that the degree of eukaryotic diversity is lower than that of bacterial diversity. In the gut, which contains far more microbes than any other body habitat, the representation of fungi is significantly lower in individuals living in Westernized societies than in those living in non-Western societies. The most abundant fungal sequences belong to the phylum-level taxa Ascomycota and Microsporidia. The phyla Ascomycota and Basidiomycota appear to be mutually exclusive, and the presence of Candida in particular correlates with recent consumption of carbohydrates. Elucidation of Viral Dynamics Viruses are the most abundant biologic entity on Earth. Viral particles outnumber microbial cells by 10:1 in most environments. Humans are no exception in terms of viral colonization; our feces alone contain 10⁸–10⁹ viral particles per gram. 
Despite this abundance, many eukaryotic viral communities remain incompletely characterized, in part because the identification of viruses within metagenomic sequencing datasets is itself very challenging. Characterizing viral diversity requires different approaches: because no single gene is found in all viruses, no universal phylogenetic “barcode of life” equivalent to the SSU rRNA gene exists. One approach has been to selectively purify virus-like particles from community biospecimens, amplify the small amounts of DNA that are recovered, and randomly fragment the DNA and sequence the fragments (shotgun sequencing). The resulting sequences can be assembled into larger contigs whose function can be computationally predicted from homology to known genes, and the information obtained can be used to populate/expand nonredundant viral databases. These annotated nonredundant databases can then be used for more targeted mining of the rapidly expanding number of shotgun sequencing datasets generated from total-community DNA for known or putative DNA viruses. Given the dominance of bacteria in the gut microbiota, it is not surprising that phages (viruses that infect bacteria) dominate the identifiable components of the gut’s DNA virome. Prophages are a manifestation of a so-called temperate viral–bacterial host dynamic, in which a phage is integrated into its host bacterium’s genome. This temperate dynamic provides a way to constantly refashion the genomes of bacterial species through horizontal gene transfer. Genes encoded by a prophage genome may expand the niche and fitness of their bacterial host, for example, by enabling the metabolism of previously inaccessible nutrient sources. Prophage integration can also protect the host strain from superinfection, “immunizing” the strain against infection by closely related phages. A temperate prophage life cycle allows the virus to expand in a 1:1 ratio with its bacterial host. 
If the integrated virus conveys increased fitness, the prevalence of the bacterial host and its phage will increase in the microbiota. Induction of a lytic cycle, where the prophage replicates and kills the host, may follow. Lytic cycles can cause high bacterial turnover. Lysis debris (e.g., components of capsules) can be used as nutrient sources by surviving bacteria; this change in the energy dynamic in a community is referred to as a phage shunt. A subpopulation of bacteria that undergoes lytic induction may sweep away other sensitive species present in the community, thus increasing the niche space available for survivors (i.e., those bacteria that already have an integrated prophage). Periodic induction of prophages leads to a “constant diversity dynamic” that helps maintain community structure and function. Interest in viral communities has expanded in recent years, especially given a potentially therapeutic role for phages as an alternative or adjunct to antibiotics. Virome members have evolved elegant survival mechanisms that allow them to evade host defenses, diversify, and establish elaborate and mutually beneficial symbioses with their hosts. A number of recent studies have tried to adapt these mechanisms for therapeutic purposes (e.g., the use of synthetic phages to treat Pseudomonas aeruginosa infections in burn patients or in other settings). Phage therapy is not a new idea: Félix d’Herelle, co-discoverer of phages, recognized their potential medical applications nearly a century ago. However, only recently have our technologic capabilities and our knowledge of the human microbiota made phage therapy realistically attainable within our lifetimes. At many levels, different people are very much alike: our genomes are >99% identical, and we have similar collections of human cells. However, our microbial communities differ drastically, both between people and between habitats within a single human body. 
The greatest variation (beta diversity, described below) is between body sites. For example, the difference between the microbial communities residing in a person’s mouth versus the same person’s gut is comparable to the difference in communities residing in soil versus seawater. Even within a body site, the differences among people are not subtle: gut, skin, and oral communities can all differ by 80–90%, even from the broad, bacterial species–level view. The English poet John Donne said that “no man is an island”; however, from a microbial perspective, each of us consists of not just one isolated island but rather a whole archipelago of distinct habitats that exchange microbes with one another and with the outside environment at some as yet undetermined level. Before we can discuss these differences and understand their relevance to human disease, it is important to understand some basic terms and ecologic principles. Alpha Diversity Alpha diversity is defined as the effective number of species present in a given sample. Communities that are compositionally more diverse (i.e., have more OTUs) or that are phylogenetically more diverse are defined as having greater alpha diversity. Alpha diversity can be measured by plotting the number of different types of SSU rRNA sequences identified at a given phylogenetic level (species, genera, etc.) in a sample as a function of the number of SSU rRNA gene reads collected. The most commonly used metrics of alpha diversity are Sobs (the number of species observed in a given number of sequences), Chao1 (a measure based on the number of species observed only once), the Shannon index (a measure of the number of bits of information gained by revealing the identity of a randomly chosen member of the community), and phylogenetic diversity (a measure of the total branch length of a phylogenetic tree encompassing a sample). Diversity estimators are particularly sensitive to errors introduced during PCR and sequencing. 
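The alpha-diversity metrics named above are straightforward to compute from a vector of OTU counts. A minimal Python sketch of the Shannon index (in bits, as described above) and the Chao1 estimator follows; the OTU table is hypothetical, invented purely for illustration, not drawn from any study discussed in this chapter:

```python
from math import log2

def shannon_index(counts):
    """Shannon index in bits: the information gained by revealing
    the identity of a randomly drawn member of the community."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness estimate: observed species plus a correction
    based on singletons (F1) and doubletons (F2)."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)  # species seen exactly once
    f2 = sum(1 for c in counts if c == 2)  # species seen exactly twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2   # bias-corrected form
    return s_obs + f1 * f1 / (2 * f2)

# Hypothetical OTU table: reads assigned to each OTU in one sample
otu_counts = [50, 30, 10, 5, 2, 1, 1, 1]
print(round(shannon_index(otu_counts), 3))
print(chao1(otu_counts))
```

Note how the three singletons drive the Chao1 estimate above the observed richness of 8, reflecting the metric's assumption that rare, once-observed taxa signal additional taxa not yet sampled.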
Beta Diversity Beta diversity refers to the differences between communities and can be defined with phylogenetic or nonphylogenetic distance measurements. UniFrac is a commonly used phylogenetic metric that compares the evolutionary history of different microbial communities, noting the degree to which any two communities share branch length on a tree of microbial life: the more similar communities are to each other, the more branch length they share (Fig. 86e-1). UniFrac-based measurements of distances between communities can be visually represented with principal coordinates analysis or other geometric techniques that project a high-dimensional dataset down onto a small number of dimensions for a more approachable analysis (Fig. 86e-1). Principal coordinates analysis can also be applied to nonphylogenetic methods for comparing communities, such as Euclidean distance, Jensen-Shannon divergence, or Bray-Curtis dissimilarity, which operate independent of evolutionary tree data but can make biologic patterns more difficult to identify. The taxonomic data or distance matrices can also be used as input into a range of machine-learning algorithms (such as Random Forests) that employ supervised classification to identify differences between labeled groups of samples. Supervised classification is useful for identifying differences between cases and controls but can obscure important patterns intrinsic to the data, including confounding variables such as different sequencing runs or patient populations. As noted above, the greatest beta diversity is that among body sites. This fact underscores the need to specify body habitat in microbiota analyses of any type, including microbial surveillance studies examining the flow of normal and pathogenic organisms into and out of different body sites in patients and their health care providers. 
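Of the nonphylogenetic beta-diversity measures mentioned above, Bray-Curtis dissimilarity is among the simplest to illustrate. The sketch below uses invented abundance profiles (not data from the studies cited here) to mirror the point that within-habitat distances between people are smaller than between-habitat distances:

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance profiles:
    0 = identical composition, 1 = no shared taxa."""
    shared = sum(min(a, b) for a, b in zip(u, v))  # abundance shared per taxon
    total = sum(u) + sum(v)
    return 1 - 2 * shared / total

# Hypothetical taxon-abundance profiles (same taxon order in each)
gut_a = [40, 30, 20, 10, 0]
gut_b = [35, 25, 25, 10, 5]
mouth = [0, 5, 0, 0, 95]

print(bray_curtis(gut_a, gut_b))  # low: two gut communities
print(bray_curtis(gut_a, mouth))  # high: gut versus mouth
```

In practice, a matrix of such pairwise dissimilarities (whether Bray-Curtis or UniFrac) is what feeds into principal coordinates analysis for the low-dimensional visualizations described above.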
Several other key points have emerged from beta diversity studies of human-associated microbial communities—notably, that (1) there is a high level of interpersonal variability in every body habitat studied to date, (2) intrapersonal variation in a given body habitat is less pronounced, and (3) family members have more similar communities than unrelated individuals living in separate households. Thus, a person is his/her own best control, and examination of an individual over time as a function of disease state or treatment intervention is desirable. Similarly, family members serve as logical reference controls, although age is a major covariate that affects microbiota structure. Studies of fecal samples obtained from twins over time have shown that the overall degree of phylogenetic similarity of bacterial communities does not differ significantly between monozygotic and dizygotic twin pairs, although monozygotic twin pairs may be more similar in some populations at earlier ages. These results, together with intervention studies in mice and epidemiologic observations in humans, emphasize that early environmental exposures are a very important determinant of adult-gut microbial ecology. In humans, the initial exposures depend on delivery mode: babies sampled within 20 min of birth have relatively undifferentiated microbial communities in the mouth, the skin, and the gut. For vaginally delivered babies, these communities resemble the specific microbial communities found in the mother’s vagina. For babies delivered by cesarean section, the communities resemble skin communities. Although studies of older children and of adults stratified by delivery mode are still rare in the literature, these differences have been shown to persist until at least 4 months of age and perhaps until age 7 years. The infant gut microbiota changes to resemble the adult gut community over the first 3 years of life; comparable studies have not been done in other body habitats to date. 
Exposures to environmental microbial reservoirs can continue to influence community structure. For example, unrelated cohabiting adults have more similar microbiotas in all of their body habitats than do non-cohabiting adults, and humans resemble the dogs they live with, at least in terms of skin microbiota. Gender and sexual maturation may also affect the microbiota structure, although efforts to isolate these variables are complicated by many confounding factors; any gender effect must be small compared with the effects of other variables such as diet (except in the case of the female urinary tract, which is influenced by the vaginal microbiota). The vaginal microbiota illustrates another intriguing aspect of the contributions made by various factors to interpersonal differences in microbial community structure within a given body habitat. Bacterial 16S rRNA–based studies of the midvaginal microbiota in sexually active women have documented significant differences in community configurations between four self-reported ethnic groups: Caucasian, black, Hispanic, and Asian. Unlike most other body habitats that have been surveyed, this ecosystem is dominated by a single genus, Lactobacillus. Four species of this genus together account for more than half of the bacteria in most vaginal communities. Five community categories have been defined: four are dominated by L. iners, L. crispatus, L. gasseri, and L. jensenii, respectively, and the fifth has proportionally fewer lactobacilli and more anaerobes. The representation of these community categories is distinct within each of the four ethnic groups and correlates with vaginal pH and Nugent score (the latter being a biomarker for bacterial vaginosis). 
Longitudinal studies of individuals are being conducted to identify factors that determine the assembly of these distinct communities—both within and among ethnic groups—as well as their resistance to or resilience after various physiologic and pathologic disturbances. For example, the menstrual cycle and pregnancy turn out to be surprisingly significant factors, causing larger changes than sexual activity. Yet another factor affecting beta diversity is spatial location within a habitat. Several surveys show that the skin harbors bacterial communities with predictable, albeit complex, biogeographic features. To determine whether these differences are due to differences in local environmental factors, to the history of a given site’s exposure to microbes, or to a combination of the two, reciprocal microbiota transplantation has been performed. Microbial communities from one region of the skin were depleted by treatment with germicidal agents, and the region (plot) was inoculated with a “foreign” microbiota harvested from different regions of the skin or from different body habitats from the same or another individual. Community assembly at the site of transplantation was then tracked over time. Remarkably, assembly proceeded differently at different sites: forearm plots receiving a tongue microbiota remained more similar to tongue communities than to native forearm communities in terms of their composition and diversity, while forehead plots inoculated with tongue bacteria changed to become more similar to native forehead communities. Thus, in addition to the history of exposure to tongue bacteria, environmental factors operating at the forehead plot likely shape community assembly. Intriguingly, the factors that shape fungal skin communities appear to be entirely different from those that shape bacterial skin communities. The palm and forearm have high bacterial and low fungal diversity, whereas the feet have the opposite diversity pattern. 
Moreover, fungal communities are generally shaped by location (foot, torso, head), whereas bacterial communities are generally shaped by moisture phenotype (dry, moist, or sebaceous). Co-Occurrence Analysis Co-occurrence analysis seeks to identify which phylotypes are co-distributed across individuals in a given body habitat and/or between habitats and to determine the factors that explain the observed patterns of co-distribution. Positive correlations tend to reflect shared preferences for certain environmental features, while negative correlations typically reflect divergent preferences or a competitive relationship. Syntrophic (cross-feeding) relationships reflect interdependent interactions based on nutrient-sharing strategies. For example, in food webs, the products of one organism’s metabolism can be used by the other for its own unique metabolic capabilities (e.g., the interactions between fermentative organisms and methanogens). Enterotype Analysis Enterotype analysis seeks to classify individuals into discrete groups based on the configuration of their microbiotas, essentially drawing boundaries on a map defined by principal coordinates analysis or other ordination techniques. The first enterotype analysis used supervised clustering to define three major types of human-gut microbial configurations across three distinct human studies and provided a view that presupposed the existence of three clusters. Subsequent work has shown that the range of variability in the gut microbiota of children and of non-Western populations greatly exceeds the variability captured in the populations used to define the original enterotypes; in addition, even in Western populations, the variability follows more of a continuum dominated by a gradient in the abundance of the genera Bacteroides and Prevotella. 
Another consideration in enterotype analysis is whether location on a map defined by healthy human variation is relevant to predisposition to disease or whether instead rare species with particular functions are more important discriminants. Functional Redundancy Functional redundancy arises when functions are performed by many bacterial taxa. Thus interpersonal differences in bacterial diversity (i.e., which bacteria are present) are not necessarily accompanied by comparable degrees of difference in functional diversity (i.e., what these bacteria can do). Characterization of a microbiome by shotgun sequencing is important because, unlike SSU rRNA analyses, shotgun sequencing provides a direct readout of the genes (and, via comparative genomics, their functions) in a given community. One fundamental question is the degree to which variations in the species occupying a given body habitat correlate with variations in a community’s functional capabilities. For example, the neutral theory of community assembly developed by macro-ecologists suggests that species are added to the community without respect to function, automatically endowing the community with functional redundancy. If applicable to the microbial world, neutral community assembly would predict a high level of variation in the types of microbial lineages that occupy a given body habitat in different individuals, although the broad functions encoded in the microbiomes of these communities could be quite similar. Shotgun sequencing of the fecal microbiome has revealed that different microbial communities converge on the same functional state: in other words, there is a shared “core” group of microbial genes represented in the guts of unrelated as well as related individuals. The same principle holds true at other body sites (Fig. 86e-2). 
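The neutral-assembly prediction (high taxonomic variability alongside similar functional profiles) can be illustrated with a toy simulation. All numbers below are arbitrary assumptions for illustration, not data from the studies cited here: species are drawn at random from a pool in which each species carries one functional category, exactly as neutral theory assumes, and taxonomic versus functional dissimilarity are then compared:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical species pool: 200 species, each randomly assigned one of
# 10 broad functional categories. Function plays no role in assembly.
N_SPECIES, N_FUNCTIONS = 200, 10
function_of = {s: random.randrange(N_FUNCTIONS) for s in range(N_SPECIES)}

def assemble(n_members=60):
    """Neutral assembly: draw species at random, without regard to function."""
    return random.sample(range(N_SPECIES), n_members)

def jaccard(a, b):
    """Fraction of species shared between two communities."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def functional_profile(community):
    """Proportion of community members carrying each functional category."""
    counts = Counter(function_of[s] for s in community)
    return [counts[f] / len(community) for f in range(N_FUNCTIONS)]

c1, c2 = assemble(), assemble()
taxonomic_dissimilarity = 1 - jaccard(c1, c2)   # expected to be high
functional_gap = sum(abs(x - y) for x, y in     # total variation distance,
                     zip(functional_profile(c1),  # expected to be much lower
                         functional_profile(c2))) / 2
print(taxonomic_dissimilarity, functional_gap)
```

Even though the two simulated communities share relatively few species, their functional profiles remain close, which is the qualitative pattern shown for real body-site communities in Fig. 86e-2.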
The “core” gut microbiome is enriched in functions related to microbial survival (e.g., translation; metabolism of nucleotides, carbohydrates, and amino acids) and in functions that benefit the host (nutrient and energy partitioning from the diet to microbes and host). The latter functions encompass the food webs mentioned above, in which products of one type of microbe become the substrates for other microbes. These webs, which can be incredibly elaborate, change as microbes adjust their patterns of gene expression and metabolism in response to alterations in nutrient availability. Thus the sum of all the activities of the members of a microbial community can be viewed as an emergent rather than a fixed property. It is important to note that pairwise comparisons have shown that family members have functionally more similar gut microbiomes than do unrelated individuals. Thus, intrafamilial transmission of a gut microbiome within a given generation and across multiple generations could shape the biologic features of humans belonging to a kinship and modulate/mediate risks for a variety of diseases. 
FIGURE 86e-2 Interpersonal variation in organismal representation in body habitat–associated communities is more extensive than interpersonal variation in gene functional features. Bacterial taxonomy and metabolic function are compared in 107 oral microbiota and microbiome samples (top) and in 139 fecal microbiota and microbiome samples (bottom). Samples represent an arbitrarily chosen subset from 242 healthy young adults living in the United States, with equal numbers of men and women. The same DNA extracts from the same samples were used for both taxonomic and functional classifications; each sample was analyzed by bacterial 16S rRNA amplicon sequencing (mean, 5400 sequences per sample) and by shotgun sequencing of community DNA (mean, 2.9 billion bases per sample). Taxonomic groups vary dramatically in their representation among different samples, with different characteristic bacterial phyla in the oral versus the fecal microbiota; e.g., members of the Actinobacteria and Fusobacteria are far more common in the mouth than in the gut, while members of Bacteroidetes are far more common in fecal samples. In contrast, metabolic pathways are far more consistently represented in different samples, even when the species that contribute to these pathways are completely different. These results suggest a high degree of functional redundancy in microbial ecosystems—similar to that observed in macroecosystems, in which many fundamentally different lineages of organisms can play the same ecologic roles (e.g., pollinator or top predator). (Adapted from Human Microbiome Project Consortium: Nature 486:207, 2012; and CA Lozupone et al: Nature 489:220, 2012.) 
Stability Like other ecosystems, human body habitat–associated microbial communities vary over time, and an understanding of this variation is essential for a functional understanding of our microbiota. Few high-resolution time series of individual healthy adults have been published to date, but one available daily time series suggests that individuals tend to resemble themselves microbially day to day over a span of 6–15 months, retaining their separate identities during cohabitation. The development of low-error amplicon sequencing methods has provided a much more reliable way for defining stability at the strain level than was available in the past. Application of these methods to the guts of healthy individuals sampled over time has disclosed that a healthy adult gut harbors a persistent collection of ~100 bacterial species and several hundred strains. The stability of the bacterial components follows a power law: bacterial strains acquired early in life can persist in the gut for decades, although their proportional representation changes as a function of numerous factors, including diet. Whole-genome sequencing of culturable components of the microbiota of study participants has confirmed that strains are retained in individuals for prolonged periods and are shared among family members. 
Resilience The ability of a microbiota or microbiome to rebound from a short-term perturbation, such as antibiotic administration or an infection, is defined as its resilience. This capacity can be visualized as a ball rolling over a landscape of local minima; essentially, the community moves into a new state and, to recover, must move through another, unstable state. In some cases, recovery will lead to the original stable state; in others, it will lead to a new stable state, which may be either healthy or unhealthy. Changes in, for example, diet or host physiologic status may introduce alterations into the landscape itself, making it easier to move from the initial state to any one of a number of other states, potentially with different health consequences. Microbial communities in our body habitats differ widely in resilience. For example, hand washing leads to profound changes in the microbial community, greatly increasing diversity (presumably because of the preferential removal of high-abundance, dominant phylotypes such as Propionibacterium). Within 6 h, the hand microbiota rebounds to resemble the original hand communities. The effects of repeated hand washing still need to be defined; for example, the surface microbiota on the skin (as measured by scrape biopsies) consists of ~50,000 microbial cells/cm², whereas the subsurface microbiota (as measured by punch biopsies) consists of ~1,000,000 microbial cells/cm². 
In a study of three healthy adult volunteers given a short course of ciprofloxacin (500 mg by mouth twice a day for 5 days—a regimen commonly used against uncomplicated urinary tract infections), overall gut-community configuration came to resemble baseline within 6 months after treatment cessation, although some taxa failed to recover. However, the effects of the antibiotic perturbation were highly individualized. Administration of a second course of treatment months later led to altered community states, relative to baseline, in all three volunteers; again, the extent of the alteration differed with the individual. Crucially, as shown in this and other studies, a given bacterial taxon can respond differently to the same antibiotic in different individuals; this observation suggests that the rest of the microbial community plays an important role in determining the effects of antibiotics on a per-individual basis. In any body habitat, the microbial community state after disturbance may be degraded. However, this degraded state may itself be resilient, and it may therefore be difficult to restore a more functional state. For example, Clostridium difficile infection can persist for years. The development and resilience of a degraded state may be driven by positive feedback loops, such as reactive oxygen species cascades involving host macrophages that promote the further growth of proinflammatory Proteobacteria, as well as negative feedback loops such as depletion of the butyrate needed for promotion of a healthy gut epithelial barrier and further establishment of beneficial members of the microbiota. 
Consequently, microbiota-based therapies may require either (1) the elimination of a feedback loop that prevents establishment of a new community or (2) identification of a direction for change and a stimulus of sufficient magnitude (e.g., invasion and establishment of microbes from a fecal transplant or from a defined consortium of cultured, sequenced members of the human gut microbiota; see below) to overcome the resilience mechanisms inherent in the degraded state. A critical unresolved question that especially affects infants, whose microbiota is changing rapidly, is whether intervention during periods of rapid change or during periods of relative stability is generally more effective. ESTABLISHING CAUSAL RELATIONSHIPS BETWEEN THE GUT MICROBIOTA AND NORMAL PHYSIOLOGIC, METABOLIC, AND IMMUNOLOGIC PHENOTYPES AS WELL AS DISEASE STATES Gnotobiotic animals are raised in germ-free environments—with no exposure to microbes—and then colonized at specific stages of life with specified microbial communities. Gnotobiotic mice provide an excellent system for controlling host genotype, microbial community composition, diet, and housing conditions. Microbial communities harvested from donor mice with defined genotypes and phenotypes can be used to determine how the donors’ microbial communities affect the properties of formerly germ-free recipients. The recipients may also affect the transplanted microbiota and its microbiome. Thus gnotobiotic mice afford investigators an opportunity to marry comparative studies of donor communities to functional assays of community properties and to determine how (and for how long) these functions influence host biology. The Cardiovascular System The gut microbiota affects the elaborate microvasculature underlying the small-intestinal epithelium: capillary network density is markedly reduced in adult germ-free animals but can be restored to normal levels within 2 weeks after gut microbiota transplantation. 
Mechanistic studies have shown that the microbiota promotes vascular remodeling in the gut through effects on a novel extravascular tissue factor–protease-activated receptor (PAR1) signaling pathway. Heart weight measured echocardiographically or as wet mass and normalized to tibial length or lean body weight is significantly reduced in germ-free mice; this difference is eliminated within 2 weeks after colonization with a gut microbiota. During fasting, a gut microbiota–dependent increase in hepatic ketogenesis (regulated by peroxisome proliferator–activated receptor α) occurs, and myocardial metabolism is directed to ketone body utilization. Analyses of isolated, perfused working hearts from germ-free and colonized animals, together with in vivo assessments, have shown that myocardial performance in germ-free mice is maintained by increasing glucose utilization. However, heart weight is significantly reduced in both fasted and fed mice; this heart-mass phenotype is completely reversed in germ-free mice fed a ketogenic diet. These findings illustrate how the gut microbiota benefits the host during periods of nutrient deprivation and represent one link between gut microbes and cardiovascular metabolism and health. Conventionally raised apoE-deficient mice develop a less severe form of atherosclerosis than their germ-free counterparts when fed a high-fiber diet. This protective effect of the microbiota is obviated when animals are fed a diet low in fiber and high in simple sugars and fat. A number of the beneficial effects attributed to diets with high proportional representation of whole grains, fruits, and vegetables are thought to be mediated by end products of microbial metabolism of dietary compounds, including short-chain fatty acids and metabolites derived from flavonoids. Conversely, microbes can convert otherwise harmless dietary compounds into metabolites that increase risk for cardiovascular disease. 
Studies of mice and human volunteers have revealed that gut microbiota metabolism of dietary L-carnitine, which is present in large amounts in red meat, yields trimethylamine N-oxide, which can accelerate atherosclerosis in mice by suppressing reverse cholesterol transport. Yet another facet of microbial influence on cardiovascular physiology was revealed in a study of mice deficient in Olfr78 (a G protein–coupled receptor expressed in the juxtaglomerular apparatus, where it regulates renin secretion in response to short-chain fatty acids) or Gpr41 (another short-chain fatty acid receptor that, together with Olfr78, is expressed in the smooth muscle cells present in small resistance vessels). This study demonstrated that the microbiota can modulate host blood pressure via short-chain fatty acids produced by microbial fermentation. Bone Adult germ-free mice have greater bone mass than their conventionally raised counterparts. This increase in bone mass is associated with reduced numbers of osteoclasts per unit bone surface area, reduced numbers of CD11b+/GR1 osteoclast precursors in bone marrow, decreased numbers of CD4+ T cells, and reduced levels of expression of the osteolytic cytokine tumor necrosis factor α. Colonization with a normal gut microbiota resolves these observed differences between germ-free and conventionally raised animals. Brain Adult germ-free and conventionally raised mice differ significantly in levels of 38 out of 196 identified cerebral metabolites, 10 of which have known roles in brain function; included in the latter group are N-acetylaspartic acid (a marker of neuronal health and attenuation), pipecolic acid (a presynaptic modulator of γ-aminobutyric acid levels), and serine (an obligatory co-agonist at the glycine site of the N-methyl-d-aspartate receptor). 
Propionate, a short-chain fatty acid product of gut microbial-community metabolism of dietary fiber, affects expression of genes involved in intestinal gluconeogenesis via a gut–brain neural circuit involving free fatty-acid receptor 3; this effect provides a mechanistic explanation for the documented beneficial impact of dietary fiber in enhancing insulin sensitivity and reducing body mass and adiposity. Studies of a mouse model (maternal immune activation) with stereotyped/repetitive and anxiety-like behaviors indicate that treatment with a member of the human gut microbiota, Bacteroides fragilis, corrects gut barrier (permeability) defects; reduces elevated levels of 4-ethylphenylsulfate, a metabolite seen in the maternal immune activation model that has been causally associated with the animals’ behavioral phenotypes; and ameliorates some behavioral effects. These observations highlight the importance of further exploration of potentially co-evolved relationships between the microbiota and host behavior. Immune Function Many foundational studies have shown that the gut microbiota plays a key role in the maturation of the innate as well as the adaptive components of the immune system. The intestinal epithelium, which is composed of four principal cell lineages (enterocytes plus goblet, Paneth, and enteroendocrine cells), acts as a physical and functional barrier to microbial penetration. Goblet cells produce mucus that overlies the epithelium, where it forms two layers: an outer (luminal-facing) looser layer that harbors microbes and a denser lower layer that normally excludes microbes. Members of the Paneth cell lineage reside at the base of crypts of Lieberkühn and secrete antimicrobial peptides. 
Studies in mice have demonstrated that Paneth cells directly sense the presence of a microbiota through expression of the signaling adaptor protein MyD88, which helps transduce signals to host cells upon recognition of microbial products through Toll-like receptors (TLRs). This recognition drives expression of antibacterial products (e.g., the lectin RegIIIγ) that act to prevent microbial translocation across the gut mucosal barrier. The intestine is enriched for B cells that produce IgA, which is secreted into the lumen; there it functions to exclude microbes from crossing the mucosal barrier and to restrict dissemination of food antigens. The microbiota plays a key role in development of an IgA response: germ-free mice display a marked reduction in IgA+ B cells. The absence of a normal IgA response can lead to a massive increase in bacterial load. B cell–derived IgA that targets specific members of the gut microbiota plays an important role in preventing activation of microbiota-specific T cells. Gut bacterial species elicit development of protective TH17 and TH1 responses that help ward off pathogen attack. Members of the microbiota also promote the development of a specialized population of CD4+ T cells that prevent unwarranted inflammatory responses. These regulatory T cells (Tregs) are characterized by expression of the transcription factor forkhead box P3 (FOXP3) and by expression of other cell-surface markers. There is a paucity of Tregs in the colonic lamina propria of germ-free mice. Specific members of the microbiota—including a consortium of Clostridium strains isolated from the mouse and human gut as well as several human-gut Bacteroides species —expand the Treg compartment and enhance immunosuppressive functions. The microbiota is a key trigger in the development of inflammatory bowel disease (IBD) in mice that harbor mutations in genes associated with IBD risk in humans. 
Moreover, components of the gut microbiota can modify the activity of the immune system to ameliorate or prevent IBD. Mice containing a mutant ATG16L1 allele linked to Crohn’s disease are particularly susceptible to IBD. Upon infection with mouse norovirus and treatment with dextran sodium sulfate, expression of a hypomorphic ATG16L1 allele leads to defects in small-intestinal Paneth cells and renders mice significantly more susceptible to ileitis than are wild-type control animals. This process is dependent on the gut microbiota and highlights how the intersection of host genetics, infectious agents, and the microbiota can lead to severe immune pathology; i.e., the pathogenic potential of a microbiota may be context-dependent, requiring a confluence of factors. An important observation is that members of the gut microbiota, including B. fragilis or members of Clostridium, prevent the severe inflammation that develops in mouse models mimicking various aspects of human IBD. The gut microbiota has been implicated in promoting immunopathology outside of the intestine. A multiple sclerosis–like disease develops in conventionally raised mice whose CD4+ T cell compartment is reactive to myelin oligodendrocyte glycoprotein; their germ-free counterparts are completely protected from development of multiple sclerosis–like symptoms. This protection is reversed by colonization with a gut microbiota from conventionally raised animals. Inflammasomes are cytoplasmic multiprotein complexes that sense stress and damage-associated patterns. Mice deficient in NLRP6, a component of the inflammasome, are more susceptible to colitis induced by administration of dextran sodium sulfate. This enhanced susceptibility is associated with alterations in the gut microbiota of these animals relative to that of wild-type controls. Mice are coprophagic, and co-housing of NLRP6-deficient mice with wild-type mice is sufficient to transfer the enhanced susceptibility to colitis induced by dextran sodium sulfate. 
Similar findings have been reported for mice deficient in the inflammasome adaptor ASC (apoptosis-associated speck-like protein containing a caspase recruitment domain). ASC-deficient mice are more susceptible to the development of a model of nonalcoholic steatohepatitis. This susceptibility is associated with alterations in gut microbiota structure and can be transferred to wild-type animals by co-housing. Obesity and Diabetes Germ-free mice are resistant to diet-induced obesity. Genetically obese ob/ob mice have gut microbial-community structures that are profoundly altered from those in their lean wild-type (+/+) and heterozygous +/ob littermates. Transplantation of the ob/ob mouse microbiota into wild-type germ-free animals transmits an increased-adiposity phenotype not seen in mice receiving microbiota transplants from +/+ and +/ob littermates. These differences are not attributable to differences in food consumption but rather are associated with differences in microbial community metabolism. Roux-en-Y gastric bypass produces pronounced decreases in weight and adiposity as well as improved glucose metabolism—changes that are not ascribable simply to decreased caloric intake or reduced nutrient absorption. 16S rRNA analyses have documented that changes in the gut microbiota after this surgery are conserved among mice, rats, and humans; animal studies have demonstrated these changes along the length of the gut but most prominently downstream of the site of surgical manipulation of the bowel. Notably, transplantation of the gut microbiota from mice that have undergone Roux-en-Y gastric bypass to germ-free mice that have not had this surgery produces reductions in weight and adiposity not seen in recipients of microbiotas from mice that underwent sham surgery. The gut microbiota confers protection against the development of type 1 diabetes mellitus in the non-obese diabetic (NOD) mouse model. 
Disease incidence is significantly lower in conventionally raised male NOD mice than in their female counterparts, while germ-free males are as susceptible as their female counterparts. Castration of males increases disease incidence, while androgen treatment of females provides protection. Transfer of the gut microbiota from adult male NOD mice to female NOD weanlings is sufficient to reduce the severity of disease relative to that among females receiving a microbiota from an adult female or an unmanipulated female. The blocking of protection by treatment with flutamide highlights a functional role for testosterone signaling in this microbiota-mediated protection against type 1 diabetes. NOD mice deficient in MyD88, a key component of the TLR signaling pathway, do not develop diabetes and exhibit increased relative abundance of members of the family-level taxon Lactobacillaceae. Consistent with these findings, investigators have documented lower levels of representation of members of the genus Lactobacillus in children with type 1 diabetes than in healthy controls. Components of lactobacilli have been shown to promote gut barrier integrity. Studies in various animal models indicate that translocation of bacterial components, including bacterial lipopolysaccharides, across a leaky gut barrier triggers low-grade inflammation, which contributes to insulin resistance. Mice deficient in TLR5 exhibit alterations in the gut microbiota and hyperphagia, and they develop features of metabolic syndrome, including hypertension, hyperlipidemia, insulin resistance, and increased adiposity. The gut microbiota regulates biosynthesis as well as metabolism of host-derived products; these products can signal through host receptors to shape host physiology. An example of this symbiosis is provided by bile acids, which direct metabolic effects that are largely mediated through the farnesoid X receptor (FXR, also known as NR1H4). 
In leptin-deficient mice, FXR deficiency protects against obesity and improves insulin sensitivity. In mice with diet-induced obesity that are subjected to vertical sleeve gastrectomy, the surgical procedure results in elevated levels of circulating bile acids, changes in the gut microbiota, weight loss, and improved glucose homeostasis. However, weight reduction and improved insulin sensitivity are mitigated in animals with engineered FXR deficiency.

Xenobiotic Metabolism

Evidence is accumulating that pharmacogenomic studies need to consider the gene repertoire present in our H. sapiens genome as well as that in our microbiomes. For example, digoxin is inactivated by the human gut bacterium Eggerthella lenta, but only by strains with a cytochrome-containing operon. Expression of this operon is induced by digoxin and inhibited by arginine. Studies in gnotobiotic mice established that dietary protein affects (reduces) microbial metabolism of digoxin, with corresponding alterations in levels of the drug in both serum and urine. These findings reinforce the need to consider strain-level diversity in the gut microbiota when examining interpersonal variations in the metabolism of orally administered drugs.

Characterizing the Effects of the Human Microbiota on Host Biology in Mice and Humans

Questions about the relationship between human microbial communities and health status can be posed in the following general format: Is there a consistent configuration of the microbiota definable in the study population that is associated with a given disease state? How is the configuration affected by remission/relapse or by treatment? If a reconfiguration does occur with treatment, is it durable? How is host biology related to the configuration or reconfiguration? What is the effect size? Are correlations robust to individuals from different families and communities representing different ages, geographic locales, and lifestyles? 
As in all studies involving human microbial ecology, the issue of what constitutes a suitable reference control is extremely important. Should we choose the person himself or herself, family members, or age- or gender-matched individuals living in the same locale and representing similar cultural traditions? Critically, are the relationships observed between microbial community structure and expressed functions a response to disease state (i.e., side effects of other processes), or are they a contributing cause? In this sense, we are challenged to evolve a set of Koch’s postulates that can be applied to whole microbial communities or components of communities rather than just to a single purified organism. As in other circumstances in which experiments to determine causality of human disease are difficult or unethical, Hill’s criteria, which examine the strength, consistency, and biologic plausibility of epidemiologic data, can be useful. Sets of mono- and dizygotic twins and their family members represent a valuable resource for initially teasing out relationships between environmental exposures, genotypes, and our own microbial ecology. Similarly, monozygotic twins discordant for various disease states enhance the ability to determine whether various diseases can be linked to a person’s microbiota and microbiome. A twin-pair sampling design rather than a conventional unrelated case–control design has advantages owing to the pronounced between-family variability in microbiota/microbiome composition and the potential for multiple states of a community associated with disease. Transplantation of a microbiota from suitable human donor controls representing different disease states and communities (e.g., twins discordant for a disease) to germ-free mice is helpful in establishing a causal role for the community in pathogenesis and for providing insights relevant to underlying mechanisms. 
In addition, transplantation provides a preclinical platform for identifying next-generation probiotics, prebiotics, or combinations of the two (synbiotics). Obesity and obesity-associated metabolic dysfunction illustrate these points. The gut microbiotas (and microbiomes) of obese individuals are significantly less diverse than those of lean individuals; the implication is that there may be unfilled niches (unexpressed functions) that contribute to obesity and its associated metabolic abnormalities. Le Chatelier and colleagues observed a bimodal distribution of gene abundance in their analysis of 292 fecal microbiomes: low-gene-count (LGC) individuals averaged 380,000 microbial genes per gut microbiome, while high-gene-count (HGC) individuals averaged 640,000 genes. LGC individuals had an increased risk for type 2 diabetes and other metabolic abnormalities, whereas the HGC group was metabolically healthy. When gene content was used to identify taxa that discriminated HGC and LGC individuals, the results revealed associations between anti-inflammatory bacterial species such as Faecalibacterium prausnitzii and the HGC group and between proinflammatory species such as Ruminococcus gnavus and the LGC group. LGC microbiomes had significantly greater representation of genes assigned to tricarboxylic acid cycle modules, peroxidases, and catalases—an observation suggesting a greater capacity to handle oxygen exposure and oxidative stress; HGC microbiomes were enriched in genes involved in the production of organic acids, including lactate, propionate, and butyrate—a result suggesting increased fermentative capacity. Transplantation of an uncultured fecal microbiota from twins stably discordant for obesity or of bacterial culture collections generated from their microbiota transmits their discordant adiposity phenotypes as well as obesity-associated metabolic abnormalities to recipient germ-free mice. 
Co-housing of the recipient coprophagic gnotobiotic mice results in invasion of specific bacterial species from the transplanted lean twin’s culture collection into the guts of cage mates harboring the obese twin’s culture collection (but not vice versa), thereby preventing the latter animals from developing obesity and its associated metabolic abnormalities. It is noteworthy that invasion and prevention of obesity and metabolic phenotypes are dependent on the type of human diets fed to animals: prevention is associated with a diet low in saturated fats and high in fruit and vegetable content, but not with a diet high in saturated fats and low in fruit and vegetable content. This approach provides evidence for a causal role for the microbiota in obesity and its attendant metabolic abnormalities. It also provides a method for defining unoccupied niches in disease-associated microbial communities, the role of dietary components in determining how these niches can be filled by human gut–derived bacterial taxa, and the effects of such occupancy on microbial and host metabolism. It also provides a way to identify health-promoting diets and next-generation probiotics representing naturally occurring members of our indigenous microbial communities that are well adapted to persist in a given body habitat. A key to this approach is the ability to harvest a microbial community from a donor representing a physiology, disease state, lifestyle, or geography of interest; to preserve the donor’s community by freezing it; and then to resurrect and replicate it in multiple recipient gnotobiotic animals that can be reared under conditions where environmental and host variables can be controlled and manipulated to a degree not achievable in clinical studies. Since these mice can be followed as a function of time prior to and after transplantation, in essence, a snapshot of a donor’s community can be converted into a movie. 
Transplantation of intact uncultured human (fecal) microbiota samples from multiple donors representing the phenotype of interest, with administration of the donors’ diets (or derivatives of those diets) to different groups of mice, is one way to assess whether transmissible responses are shared features of the microbiota or are highly donor specific. A second step is to determine whether the culturable component of a representative microbiota sample can transmit the phenotype(s) observed with the intact uncultured sample. Possession of a collection of cultured organisms that have co-evolved in a given donor’s body habitat sets the stage for the selection of subsets of the collection for testing in gnotobiotic mice, the determination of which members are responsible for effecting the phenotype, and the elucidation of the mechanisms underlying these effects. The models used may inform the design and interpretation of clinical studies of the very individuals and populations whose microbiota are selected for creating these models. Human-to-human fecal microbiota transplantation (FMT) is currently the most direct way to establish proof-of-concept for a causal role for the microbiota in disease pathogenesis. A human donor’s feces are provided to a recipient via nasogastric tube or another technique. Numerous small trials have documented the effects of FMT from healthy donors to recipients with diseases ranging from C. difficile infection to Crohn’s disease, ulcerative colitis, and type 2 diabetes. Only a few of these studies have used a double-blind, placebo-controlled design. In a double-blind, controlled trial involving men 21–65 years old with a body mass index of >30 kg/m2 and documented insulin resistance, FMT was performed using a microbiota from metabolically healthy lean donors or from the study participants themselves. A microbiota from lean donors significantly improved peripheral insulin sensitivity over that in controls. 
This change was associated with an increase in the relative abundance of the butyrate-producing bacteria related to Roseburia intestinalis (in the feces) and Eubacterium hallii (in the small intestine). The efficacy of FMT for the treatment of recurrent C. difficile infection has been assessed in a number of small trials. One unblinded, placebo-controlled trial assessed the use of FMT in 42 patients with recurrent C. difficile infection (defined as at least one relapse after treatment with vancomycin or metronidazole for ≥10 d). Patients were pretreated with oral vancomycin. The experimental group then received FMT via nasoduodenal tube from healthy volunteer donors (<60 years of age) selected from the community. Controls underwent sterile lavage or received oral vancomycin alone. In 10 weeks of follow-up, infection was cured (with cure defined as three negative fecal tests for C. difficile toxin) in 81% of patients in the FMT group (13 of 16) but in only 23% (3 of 13) in the bowel-lavage control arm and 31% (4 of 13) in the vancomycin-only group. Metagenomic analysis of microbiota samples collected before and after treatment revealed an increased representation of Bacteroidetes and Clostridium clusters IV and XIVa, along with a 100-fold decrease in the relative abundance of Proteobacteria, in the FMT group. A meta-analysis of FMT in C. difficile infection examined 20 case-series publications, 15 case reports, and the one unblinded study described above. All but one of these studies used fresh (not frozen) fecal samples. Donor selection varied, although most donors were family members or relatives and most studies excluded donors who had recently received antibiotics. It is noteworthy that the concentrations of infused donor feces varied widely (i.e., from 5 g to 200 g, resuspended in 10–500 mL); these fecal suspensions were introduced at different sites along the gastrointestinal tract, including the stomach and points throughout the small intestine and colon. 
Resolution of infection, which was frequently assessed on the basis of symptom resolution (with C. difficile toxin testing rarely performed), was documented in 87% (467 of 536) of treated patients. The most common adverse events reported were diarrhea (94% of cases) and abdominal cramps (31%) on the day of infusion. The meta-analysis was limited to clinical outcomes and did not specifically address the role of the microbiota in disease resolution (e.g., the extent of invasion of donor taxa; their persistence; or the long-term effects of transplantation on various facets of host biology, which generally have not been evaluated). Sober and thoughtful consideration needs to be applied to the therapeutic use of FMT, which represents an early and rudimentary approach to microbiota manipulation that very likely will be replaced by administration of defined collections of sequenced, cultured members of the human microbiota (probiotic consortia). A number of published reports on FMT have garnered significant public attention. This attention, coupled with an increasing public appreciation of the beneficial nature of our interactions with microbes, demands that the precautionary principle be honored and that risks versus benefits of such interventions be carefully evaluated. 
To date, most FMT trials have failed to define (or have differed in) significant confounders, including (1) the criteria used for donor sample selection; (2) the methods used for donor sample preparation and characterization as well as the decision about whether or not to create a repository for donor and recipient samples that will permit retrospective analyses (and meta-analyses for given disease states); (3) the development of minimal standards for assessing the invasion of recipient gut communities by taxa from donor microbiota (using microbial source-tracking methods) as well as the timing, duration, nature, and breadth of sampling of the recipient as a function of transplantation; (4) the adoption of minimal standards for collection of patients’ clinical data (e.g., age, diet, antibiotic use) and the establishment of databases for entering these data (including use of a defined vocabulary for annotating the clinical data); and (5) the development of standards for informed consent in the absence of knowledge of the long-term effects of the procedure. The regulatory landscape is evolving. The U.S. Food and Drug Administration recently issued an enforcement policy specifically addressing the use of FMT for the treatment of recurrent C. difficile infection; this policy indicates that the agency intends to “exercise enforcement discretion regarding the investigational new drug (IND) requirements for the use of FMT to treat C. difficile infection not responding to standard therapies,” but it does not waive IND requirements for other FMT studies. The design of human microbiome studies is rapidly evolving, in part because the data are highly multivariate, are compositional, and do not meet distributional assumptions of standard statistical tests such as analysis of variance. Consequently, the proper number of subjects to enroll and the proper populations to target remain to be established. 
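The compositional character of microbiome data noted above has a standard (if imperfect) workaround: transforming relative abundances before applying conventional statistics. As a minimal sketch, assuming hypothetical taxon counts and a simple pseudocount choice, a centered log-ratio (CLR) transform might look like this:

```python
import math

def clr(counts, pseudocount=0.5):
    """Centered log-ratio (CLR) transform of one sample's taxon counts.

    Relative abundances are compositional (they sum to a constant), which
    violates assumptions of tests such as analysis of variance; CLR maps
    them into unconstrained real coordinates. A pseudocount avoids log(0).
    """
    adjusted = [c + pseudocount for c in counts]
    logs = [math.log(a) for a in adjusted]
    mean_log = sum(logs) / len(logs)  # log of the geometric mean
    return [x - mean_log for x in logs]

# hypothetical taxon counts for a single subject's sample
transformed = clr([120, 30, 0, 850])
# CLR values sum to ~0 by construction
```

The pseudocount and the counts themselves are invented for illustration; real analyses must also contend with zero-inflation and the choice of reference.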
One useful approach is to review published studies and ask whether the reported conclusion could be obtained with fewer subjects (sample rarefaction) and/or fewer sequencing reads from SSU rRNA genes, whole-community DNA (microbiomes), or expressed community mRNA (metatranscriptomes) per subject (sequence rarefaction). A common yet critical problem to avoid is under-sampling of the types of objects under study. For example, if the goal is to compare factors applying to individuals (e.g., individual diet), then dozens of individuals in each clinical category may be needed. If the goal is to compare factors applying to populations (e.g., demographic properties), then many populations may be needed. Another key issue is whether the effect size to be studied, especially in meta-analysis, is greater than or less than technical effects. As noted above, different PCR primers will lead to different readouts of the taxonomy of a microbial community; these differences are, for example, greater than the differences between lean and obese subjects’ fecal microbiota but less than the difference between fecal communities in newborns and adults. A central challenge in human microbiome research is establishing the extent to which diagnostic tests and therapeutic approaches are generalizable. This challenge is illustrated by studies of the capacities of gut microbiomes to metabolize orally administered drugs. The results could be very informative for the pharmaceutical industry as it seeks new and more accurate ways to predict bioavailability and toxicity. However, these studies should prompt consideration of the fact that many clinical trials are outsourced to countries where trial participants have diets and microbial community structures that differ from those of the intended initial recipients of the (marketed) drug. 
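The sample- and sequence-rarefaction idea described above can be prototyped in a few lines: repeatedly subsample reads to a fixed depth and record the average number of distinct taxa observed. The community below is invented for illustration, and the function is a sketch rather than a production tool:

```python
import random

def rarefaction_curve(reads, depths, trials=50, seed=0):
    """Average observed taxon richness at each subsampling depth.

    `reads` is one taxon label per sequencing read; at each depth the
    reads are repeatedly subsampled without replacement and the number
    of distinct taxa observed is averaged over the trials.
    """
    rng = random.Random(seed)
    curve = []
    for depth in depths:
        richness = [len(set(rng.sample(reads, depth))) for _ in range(trials)]
        curve.append(sum(richness) / trials)
    return curve

# hypothetical community: 3 abundant taxa plus 7 rare ones (10 reads each)
reads = ["A"] * 400 + ["B"] * 300 + ["C"] * 200 + ["rare%d" % i for i in range(7)] * 10
curve = rarefaction_curve(reads, depths=[10, 100, 500])
# richness estimates plateau once most taxa have been observed
```

If the curve has flattened well before the actual sequencing depth, additional reads (or subjects, in the sample-rarefaction analogue) would not change the conclusion.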
Capture and preservation of the wide range of microbial diversity present in different human populations—and thus of the capacity of our microbial communities to catalyze elaborate and in many respects uncharacterized biotransformations—represent potentially fertile ground for the discovery of new drugs (and new industrial processes of societal value). The chemical entities that our microbial communities have evolved to synthesize in order to support their mutually beneficial relationships and the human genes that these chemotypes influence may become new classes of drugs and new targets for drug discovery, respectively. Therefore, characterization of groups of individuals living in countries that are undergoing rapid transformations in cultural traditions and socioeconomic conditions and are witnessing the emergence of a variety of diseases associated with increasingly Western lifestyles (globalization) is a timely challenge. Birth cohort studies (including studies of twins) initiated every 10 years in these countries may be able to capture the impact of globalization, including changing diets, on human microbial ecology. Although microbiome-associated diagnostics and therapeutics provide new and exciting dimensions for personalized medicine, attention must be paid to the potentially broad societal impact of this work. For example, studies of the human gut microbiome are likely to have a disruptive effect on current views of human nutrition, enhancing appreciation of how food and the metabolic output of interactions of dietary components with the microbiota are intimately connected to myriad features of human biology. 
Underlying the efforts to elucidate the relations among food, the microbiome, and human nutrition is a need to proactively develop materials for educational outreach with a narrative and vocabulary that are understandable to broad and varied consumer populations representing different cultural traditions and widely ranging degrees of scientific literacy. The results have the potential to catalyze efforts to integrate agricultural policies and practice, food production, and nutritional recommendations for consumer populations representing different ages, geographic locales, and states of health. Defining our metagenome (the genes embedded in our H. sapiens genome plus those in our microbiome) will likely lead to an entirely new level of refinement in our description of self, our genetic evolution, our postnatal development, the microbial legacy of our connection to family, and the consequences of personal lifestyle choices. While this information can help us understand the origins of certain yet unexplained health disparities, care must be taken to avoid stigmatization of individuals or groups of individuals having different cultural norms, belief systems, or behaviors. In partnership with human microbiome researchers, anthropologists need to examine the impact of studies of the human microbiome on the participants, assessing how this field and participants’ cultural traditions interact to affect these individuals’ perceptions about the natural world, the forces that affect their lives, and their connections to one another within the context of family and community. 
Studies of human microbial ecology are an important manifestation of progress in the genome sciences, represent a timely step in our quest to achieve a better understanding of our place in the natural world, and reflect the evolving focus of twenty-first-century medicine on disease prevention, new definitions of health, new ways to determine the origins of individual biologic differences, and new approaches to evaluating the impact of changes in our lifestyles and biosphere on our biology. As microbiome-directed diagnostics and therapeutics emerge, we must be sensitive to the societal impact of this work.

Chapter 87e Network Medicine: Systems Biology in Health and Disease

that challenge the discipline. Biologists study the experimental response of a variable of interest in a cell or organism while holding all other variables constant. In this way, it is possible to dissect the individual components of a biologic system and assume that a thorough understanding of a specific component (e.g., an enzyme or a transcription factor) will provide sufficient insight to explain the global behavior of that system (e.g., a metabolic pathway or a gene network, respectively). Biologic systems are, however, much more complex than this approach assumes and manifest behaviors that frequently (if not invariably) cannot be predicted from knowledge of their component parts characterized in isolation. Growing recognition of this shortcoming of conventional biologic research has led to the development of a new discipline, systems biology, which is defined as the holistic study of living organisms or their cellular or molecular network components to predict their response to perturbations. Concepts of systems biology can be applied readily to human disease and therapy and define the field of systems pathobiology, in which genetic or environmental perturbations produce disease and drug perturbations restore normal system behavior. 
Systems biology evolved from the field of systems engineering, in which a linked collection of component parts constitutes a network whose output the engineer wishes to predict. The simple example of an electronic circuit can be used to illustrate some basic systems engineering concepts. All the individual elements of the circuit—resistors, capacitors, transistors—have well-defined properties that can be characterized precisely. However, they can be linked (wired or configured) in a variety of ways, each of which yields a circuit whose response to voltage applied across it is different from the response of every other configuration. To predict the circuit’s (i.e., system’s) behavior, the engineer must study its response to perturbation (e.g., voltage applied across it) holistically rather than its individual components’ responses to that perturbation. Viewed another way, the resulting behavior of the system is greater than (or different from) the simple sum of its parts, and systems engineering utilizes rigorous mathematical approaches to predict these complex, often nonlinear, responses. By analogy to biologic systems, one can reason that detailed knowledge of a single enzyme in a metabolic pathway or of a single transcription factor in a gene network will not provide sufficient detail to predict the output of that metabolic pathway or transcriptional network, respectively. Only a systems-based approach will suffice. It has taken biologists a long time to appreciate the importance of systems approaches to biomedical problems. Reductionism has reigned supreme for many decades, largely because it is experimentally and analytically simpler than holism, and because it has provided insights into biologic mechanisms and disease pathogenesis that have led to successful therapies. However, reductionism cannot solve all biomedical problems. 
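The circuit analogy above lends itself to a minimal numerical experiment. The sketch below integrates the step response of a first-order RC low-pass filter with the explicit Euler method; the component values are arbitrary, chosen only for illustration, and the point is that the output trajectory is a property of the configured system rather than of R or C in isolation:

```python
def rc_step_response(v_in, r, c, dt=1e-4, n_steps=500):
    """Explicit Euler integration of a first-order RC low-pass filter's
    response to a step input: dV/dt = (V_in - V) / (R*C).

    Component values are hypothetical, chosen for illustration.
    """
    v = 0.0
    tau = r * c
    trace = []
    for _ in range(n_steps):
        v += dt * (v_in - v) / tau  # Euler update toward the input voltage
        trace.append(v)
    return trace

# 1 kOhm and 10 uF give a time constant of 10 ms; 500 steps of 0.1 ms
# cover five time constants, so the output nearly reaches the 5-V input.
trace = rc_step_response(v_in=5.0, r=1e3, c=1e-5)
```

Rewiring the same R and C (say, as a high-pass filter) would yield a qualitatively different trace from identical components, which is the systems-engineering point the text makes.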
For example, the so-called off-target effects of new drugs that frequently limit their adoption likely reflect the failure of a drug to be studied in a holistic context, i.e., the failure to explore all possible actions aside from the principal target action for which it was developed. Other approaches to understanding biology therefore are clearly needed. With the growing body of genomic, proteomic, and metabolomic data sets in which dynamic changes in the expression of many genes and many metabolites are recorded after a perturbation and with the growth of rigorous mathematical approaches to analyzing those changes, the stage has been set for applying systems engineering principles to modern biology. Physiologists historically have had more of a (bio)engineering perspective on the conduct of their studies and have been among the first systems biologists. Yet, with few exceptions, they, too, have focused on comparatively simple physiologic systems that are tractable using conventional reductionist approaches. Efforts at integrative modeling of human physiologic systems, as first attempted by Guyton for blood pressure regulation, represent one application of systems engineering to human biology. These dynamic physiologic models often focus on the acute response of a measurable physiologic parameter to a system perturbation, and do so from a classic analytic perspective in which all the conventional physiologic determinants of the output parameter are known and can be modeled quantitatively. Until recently, molecular systems analysis has been limited owing to inadequate knowledge of the molecular determinants of a biologic system of interest. Although biochemists have approached metabolic pathways from a systems perspective for over 50 years, their efforts have been limited by the inadequacy of key information for each enzyme (KM, kcat, and concentration) and substrate (concentration) in the pathway. 
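The limitation just described, that predicting pathway output requires kinetic parameters for every enzyme and substrate, can be made concrete with a toy model. The two-enzyme pathway S -> I -> P below uses invented Vmax and Km values; it is a sketch of the systems framing, not a model of any real pathway:

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def simulate_pathway(s0, steps=10000, dt=0.001):
    """Euler simulation of a hypothetical two-enzyme pathway S -> I -> P.

    The Vmax and Km values are invented; the point is that the pathway's
    output depends on all of the enzymes' parameters taken together,
    which is exactly the information biochemists historically lacked.
    """
    s, i, p = s0, 0.0, 0.0
    for _ in range(steps):
        v1 = mm_rate(s, vmax=1.0, km=0.5)   # enzyme 1: S -> I
        v2 = mm_rate(i, vmax=0.8, km=0.2)   # enzyme 2: I -> P
        s -= v1 * dt
        i += (v1 - v2) * dt
        p += v2 * dt
    return s, i, p

s_final, i_final, p_final = simulate_pathway(s0=2.0)
# total mass (S + I + P) is conserved up to rounding error
```

Changing any single Km or Vmax reshapes the time course of the intermediate and product, which is why knowledge of one enzyme in isolation does not predict pathway flux.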
With increasingly rich molecular data sets available for systems-based analyses, including genomic, transcriptomic, proteomic, and metabolomic data, biochemists are now poised to use systems biology approaches to explore biologic and pathobiologic phenomena. To understand how best to apply the principles of systems biology to human biomedicine, it is necessary to review briefly the building blocks of any biologic system and the determinants of system complexity. All systems can be analyzed by defining their static topology (architecture) and their dynamic (i.e., time-dependent) response to perturbation. In the discussion that follows, system properties are described that derive from the consequences of topology (form) or dynamic response (function). Any system of interacting elements can be represented schematically as a network in which the individual elements are depicted as nodes and their connections are depicted as links. The nature of the links among nodes reflects the degree of complexity of the system. Simple systems are those in which the nodes are linearly linked with occasional feedback or feedforward loops modulating system throughput in highly predictable ways. By contrast, complex systems are those whose nodes are linked in more complicated, nonlinear networks; the behavior of these systems by definition is inherently more difficult to predict owing to the nature of the interacting links, the dependence of the system’s behavior on its initial conditions, and the inability to measure the overall state of the system at any specific time with great precision. 
Complex systems can be depicted as a network of lower-complexity interacting components or modules, each of which can be reduced further to simpler analyzable canonical motifs (such as feedback and feedforward loops, or negative and positive autoregulation); however, a central property of complex systems is that simplifying their structures by identifying and characterizing the individual nodes and links or even simpler substructures does not necessarily yield a predictable understanding of a system’s behavior. Thus, the functioning system is greater than (or different from) the sum of its individual, tractable parts. Defined in this way, most biologic systems are complex systems that can be represented as networks whose behaviors are not readily predictable from simple reductionist principles. The nodes, for example, can be metabolites that are linked by the enzymes that cause their transformations, transcription factors that are linked by the genes whose expression they influence, or proteins in an interaction network that are linked by cofactors that facilitate interactions or by thermodynamic forces that facilitate their physical association. Biologic systems typically are organized as scale-free, rather than stochastic, networks of nodes. Scale-free networks are those in which a few nodes have many links to other nodes (highly linked nodes, or hubs) but most nodes have only a few links (weakly linked nodes). The term scale-free refers to the fact that the connectivity of nodes in the network is invariant with respect to the size of the network. This is quite different from two other common network architectures: random (Poisson) and exponential distributions. Scale-free networks can be mathematically described by a power law that defines the probability of the number of links per node (P(k) ∝ k^−γ, where k is the number of links per node and γ is the slope of the log P(k) versus log k plot); this unique property of most biologic networks is a reflection of their self-similarity or fractal nature (Fig. 87e-1).

FIGURE 87e-1 Network representations and their distributions. A random network is depicted on the left, and its Poisson distribution of the number of nodal connections (k) is shown in the graph below it. A scale-free network is depicted on the right, and its power law distribution of the number of nodal connections (k) is shown in the graph below it. Highly connected nodes (hubs) are lightly shaded.

There are unique properties of scale-free biologic systems that reflect their evolution and promote their adaptability and survival. Biologic networks likely evolved one node at a time in a process in which new nodes are more likely to link to a highly connected node than to a sparsely connected node. Furthermore, scale-free networks can become sparsely linked to one another, yielding more complex, modular scale-free topologies. This evolutionary growth of biologic networks has three important properties that affect system function and survival. First, this scale-free addition of new nodes promotes system redundancy, which minimizes the consequences of errors and accommodates adverse perturbations to the system robustly with minimal effects on critical functions (unless the highly connected nodes are the focus of the perturbation). Second, this resulting network redundancy provides a survival advantage to the system. In complex gene networks, for example, mutations or polymorphisms in weakly linked genes account for biodiversity and biologic variability without disrupting the critical functions of the system; only mutations in highly linked (essential) genes (hubs) can shut down the system and cause embryonic lethality. 
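The preferential-attachment growth process described above is straightforward to simulate. The sketch below uses the standard trick of sampling uniformly from a running list of link endpoints, which makes the choice of attachment target automatically proportional to degree; the network size is arbitrary:

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a network one node at a time, attaching each new node to an
    existing node chosen with probability proportional to its degree.

    Sampling uniformly from the list of link endpoints is degree-biased,
    because a node of degree k appears k times in that list.
    Returns the degree of every node.
    """
    rng = random.Random(seed)
    degrees = [1, 1]        # start from two nodes joined by one link
    endpoints = [0, 1]      # every link contributes both of its endpoints
    for new_node in range(2, n_nodes):
        target = rng.choice(endpoints)
        degrees.append(1)               # the new node arrives with one link
        degrees[target] += 1
        endpoints.extend([new_node, target])
    return degrees

degrees = preferential_attachment(5000)
# a few heavily linked hubs emerge while most nodes keep only a few links
```

Plotting the resulting degree counts on log-log axes approximates the straight-line power law signature of a scale-free network, in contrast to the peaked Poisson distribution of a random graph.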
Third, scale-free biologic systems facilitate the flow of information (e.g., metabolite flux) across the system more efficiently than randomly organized networks do; this so-called "small-world" property of the system (in which the clustered nature of the highly linked hubs defines a local neighborhood within the network that communicates through weaker, less frequent links to other clusters) minimizes the energy cost for the dynamic action of the system (e.g., minimizes the transition time between states in a metabolic network). These basic organizing principles of complex biologic systems lead to three unique properties that require emphasis. First, biologic systems are robust: they are quite stable in response to most changes in external conditions or internal modification. Second, as a corollary of robustness, complex biologic systems are sloppy: they are insensitive to changes in external conditions or internal modification except under certain uncommon conditions (i.e., when a hub is involved in the change). Third, complex biologic systems exhibit emergent properties: they manifest behaviors that cannot be predicted from the reductionist principles used to characterize their component parts. Examples of emergent behavior in biologic systems include spontaneous, self-sustained oscillations in glycolysis; spiral and scroll waves of depolarization in cardiac tissue that cause reentrant arrhythmias; and self-organizing patterns in biochemical systems governed by diffusion and chemical reaction. The principles of systems biology have been applied to complex pathologic processes with some early successes. The key to these applications is the identification of emergent properties of the system under study in order to define novel methods, unpredictable from the reductionist perspective, for regulating the system's response.
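Among the emergent behaviors just listed, self-sustained glycolytic oscillation is easy to reproduce in silico with the two-variable Sel'kov model of glycolysis (x loosely representing ADP and y F6P). The parameter values and step size below are illustrative choices for a sketch, not physiologic measurements:

```python
def selkov(a=0.08, b=0.6, dt=0.005, t_end=200.0):
    """Forward-Euler integration of the Sel'kov glycolysis model:
        dx/dt = -x + a*y + x^2*y
        dy/dt =  b - a*y - x^2*y
    For suitable (a, b) the steady state is unstable and the trajectory
    settles onto a self-sustained limit cycle (spontaneous oscillation)."""
    x, y = 1.0, 1.0
    xs = []
    for _ in range(int(t_end / dt)):
        dx = -x + a * y + x * x * y
        dy = b - a * y - x * x * y
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return xs

xs = selkov()
late = xs[len(xs) // 2:]   # discard the initial transient
print(f"sustained oscillation amplitude: {max(late) - min(late):.2f}")
```

No periodic forcing is applied anywhere in the model; the oscillation arises from the coupled reactions themselves, which is precisely the sense in which such behavior is emergent rather than built into any single component.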
Systems biology approaches have been used to characterize epidemics and ways to control them, taking advantage of the scale-free properties of the network of infected individuals that constitute the epidemic. Through a systems analysis of a neural protein-protein interaction network, unique disease-modifying proteins have been identified that are common to a wide range of cerebellar neurodegenerative disorders causing inherited ataxias. Systems analysis and disease network construction of a pulmonary arterial hypertension network led to the identification of a unique disease module involving a pathway governed by microRNA-21. Systems biology models have been used to dissect the dynamics of the inflammatory response using oscillatory changes in the transcription factor nuclear factor (NF)-κB as the system output. Systems biology principles also have been used to predict the development of an idiotype-anti-idiotype antibody network, describe the dynamics of species growth in microbial biofilms, and analyze the innate immune response. In each of these examples, a systems (patho)biology approach provided insights into the behavior of these complex systems that could not have been recognized with conventional scientific reductionism. A unique application of systems biology to biomedicine is in the area of drug development. Conventional drug development involves identifying a potential target protein and then designing or screening compounds to identify those that inhibit the function of that target. This reductionist analysis has identified many potential drug targets and drugs, yet only when a drug is tested in animal models or humans are the systems consequences of the drug's action revealed; not uncommonly, so-called off-target effects become apparent and may be sufficiently adverse for researchers to cease development of the agent. A good example of this problem is the unexpected outcome of the vitamin B-based regimens for lowering homocysteine levels.
In these trials, plasma homocysteine levels were reduced effectively; however, this reduction had no effect on clinical vascular endpoints. One explanation for this outcome is that one of the B vitamins in the regimen, folate, has a panoply of effects on cell proliferation and metabolism that probably offset its homocysteine-lowering benefit by promoting progressive atherosclerotic plaque growth and its consequences for clinical events. In addition to these types of unexpected outcomes exerted through pathways that were not considered ab initio, conventional approaches to drug development typically do not take into consideration the possibility of emergent behaviors of the organism, the metabolic pathway, or the transcriptional network of interest. Thus, a systems-based analysis of potential drugs (drug-target network analysis) can benefit the development paradigm both by enhancing the likelihood that a compound of interest will not manifest unforeseen adverse effects and by promoting novel analytic methods for identifying unique control points or pathways in metabolic or genetic networks that would benefit from drug-based modulation.
SYSTEMS PATHOBIOLOGY AND HUMAN DISEASE CLASSIFICATION: NETWORK MEDICINE
Perhaps most important, systems pathobiology can be used to revise and refine the definition of human disease. The classification of human disease used in this and all medical textbooks derives from the correlation between pathologic analysis and clinical syndromes that began in the nineteenth century. Although this approach has been very successful, serving as the basis for the development of many effective therapies, it has major shortcomings.
Those shortcomings include a lack of sensitivity in defining preclinical disease, a primary focus on overtly manifest disease, failure to recognize different and potentially differentiable causes of common late-stage pathophenotypes, and a limited ability to incorporate the growing body of molecular and genetic determinants of pathophenotype into the conventional classification scheme. Two examples will illustrate the weakness of simple correlation analyses grounded in the reductionist principle of simplification (Occam's razor) in defining human disease. Sickle cell anemia, the "classic" Mendelian disorder, is caused by a Glu6Val substitution in the β chain of hemoglobin. If conventional genetic teaching holds, this single mutation should lead to a single phenotype in patients who harbor it (genotype-phenotype correlation). This assumption is, however, false, as patients with sickle cell disease manifest a variety of pathophenotypes, including hemolytic anemia, stroke, acute chest syndrome, bony infarction, and painful crisis, as well as an overtly normal phenotype. The reasons for these different phenotypic presentations include the presence of disease-modifying genes or gene products (e.g., hemoglobin F, hemoglobin C, glucose-6-phosphate dehydrogenase), exposure to adverse environmental factors (e.g., hypoxia, dehydration), and the genetic and environmental determinants of common intermediate pathophenotypes (i.e., variations in those generic pathologic mechanisms underlying all human disease: inflammation, thrombosis/hemorrhage, fibrosis, cell proliferation, apoptosis/necrosis, immune response). A second example of note is familial pulmonary arterial hypertension. This disorder is associated with over 100 different mutations in three members of the transforming growth factor β (TGF-β) superfamily: bone morphogenetic protein receptor-2 (BMPR-2), activin receptor-like kinase-1 (Alk-1), and endoglin.
All these different genotypes are associated with a common pathophenotype, and each leads to that pathophenotype by molecular mechanisms that range from haploinsufficiency to dominant negative effects. As only approximately one-fourth of individuals in families that harbor these mutations manifest the pathophenotype, other disease-modifying genes (e.g., the serotonin receptor 5-HT2B, the serotonin transporter 5-HTT), genomic and environmental determinants of common intermediate pathophenotypes, and environmental exposures (e.g., hypoxia, infective agents [HIV], anorexigens) probably account for the incomplete penetrance of the disorder. On the basis of these and many other related examples, one can approach human disease from a systems pathobiology perspective in which each "disease" can be depicted as a network that includes the following modules: the primary disease-determining elements of the genome (or proteome, if posttranslationally modified), the disease-modifying elements of the genome or proteome, environmental determinants, and genomic and environmental determinants of the generic intermediate pathophenotypes. Figure 87e-2 graphically depicts these genotype-phenotype relationships as modules for the six common disease types, with specific examples for each type. Figure 87e-3 shows a network-based depiction of sickle cell disease using this kind of modular approach. Goh and colleagues developed the concept of a human disease network (Fig. 87e-4), in which they used a systems approach to characterize the disease-gene associations listed in the Online Mendelian Inheritance in Man database. Their analysis showed that genes linked to similar disorders are more likely to have products that physically associate and to show greater similarity between their transcription profiles than are genes not associated with similar disorders.
In addition, proteins associated with the same pathophenotype are significantly more likely to interact with one another than with other proteins not associated with the pathophenotype. Finally, these authors showed that the great majority of disease-associated genes are not highly connected genes (i.e., not hubs) and are typically weakly linked nodes within the functional periphery of the network in which they operate. This type of analysis validates the potential importance of defining disease on the basis of its systems pathobiologic determinants. Clearly, doing so will require a more careful dissection of the molecular elements in the relevant pathways (i.e., more precise molecular pathophenotyping), less reliance on overt manifestations of disease for their classification, and an understanding of the dynamics (not just the static architecture) of the pathobiologic networks that underlie pathophenotypes defined in this way. Figure 87e-5 illustrates the elements of a molecular network within which a disease module is contained. This network is first identified by determining the interactions (physical or regulatory) among the proteins or genes that comprise it (the "interactome"). These interactions then define a topologic module within which exist functional modules (pathways) and disease modules. One approach to constructing this module is illustrated in Fig. 87e-6. Examples of the use of this approach in defining novel determinants of disease are given in Table 87e-1.
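The module-construction idea can be sketched with a toy example: starting from seed genes, collect their direct interaction partners and keep the connected piece that contains the seeds. The protein names and interactions below are entirely hypothetical, and real pipelines operate on genome-scale interactomes with statistical validation:

```python
from collections import deque

# Hypothetical miniature interactome: protein -> set of interaction partners.
INTERACTOME = {
    "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B"},
    "D": {"B", "E"}, "E": {"D"}, "F": {"G"}, "G": {"F"},
}

def disease_module(interactome, seeds):
    """Grow a candidate disease module: the seeds plus their direct partners,
    restricted to the largest connected piece that contains a seed."""
    candidates = set(seeds)
    for s in seeds:
        candidates |= interactome.get(s, set())
    best, seen = set(), set()
    for start in seeds:
        if start in seen:
            continue
        component, queue = set(), deque([start])
        while queue:                      # breadth-first search over candidates
            node = queue.popleft()
            if node in component:
                continue
            component.add(node)
            queue.extend((interactome.get(node, set()) & candidates) - component)
        seen |= component
        if len(component) > len(best):
            best = component
    return best

print(sorted(disease_module(INTERACTOME, {"A", "E"})))  # -> ['A', 'B', 'C', 'D', 'E']
```

Note that F and G are excluded: they interact with each other but have no path to the seeds, so they fall outside the candidate module even though they belong to the same interactome.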
TABLE 87e-1 Examples of Network Analyses Defining Novel Determinants of Disease
Disease | Finding | Reference
Hereditary ataxias | Many ataxia-causing proteins share interacting partners that affect neurodegeneration | Lim et al: Cell 125:801-814, 2006
Diabetes mellitus | Metabolite-protein network analysis links three unique metabolite abnormalities in prediabetics to seven type 2 diabetes genes through four enzymes | Wang-Sattler et al: Mol Syst Biol 8:615, 2012
Epstein-Barr virus infection | Viral proteome exerts its effects through linking to host interactome | Gulbahce et al: PLoS One 8:e1002531, 2012
Pulmonary arterial hypertension | Network analysis indicates adaptive role for microRNA-21 in suppressing rho kinase pathway | Parikh et al: Circulation 125:1520-1532, 2012
FIGURE 87e-2 Examples of modular representations of human disease. [Panels depict classic Mendelian disorders and polygenic disorders with single phenotypes (e.g., essential hypertension) or multiple phenotypes (e.g., ischemic heart disease, subacute bacterial endocarditis).] D, secondary human disease genome or proteome; E, environmental determinants; G, primary human disease genome or proteome; I, intermediate phenotype; P, pathophenotype. (Reproduced with permission from J Loscalzo et al: Molec Syst Biol 3:124, 2007.)
As yet another potential consideration, one can argue that disease reflects the later-stage consequences of the predilection of an organ system to manifest a particular intermediate pathophenotype in response to injury. This paradigm reflects a reverse causality view in which a disease is defined as a tendency to heightened inflammation, thrombosis, or fibrosis after an injurious perturbation. Where the process is manifest (i.e., the organ in which it occurs) is less important than that it occurs (with the exception of the organ-specific pathophysiologic consequences that may require acute attention).
For example, from this perspective, acute myocardial infarction (AMI) and its consequences are a reflection of thrombosis (in the coronary artery), inflammation (in the acutely injured myocardium), and fibrosis (at the site or sites of cardiomyocyte death). In effect, the major therapies for AMI address these intermediate pathophenotypes (e.g., antithrombotics, statins) rather than any organ-specific disease-determining process. This paradigm would argue for a systems-based analysis that would first identify the intermediate pathophenotypes to which a person is predisposed, then determine how and when to intervene to attenuate that adverse predisposition, and finally limit the likelihood that a major organ-specific event will occur. Evidence for the validity of this approach is found in the work of Rzhetsky and colleagues, who reviewed 1.5 million patient records and 161 diseases and found that these disease phenotypes form a network of strong pairwise correlations. This result is consistent with the notion that underlying genetic predispositions to intermediate pathophenotypes form the predicate basis for conventionally defined end organ diseases. Regardless of the specific nature of the systems pathobiologic approach used, these analyses will lead to a drastic revision of the way human disease is defined and treated, establishing the discipline of network medicine. This will be a lengthy and complicated process but ultimately will lead to better disease prevention and therapy and probably do so from an increasingly personalized perspective. The analysis of pathobiology from a systems-based perspective is likely to help define specific subsets of patients more likely to respond to particular interventions based on shared disease mechanisms. 
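The kind of pairwise disease correlation computed by Rzhetsky and colleagues can be illustrated with the phi coefficient (the Pearson correlation of two binary indicators) on a handful of made-up records; the diagnoses and counts here are hypothetical, and the real analysis additionally handled confounding and multiple testing:

```python
from math import sqrt

# Hypothetical patient records: the set of diagnoses each patient carries.
records = [
    {"hypertension", "diabetes"},
    {"hypertension"},
    {"hypertension", "diabetes"},
    {"diabetes"},
    set(),
    {"hypertension", "diabetes"},
]

def phi(records, d1, d2):
    """Phi coefficient between two binary diagnosis indicators:
    phi = (n11*n00 - n10*n01) / sqrt(product of the four marginal totals)."""
    n11 = sum(1 for r in records if d1 in r and d2 in r)
    n10 = sum(1 for r in records if d1 in r and d2 not in r)
    n01 = sum(1 for r in records if d1 not in r and d2 in r)
    n00 = sum(1 for r in records if d1 not in r and d2 not in r)
    denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

print(phi(records, "hypertension", "diabetes"))  # -> 0.25
```

Thresholding such correlations over all disease pairs yields the phenotype network described in the text, whose strongly correlated pairs suggest shared underlying predispositions to intermediate pathophenotypes.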
Although it is unlikely that the extreme of "individualized medicine" will ever be practical (or even desirable), complex diseases can be mechanistically subclassified and interventions may be tailored to those settings in which they are more likely to work.
FIGURE 87e-3 A. Theoretical human disease network illustrating the relationships among genetic and environmental determinants of the pathophenotypes. Key: D, secondary disease genome or proteome; E, environmental determinants; G, primary disease genome or proteome; I, intermediate phenotype; PS, pathophysiologic states leading to P, pathophenotype. B. Example of this theoretical construct applied to sickle cell disease. Key: red, primary molecular abnormality; gray, disease-modifying genes; yellow, intermediate phenotypes; green, environmental determinants; blue, pathophenotypes. (Reproduced with permission from J Loscalzo et al: Molec Syst Biol 3:124, 2007.)
FIGURE 87e-4 A. Human disease network. Each node corresponds to a specific disorder colored by class (22 classes, shown in the key to B). The size of each node is proportional to the number of genes contributing to the disorder.
Edges between disorders in the same disorder class are colored with the same (lighter) color, and edges connecting different disorder classes are colored gray, with the thickness of the edge proportional to the number of genes shared by the disorders connected by it. B. Disease gene network. Each node is a single gene, and any two genes are connected if implicated in the same disorder. In this network map, the size of each node is proportional to the number of specific disorders in which the gene is implicated. (Reproduced with permission from KI Goh et al: Proc Natl Acad Sci USA 104:8685, 2007.)
FIGURE 87e-5 The elements of the interactome. The interactome includes topologic modules (genes or gene products that are closely associated with one another through direct interactions), functional modules (genes or gene products that work together to define a pathway), and disease modules (genes or gene products that interact to yield a pathophenotype). (Reproduced with permission from AL Barabasi et al: Nat Rev Genet 12:56, 2011.)
FIGURE 87e-6 Approaches to identifying disease modules in molecular networks.
A strategy for defining disease modules involves (i) reconstructing the interactome; (ii) ascertaining potential seed (disease) genes from the curated literature, the Online Mendelian Inheritance in Man (OMIM) database, or genomic analyses (genome-wide association studies [GWAS] or transcriptional profiling); (iii) identifying the disease module using different modeling or statistical approaches; (iv) identifying pathways and the role of disease genes or modules in those pathways; and (v) disease module validation and prediction. (Reproduced with permission from AL Barabasi et al: Nat Rev Genet 12:56, 2011.)
PART 4: Regenerative Medicine
Chapter 88 Stem Cell Biology
Minoru S. H. Ko
Stem cell biology is a rapidly expanding field that explores the characteristics and possible clinical applications of a variety of stem cells that serve as the progenitors of more differentiated cell types. In addition to potential therapeutic applications (Chap. 90e), patient-derived stem cells can also be used as disease models and as a means of testing drug efficacy. Stem cells and their niche are a major focus of medical research because they play central roles in tissue and organ homeostasis and repair, which are important aspects of aging and disease.
IDENTIFICATION, ISOLATION, AND DERIVATION OF STEM CELLS
Resident Stem Cells The definition of stem cells remains elusive. Stem cells were originally postulated as unspecified or undifferentiated cells that provide a source of renewal of skin, intestine, and blood cells throughout life. These resident stem cells have been identified in a variety of organs (e.g., epithelia of the skin and digestive system, bone marrow, blood vessels, brain, skeletal muscle, liver, testis, and pancreas) based on their specific locations, morphology, and biochemical markers.
Isolated Stem Cells Unequivocal identification of stem cells requires their separation and purification, usually based on a combination of specific cell-surface markers. These isolated stem cells (e.g., hematopoietic stem [HS] cells) can be studied in detail and used in clinical applications, such as bone marrow transplantation (Chap. 89e). However, the lack of specific cell-surface markers for other types of stem cells has made it difficult to isolate them in large quantities. This challenge has been partially addressed in animal models by genetically marking different cell types with green fluorescent protein driven by cell-specific promoters. Alternatively, putative stem cells have been isolated from a variety of tissues as side population (SP) cells using fluorescence-activated cell sorting after staining with the Hoechst 33342 dye.
Cultured Stem Cells It is desirable to culture and expand stem cells in vitro to obtain a sufficient quantity for analysis and potential therapeutic use. Although the derivation of stem cells in vitro has been a major obstacle in stem cell biology, the number and types of cultured stem cells have increased progressively (Table 88-1). Cultured stem cells derived from resident stem cells are often called adult stem cells or somatic stem cells to distinguish them from embryonic stem (ES) and embryonic germ (EG) cells. However, considering the existence of embryo-derived, tissue-specific stem cells (e.g., trophoblast stem [TS] cells) and the possible derivation of similar cells from an embryo/fetus (e.g., neural stem [NS] cells), it is more appropriate to use the term tissue stem cells. Successful derivation of cultured stem cells (both embryonic and tissue stem cells) often requires the identification of necessary growth factors and culture conditions that mimic the microenvironment, or niche, of the resident stem cells.
Recently, long-term maintenance of tissue stem cells in vitro has become increasingly possible by growing them as three-dimensional (3D) organoids, which contain both stem cells and niche cells (Chap. 92e). For example, intestinal stem cells can now be cultured as "epithelial mini-guts" in the presence of R-spondin, epidermal growth factor (EGF), and noggin on Matrigel. Similarly, lung stem cells can be cultured as self-renewing "alveolospheres." A growing, although not comprehensive, list of cultured stem cells is shown in Table 88-1. Note that the establishment of cultured stem cells is often disputed because of the difficulties in assessing the characteristics of these cells.
SELF-RENEWAL AND PROLIFERATION OF STEM CELLS
Symmetric and Asymmetric Cell Division The most widely accepted definition of a stem cell is a cell with a unique capacity to produce unaltered daughter cells (self-renewal) and to generate specialized cell types (potency). Self-renewal can be achieved in two ways. Asymmetric cell division produces one daughter cell that is identical to the parental cell and one daughter cell that is different from the parental cell and is a progenitor or differentiated cell. Asymmetric cell division does not increase the number of stem cells.
Symmetric cell division produces two identical daughter cells. For stem cells to proliferate in vitro, they must divide symmetrically.
TABLE 88-1 Cultured Stem Cells and Their Sources
Cell type | Source
Embryonic stem cells (ES, ESC) | Blastocysts or immunosurgically isolated inner cell mass (ICM) from blastocysts
Embryonic germ cells (EG, EGC) | Primordial germ cells (PGCs) from embryos at E8.5-E12.5 (m); gonadal tissues from 5-11 week postfertilization embryo/fetus (h)
Trophoblast stem cells (TS, TSC) | Trophectoderm of E3.5 blastocysts, extraembryonic ectoderm of E6.5 embryos, and chorionic ectoderm of E7.5 embryos (m)
Embryonal carcinoma cells (EC) | Teratocarcinoma, a type of cancer that develops in the testes and ovaries (m, h)
Mesenchymal stem cells (MS, MSC) | Bone marrow, muscle, adipose tissue, peripheral blood, and umbilical cord blood (m, h)
Multipotent adult progenitor cells (MAPC) | Bone marrow mononuclear cells (m, h); postnatal muscle and brain (m)
Spermatogonial stem cells (SS, SSC) | Newborn testis (m)
Germline stem cells (GS, GSC) | Neonatal testis (m)
Multipotent adult germline stem cells (maGSC) | Adult testis (m)
Neural stem cells (NS, NSC) | Fetal and adult brain (subventricular zone, ventricular zone, and hippocampus) (m, h)
Unrestricted somatic stem cells (USSC) | Mononuclear fraction of cord blood (h)
Epiblast stem cells (EpiSC) | Early postimplantation epiblast (m)
Induced pluripotent stem cells (iPS, iPSC) | Variety of terminally differentiated cells and tissue stem cells (m, h)
Lung stem cells | Lung (m, h)
Amniotic fluid-derived stem (AFS) cells | Amniotic fluid (m, h)
Umbilical cord blood stem cells | Umbilical cord (h)
Adipose stem cells (AST) | Fat (m, h)
Cardiac stem cells | Heart (m, h)
Renal stem cells | Renal papilla (m, h)
Crypt stem cells | Intestine (m, h)
Colon stem cells (CoSC) | Colon (m, h)
Hepatic stem cells | Liver (m, h)
Dental pulp stem cells (DPSC) | Dental pulp (m, h)
Hair follicle stem cells | Hair (m, h)
Abbreviations: h, human; m, mouse.
Unlimited Expansion In Vitro Resident stem cells are often quiescent and divide infrequently.
However, once the stem cells are successfully cultured in vitro, they often acquire the capacity to divide continuously and the ability to proliferate beyond the normal passage limit typical of primary cultured cells (sometimes called immortality). These features are primarily seen in ES cells but have also been demonstrated for tissue stem cells, such as NS cells and mesenchymal stem (MS) cells, thereby enhancing the potential of these cells for therapeutic use (Table 88-1). Stability of Genotype and Phenotype The capacity to actively proliferate is often associated with the accumulation of chromosomal abnormalities and mutations. Mouse ES cells appear to be an exception to this rule and tend to maintain their euploid karyotype and genome integrity. By contrast, human ES cells appear to be more susceptible to mutations after long-term culture. However, it is also important to note that even euploid mouse ES cells can form teratomas when injected into immunosuppressed animals, raising concerns about the possible formation of tumors after transplanting actively dividing stem cells. POTENCY AND DIFFERENTIATION OF STEM CELLS Developmental Potency The term potency is used to indicate a cell’s ability to differentiate into multiple specialized cell types. The current lack of knowledge about the molecular nature of potency requires the experimental manipulation of stem cells to demonstrate their potency. For example, in vivo testing can be done by injecting stem cells into mouse blastocysts or immunosuppressed adult mice and determining how many different cell types are formed from the injected cells. However, these in vivo assays are not applicable to human stem cells. In vitro testing can be performed by differentiating cells in various culture conditions to determine how many different cell types are formed from the cells. The formal test of self-renewal and potency is performed by demonstrating that a single cell possesses such abilities in vitro (clonality). 
Cultured stem cells are tentatively grouped according to their potency (Fig. 88-1). Only some examples are shown, because many cultured stem cells, especially human cells, lack definitive information about their developmental potency.
FIGURE 88-1 Potency and source developmental stage of cultured stem cells. For abbreviations of stem cells, see Table 88-1. Note that stem cells are often abbreviated with or without "cells," e.g., ES cells or ESCs for embryonic stem cells. h, human; m, mouse.
From Totipotency to Unipotency Totipotent cells can form an entire organism autonomously. Only a fertilized egg (zygote) possesses this feature. Pluripotent cells (e.g., ES cells) can form almost all of the body's cell lineages (endoderm, mesoderm, and ectoderm), including germ cells. Multipotent cells (e.g., HS cells) can form multiple cell lineages but cannot form all of the body's cell lineages. Oligopotent cells (e.g., NS cells) can form more than one cell lineage but are more restricted than multipotent cells. Oligopotent cells are sometimes called progenitor cells or precursor cells; however, these terms are often used more strictly to define partially differentiated or lineage-committed cells (e.g., myeloid progenitor cells) that can divide into different cell types but lack self-renewing capacity. Unipotent cells, or monopotent cells (e.g., spermatogonial stem [SS] cells), can form a single differentiated cell lineage.
Nuclear Reprogramming Development naturally progresses from totipotent fertilized eggs to pluripotent epiblast cells to multipotent cells and, finally, to terminally differentiated cells. According to Waddington's epigenetic landscape, this is analogous to a ball moving down a slope. The reversal of terminally differentiated cells to totipotent or pluripotent cells (called nuclear reprogramming) can thus be seen as moving uphill against this gradient.
Nuclear reprogramming has been achieved using nuclear transplantation, or nuclear transfer (NT), procedures (often called "cloning"), in which the nucleus of a differentiated cell is transferred into an enucleated oocyte. Although this is an error-prone procedure with a very low success rate, live animals have been produced using adult somatic cells as donors in sheep, mice, and other mammals. In mice, it has been demonstrated that ES cells derived from blastocysts made by somatic cell NT are indistinguishable from normal ES cells. NT can potentially be used to produce patient-specific ES cells carrying a genome identical to that of the patient, although such strategies have not been pursued because of ethical issues and technical challenges. Recent success in generating human ES cells by NT has rekindled interest in this area; however, the limited supply of human oocytes will remain a major problem for clinical applications of NT. An alternative approach that has become a method of choice is the direct conversion of terminally differentiated cells into ES-like cells (called induced pluripotent stem [iPS] cells) by overexpressing a combination of key transcription factors (TFs). The original method was to infect mouse embryonic fibroblast cells with retrovirus vectors carrying four TFs [Pou5f1 (Oct4), Sox2, Klf4, and Myc] and to identify rare ES-like cells in culture. This approach was soon adapted to human cells and followed by more refined procedures (e.g., the use of fewer TFs, different cell types, and different gene-delivery methods). Because a clinical trial using iPS cells is imminent, the safety of iPS-based therapy is a major concern, and a variety of measures are being taken to ensure safety.
For example, it has now become standard to use footprint-free methods, such as episomal vectors, Sendai virus vectors, and synthetic mRNAs, to deliver reprogramming factors into cells, resulting in the production of patient-specific iPS cells with minimal alteration of their genetic makeup. In addition to cell replacement therapy, disease-specific iPS cells are expected to play a role in modeling human disease in vitro and in screening drugs for personalized medicine. It has also become possible to convert one type of terminally differentiated cell (e.g., a fibroblast) into another type of terminally differentiated cell (e.g., a cardiac muscle cell, neuron, or hepatocyte) by overexpressing specific sets of TFs (called direct reprogramming). Direct reprogramming can bypass the step of making iPS cells, possibly providing a safer route to desired cell types for therapy; however, the technology is currently limited by its low efficiency.
Stem Cell Plasticity, Transdifferentiation, and Facultative Stem Cells The prevailing paradigm in developmental biology is that once cells are differentiated, their phenotypes are stable. However, more recent studies show that tissue stem cells, which have traditionally been thought to be lineage-committed multipotent cells, possess the capacity to differentiate into cell types outside their lineage restrictions (called transdifferentiation). For example, HS cells may be converted into neurons as well as germ cells. This feature may provide a means to use tissue stem cells derived directly from a patient for therapeutic purposes, thereby eliminating the need to use embryonic stem cells or elaborate procedures such as nuclear reprogramming of a patient's somatic cells. However, stricter criteria and rigorous validation are required to establish tissue stem cell plasticity.
For example, observations of transdifferentiation may reflect cell fusion, contamination with progenitor cells from other cell lineages, or persistence of pluripotent embryonic cells in adult organs. Therefore, the assignment of potency to each cultured stem cell in Fig. 88-1 should be considered with caution. Whether transdifferentiation exists and can be used for therapeutic purposes remains to be determined conclusively. A similar, but distinct, concept is the facultative stem cell, which is defined as a unipotent cell or a terminally differentiated cell that can function as a stem cell upon tissue injury. The presence of such cells has been proposed for some organs, such as liver, intestine, pancreas, and testis, but is still debated. 
Directed Differentiation of Stem Cells Pluripotent stem cells (e.g., ES and iPS cells) can differentiate into multiple cell types, but in culture, they normally differentiate into heterogeneous cell populations in a stochastic manner. However, for therapeutic uses, it is desirable to direct stem cells into specific cell types (e.g., insulin-secreting beta cells). This is an active area of stem cell research, and protocols are being developed to achieve this goal. In any of these directed cell differentiation systems, the cell phenotype must be evaluated critically. Alternatively, the heterogeneity of the cell population derived from pluripotent stem cells can be actively exploited, as different types of cells interact with each other in culture and further enhance their own differentiation. In some instances (e.g., the optic cup), self-organizing tissue morphogenesis has been demonstrated in 3D culture. 
MOLECULAR CHARACTERIZATION OF STEM CELLS 
Genomics and Proteomics In addition to standard molecular biological approaches, high-throughput genomics and proteomics have been extensively applied to the analysis of stem cells. 
For example, DNA microarray analyses have revealed the expression levels of essentially all genes and identified specific markers for some stem cells. Chromatin immunoprecipitation coupled with next-generation sequencing technologies, capable of producing billions of sequence reads in a single run, has revealed chromatin modifications (“epigenetic marks”) relevant to stem cell properties. Similarly, the protein profiles of stem cells have been assessed by using mass spectrometry. These methods are beginning to provide a novel means to characterize and classify various stem cells and the molecular mechanisms that give them their unique characteristics. 
ES Cell Regulation It is important to identify genes involved in the regulation of stem cell function and to examine the effects of altered gene expression on ES and other stem cells. For example, core networks of TFs, such as Pou5f1 (Oct4), Nanog, and Sox2, govern key gene regulatory pathways/networks for the maintenance of self-renewal and pluripotency of mouse and human ES cells. These TF networks are modulated by specific external factors through signal transduction pathways, such as leukemia inhibitory factor (Lif)/Stat3, mitogen-activated protein kinase 1/3 (Mapk1/3), the transforming growth factor β (TGF-β) superfamily, and Wnt/glycogen synthase kinase 3 beta (Gsk3b). Inhibitors of Mapk1/3 and Gsk3b signaling enhance the derivation of ES cells and help maintain ES cells in full pluripotency (the “ground” or “naive” state). Recent data also indicate that 20- to 25-nucleotide RNAs, called microRNAs (miRNAs), play an important role in regulating stem cell function by repressing the translation of their target genes. For example, it has been shown that miR-21 regulates cell cycle progression in ES cells and miR-128 prevents the differentiation of hematopoietic progenitor cells. 
These types of analyses should provide molecular clues about the function of stem cells and lead to a more effective means to manipulate stem cells for future therapeutic use. 
Hematopoietic Stem Cells 
David T. Scadden, Dan L. Longo 
All of the cell types in the peripheral blood and some cells in every tissue of the body are derived from hematopoietic (hemo: blood; poiesis: creation) stem cells. If the hematopoietic stem cell is damaged and can no longer function (e.g., due to a nuclear accident), a person would survive 2–4 weeks in the absence of extraordinary support measures. With the clinical use of hematopoietic stem cells, tens of thousands of lives are saved each year (Chap. 139e). Stem cells produce hundreds of billions of blood cells daily from a stem cell pool that is estimated to be only in the tens of thousands. How stem cells do this, how they persist for many decades despite the production demands, and how they may be better used in clinical care are important issues in medicine. 
The study of blood cell production has become a paradigm for how other tissues may be organized and regulated. Basic research in hematopoiesis includes defining stepwise molecular changes accompanying functional changes in maturing cells, aggregating cells into functional subgroups, and demonstrating hematopoietic stem cell regulation by a specialized microenvironment; these concepts are worked out in hematology, but they offer models for other tissues. Moreover, these concepts may not be restricted to normal tissue function but extend to malignancy. Stem cells are rare cells among a heterogeneous population of cell types, and their behavior is assessed mainly in experimental animal models involving reconstitution of hematopoiesis. Thus, much of what we know about stem cells is imprecise and based on inferences from genetically manipulated animals. 
All stem cell types have two cardinal functions: self-renewal and differentiation (Fig. 89e-1). Stem cells exist to generate, maintain, and repair tissues. They function successfully if they can replace a wide variety of shorter-lived mature cells over prolonged periods. The process of self-renewal (see below) assures that a stem cell population can be sustained over time. Without self-renewal, the stem cell pool would become exhausted and tissue maintenance would not be possible. The process of differentiation leads to production of the effectors of tissue function: mature cells. Without proper differentiation, the integrity of tissue function would be compromised and organ failure would ensue. 
FIGURE 89e-1 Signature characteristics of the stem cell. Stem cells have two essential features: the capacity to differentiate into a variety of mature cell types and the capacity for self-renewal. Intrinsic factors associated with self-renewal include expression of Bmi-1, Gfi-1, PTEN, STAT5, Tel/Atv6, p21, p18, MCL-1, Mel-18, RAE28, and HoxB4. Extrinsic signals for self-renewal include Notch, Wnt, SHH, and Tie2/Ang-1. Based mainly on murine studies, hematopoietic stem cells express the following cell surface molecules: CD34, Thy-1 (CD90), c-Kit receptor (CD117), CD133, CD164, and c-Mpl (CD110, also known as the thrombopoietin receptor). 
In the blood, mature cells have variable average life spans, ranging from 7 h for mature neutrophils to a few months for red blood cells to many years for memory lymphocytes. However, the stem cell pool is the central, durable source of all blood and immune cells, maintaining a capacity to produce a broad range of cells from a single cell source, yet keeping itself vigorous over decades of life. As an individual stem cell divides, it has the capacity to accomplish one of three division outcomes: two stem cells, two cells destined for differentiation, or one stem cell and one differentiating cell. 
The former two outcomes are the result of symmetric cell division, whereas the latter indicates a different outcome for the two daughter cells—an event termed asymmetric cell division. The relative balance of these types of outcomes may change during development and under particular kinds of demands on the stem cell pool. During development, blood cells are produced at different sites. Initially, the yolk sac provides oxygen-carrying red blood cells, and then the placenta and several sites of intraembryonic blood cell production become involved. These intraembryonic sites engage in sequential order, moving from the genital ridge at a site where the aorta, gonadal tissue, and mesonephros are emerging to the fetal liver and then, in the second trimester, to the bone marrow and spleen. As the location of stem cells changes, the cells they produce also change. The yolk sac provides red cells expressing embryonic hemoglobins, while intraembryonic sites of hematopoiesis generate red cells, platelets, and the cells of innate immunity. The production of the cells of adaptive immunity occurs when the bone marrow is colonized and the thymus forms. Stem cell proliferation remains high, even in the bone marrow, until shortly after birth, when it appears to decline dramatically. The cells in the bone marrow are thought to arrive by the bloodborne transit of cells from the fetal liver after calcification of the long bones has begun. The presence of stem cells in the circulation is not unique to a time window in development; rather, hematopoietic stem cells appear to circulate throughout life. The time that cells spend freely circulating appears to be brief (measured in minutes in the mouse), but the cells that do circulate are functional and can be used for transplantation. The number of stem cells that circulate can be increased in a number of ways to facilitate harvest and transfer to the same or a different host. 
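The symmetric and asymmetric division outcomes described above can be caricatured in a toy stochastic model. The probabilities and cell counts below are arbitrary illustrative values, not measured biological rates:

```python
import random

def simulate_divisions(n_stem, n_divisions, p_sym_renew, p_sym_diff, seed=0):
    """Toy model of stem cell division outcomes. Each division yields
    two stem cells (symmetric self-renewal), two differentiating cells
    (symmetric differentiation), or one of each (asymmetric division).
    All parameters are illustrative, not measured rates."""
    rng = random.Random(seed)
    stem, differentiating = n_stem, 0
    for _ in range(n_divisions):
        r = rng.random()
        if r < p_sym_renew:                  # two stem cells: pool grows
            stem += 1
        elif r < p_sym_renew + p_sym_diff:   # two differentiating cells: pool shrinks
            stem -= 1
            differentiating += 2
        else:                                # asymmetric: pool size unchanged
            differentiating += 1
    return stem, differentiating

# When the two symmetric outcomes are equally likely, the stem cell pool
# performs a random walk around its starting size while the
# differentiating compartment grows steadily.
print(simulate_divisions(1000, 10000, 0.1, 0.1))
```

In this sketch, tilting the balance toward symmetric self-renewal expands the pool (as during development), whereas tilting it toward symmetric differentiation depletes the pool, which is why the relative balance of outcomes matters under demand.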
Cells entering and exiting the bone marrow do so through a series of molecular interactions. Circulating stem cells (through CD162 and CD44) engage the lectins (carbohydrate-binding proteins) P- and E-selectin on the endothelial surface to slow the movement of the cells to a rolling phenotype. Stem cell integrins are then activated and accomplish firm adhesion between the stem cell and vessel wall, with a particularly important role for stem cell VLA-4 engaging endothelial VCAM-1. The chemokine CXCL12 (SDF1) interacting with stem cell CXCR4 receptors and ionic calcium interacting with the calcium-sensing receptor appear to be important in the process by which stem cells move from the circulation to the sites where they engraft in the bone marrow. This is particularly true in the developmental move from fetal liver to bone marrow. However, the role for CXCR4 in adults appears to be related more to the retention of stem cells in the bone marrow than to the process of getting them there. Interrupting that retention process through specific molecular blockers of the CXCR4/CXCL12 interaction, cleavage of CXCL12, or downregulation of the CXCR4 receptor can result in the release of stem cells into the circulation. This process is an increasingly important aspect of recovering stem cells for therapeutic use because it has permitted the harvesting process to be done by leukapheresis rather than bone marrow punctures in the operating room. Granulocyte colony-stimulating factor and plerixafor, a macrocyclic compound that can block CXCR4, are both used clinically to mobilize marrow hematopoietic stem cells for transplant. Refining our knowledge of how stem cells get into and out of the bone marrow may improve our ability to obtain stem cells and make them more efficient at finding their way to the specific sites for blood cell production, the so-called stem cell niche. 
The concept of a specialized microenvironment, or stem cell niche, was first proposed to explain why cells derived from the bone marrow of one animal could be used in transplantation and again be found in the bone marrow of the recipient. This niche is more than just a housing site for stem cells, however. It is an anatomic location where regulatory signals are provided that allow the stem cells to thrive, to expand if needed, and to provide varying amounts of descendant daughter cells. In addition, unregulated growth of stem cells may be problematic given their undifferentiated state and self-renewal capacity. Thus, the niche must also regulate the number of stem cells produced. In this manner, the niche has the dual function of serving as a site of nurture while imposing limits on stem cells: in effect, acting as both a nutritive and a constraining home. The niche for blood stem cells changes with each of the sites of blood production during development, but for most of human life it is located in the bone marrow. Within the bone marrow, the perivascular space, particularly in regions of trabecular bone, serves as a niche. The mesenchymal and endothelial cells of the marrow microvessels produce kit ligand and CXCL12, both known to be important for hematopoietic stem cells. Other cell types, such as sympathetic neurons, nonmyelinating Schwann cells, macrophages, osteoclasts, and osteoblasts, have been shown to regulate stem cells, but it is unclear whether their effects are direct or indirect. Extracellular matrix proteins like osteopontin also affect stem cell function. The endosteal region is particularly important for transplanted cells, suggesting that this region has distinctive, yet-to-be-defined features that are important mediators of stem cell engraftment. The functioning of the niche as a supportive context for stem cells is of obvious importance for maintaining hematopoiesis and in transplantation. 
An active area of study involves determining whether the niche is altered in disease and whether drugs can modify niche function to improve transplantation or normal stem cell function in hematologic disease. In the absence of disease, one never runs out of hematopoietic stem cells. Indeed, serial transplantation studies in mice suggest that sufficient stem cells are present to reconstitute several animals in succession, with each animal having normal blood cell production. The fact that allogeneic stem cell transplant recipients also never run out of blood cells in their life span, which can extend for decades, argues that even the limiting numbers of stem cells provided to them are sufficient. How stem cells respond to different conditions to increase or decrease their mature cell production remains poorly understood. Clearly, negative feedback mechanisms affect the level of production of most of the cells, leading to the normal tightly regulated blood cell counts. However, many of the regulatory mechanisms that govern production of more mature progenitor cells do not apply or apply differently to stem cells. Similarly, most of the molecules shown to be able to change the size of the stem cell pool have little effect on more mature blood cells. For example, the growth factor erythropoietin, which stimulates red blood cell production from more mature precursor cells, has no effect on stem cells. Similarly, granulocyte colony-stimulating factor drives the rapid proliferation of granulocyte precursors but has little or no effect on the cell cycling of stem cells. Rather, it changes the location of stem cells by indirect means, altering molecules such as CXCL12 that tether stem cells to their niche. 
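The contrast drawn above, tight negative feedback on mature cell output versus stem cells that are largely insensitive to those signals, can be sketched as a minimal difference-equation model of one mature cell compartment. All parameters are arbitrary illustrative values:

```python
def feedback_counts(target, gain, loss_rate, steps, count=0.0):
    """Toy negative feedback on mature blood cell numbers: production is
    driven by the deficit below a set point (much as erythropoietin
    output rises when red cell mass falls), while mature cells are lost
    at a constant fractional rate. All parameters are illustrative."""
    history = []
    for _ in range(steps):
        production = gain * max(target - count, 0.0)  # deficit-driven drive
        count = count + production - loss_rate * count
        history.append(count)
    return history

history = feedback_counts(target=100.0, gain=0.5, loss_rate=0.1, steps=200)
# The count settles where production balances loss:
#   gain * (target - c) = loss_rate * c  =>  c = gain * target / (gain + loss_rate)
```

In this sketch, only the mature compartment is under feedback control; a stem cell pool, as described in the text, would sit upstream and stay essentially constant regardless of these fluctuations.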
Molecules shown to be important for altering the proliferation, self-renewal, or survival of stem cells, such as cyclin-dependent kinase inhibitors, transcription factors like Bmi-1, or microRNA-processing enzymes like Dicer, have little or different effects on progenitor cells. Hematopoietic stem cells have governing mechanisms that are distinct from the cells they generate. Hematopoietic stem cells sit at the base of a branching hierarchy of cells culminating in the many mature cell types that compose the blood and immune system (Fig. 89e-2). The maturation steps leading to terminally differentiated and functional blood cells take place both as a consequence of intrinsic changes in gene expression and niche-directed and cytokine-directed changes in the cells. Our knowledge of the details remains incomplete. As stem cells mature to progenitors, precursors, and, finally, mature effector cells, they undergo a series of functional changes. These include the obvious acquisition of functions defining mature blood cells, such as phagocytic capacity or hemoglobin synthesis. They also include the progressive loss of plasticity (i.e., the ability to become other cell types). For example, the myeloid progenitor can make all cells in the myeloid series but none in the lymphoid series. As common myeloid progenitors mature, they become precursors for either monocytes and granulocytes or erythrocytes and megakaryocytes, but not both. Some amount of reversibility of this process may exist early in the differentiation cascade, but that is lost beyond a distinct stage in normal physiologic conditions. With genetic interventions, however, blood cells, like other somatic cells, can be reprogrammed to become a variety of cell types. As cells differentiate, they may also lose proliferative capacity (Fig. 89e-3). Mature granulocytes are incapable of proliferation and only increase in number by increased production from precursors. 
The exceptions to the rule are some resident macrophages, which appear capable of proliferation, and lymphoid cells. Lymphoid cells retain the capacity to proliferate but have linked their proliferation to the recognition of particular proteins or peptides by specific antigen receptors on their surface. As in many tissues with short-lived mature cells, such as the skin and intestine, blood cell proliferation is largely accomplished by a more immature progenitor population. In general, cells within the highly proliferative progenitor cell compartment are also relatively short-lived, making their way through the differentiation process in a defined molecular program involving the sequential activation of particular sets of genes. For any particular cell type, the differentiation program is difficult to speed up. The time it takes for hematopoietic progenitors to become mature cells is ~10–14 days in humans, evident clinically by the interval between cytotoxic chemotherapy and blood count recovery in patients. Although hematopoietic stem cells are generally thought to have the capacity to form all cells of the blood, it is becoming clear that individual stem cells may not be equal in their differentiation potential. That is, some stem cells are “biased” to become mature cells of a particular type. In addition, the general concept of cells having a binary choice of lymphoid or myeloid differentiation is not entirely accurate. A cell population with limited myeloid (monocyte and granulocyte) and lymphoid potential has now been added to the commitment steps stem cells may undergo. The hematopoietic stem cell must balance its three potential fates: apoptosis, self-renewal, and differentiation. The proliferation of cells is generally not associated with the ability to undergo a self-renewing division except among memory T and B cells and among stem cells. 
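The amplification that this proliferative progenitor compartment must supply can be estimated from the order-of-magnitude figures quoted earlier in the chapter (hundreds of billions of cells daily from a pool in the tens of thousands). The two inputs below are illustrative round values, not measurements:

```python
import math

# Order-of-magnitude estimate of the expansion between the stem cell
# pool and daily mature blood cell output. Both figures are illustrative
# round values taken from the ranges quoted in the text.
daily_output = 2e11   # "hundreds of billions" of blood cells per day
stem_pool = 5e4       # stem cell pool "in the tens of thousands"

amplification = daily_output / stem_pool
doublings = math.log2(amplification)
print(f"~{amplification:.0e}-fold amplification, about {doublings:.0f} doublings")
```

A few-million-fold expansion corresponds to only ~20 rounds of division, which short-lived, rapidly cycling progenitors can supply without the stem cells themselves proliferating rapidly.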
Self-renewal capacity gives way to differentiation as the only option after cell division when cells leave the stem cell compartment, until they have the opportunity to become memory lymphocytes. In addition to this self-renewing capacity, stem cells have an additional feature characterizing their proliferation machinery. Stem cells in many mature adult tissues may be heterogeneous, with some being deeply quiescent, serving as a deep reserve, whereas others are more proliferative and replenish the short-lived progenitor population. In the hematopoietic system, stem cells are generally cytokine-resistant, remaining dormant even when cytokines drive bone marrow progenitors to proliferation rates measured in hours. Stem cells, in contrast, are thought to divide at far longer intervals, measured in months to years, for the most quiescent cells. This quiescence is difficult to overcome in vitro, limiting the ability to effectively expand human hematopoietic stem cells. The process may be controlled by particularly high levels of cyclin-dependent kinase inhibitors like p57 or CDKN1c that restrict entry of stem cells into the cell cycle, blocking the G1-S transition. Exogenous signals from the niche also appear to enforce quiescence, including the activation of the tyrosine kinase receptor Tie2 on stem cells by angiopoietin 1 on niche cells. The regulation of stem cell proliferation also appears to change with age. In mice, the cyclin-dependent kinase inhibitor p16INK4a accumulates in stem cells in older animals and is associated with a change in five different stem cell functions, including cell cycling. 
FIGURE 89e-2 Hierarchy of hematopoietic differentiation. Stem cells are multipotent cells that are the source of all descendant cells and have the capacity to provide either long-term (measured in years) or short-term (measured in months) cell production. Progenitor cells have a more limited spectrum of cells they can produce and are generally a short-lived, highly proliferative population also known as transient amplifying cells. Precursor cells are cells committed to a single blood cell lineage but with a continued ability to proliferate; they do not have all the features of a fully mature cell. Mature cells are the terminally differentiated product of the differentiation process and are the effector cells of specific activities of the blood and immune system. Progress through the pathways is mediated by alterations in gene expression. The regulation of the differentiation by soluble factors and cell-cell communications within the bone marrow niche are still being defined. The transcription factors that characterize particular cell transitions are illustrated on the arrows; the soluble factors that contribute to the differentiation process are in blue. This picture is a simplification of the process. Active research is revealing multiple discrete cell types in the maturation of B cells and T cells and has identified cells that are biased toward one lineage or another (rather than uncommitted) in their differentiation. EPO, erythropoietin; RBC, red blood cell; SCF, stem cell factor; TPO, thrombopoietin. 
Lowering expression of p16INK4a in older animals improves stem cell cycling and capacity to reconstitute hematopoiesis in adoptive hosts, making them similar to younger animals. Mature cell numbers are unaffected. Therefore, molecular events governing the specific functions of stem cells are being gradually made clear and offer the potential of new approaches to changing stem cell function for therapy. One critical stem cell function that remains poorly defined is the molecular regulation of self-renewal. For medicine, self-renewal is perhaps the most important function of stem cells because it is critical in regulating the number of stem cells. Stem cell number is a key limiting parameter for both autologous and allogeneic stem cell transplantation. Were we to have the ability to use fewer stem cells or expand limited numbers of stem cells ex vivo, it might be possible to reduce the morbidity and expense of stem cell harvests and enable use of other stem cell sources. Specifically, umbilical cord blood is a rich source of stem cells. However, the volume of cord blood units is extremely small, and therefore, the total number of hematopoietic stem cells that can be obtained in any single cord blood unit is generally only sufficient to transplant an individual of <40 kg. This limitation restricts what would otherwise be an extremely promising source of stem cells. Two features of cord blood stem cells are particularly important. (1) They are derived from a diversity of individuals that far exceeds the adult donor pool and therefore can overcome the majority of immunologic cross-matching obstacles. (2) Cord blood stem cells have a large number of T cells associated with them, but (paradoxically) they appear to be associated with a lower incidence of graft-versus-host disease when compared with similarly mismatched stem cells from other sources. 
FIGURE 89e-3 Relative function of cells in the hematopoietic hierarchy. The boxes represent distinct functional features of cells in the myeloid (upper box) versus lymphoid (lower box) lineages. 
If stem cell expansion by self-renewal could be achieved, the number of cells available might be sufficient for use in larger adults. An alternative approach to this problem is to improve the efficiency of engraftment of donor stem cells. Graft engineering is exploring methods of adding cell components that may enhance engraftment. Furthermore, at least some data suggest that depletion of host NK (natural killer) cells may lower the number of stem cells necessary to reconstitute hematopoiesis. Some limited understanding of self-renewal exists and, intriguingly, implicates gene products that are associated with the chromatin state, a high-order organization of chromosomal DNA that influences transcription. These include members of the polycomb family, a group of zinc finger–containing transcriptional regulators that interact with the chromatin structure, contributing to the accessibility of groups of genes for transcription. One member, Bmi-1, is important in enabling hematopoietic stem cell self-renewal through modification of cell cycle regulators such as the cyclin-dependent kinase inhibitors. In the absence of Bmi-1 or of the transcriptional regulator Gfi-1, hematopoietic stem cells decline in number and function. In contrast, dysregulation of Bmi-1 has been associated with leukemia; it may promote leukemic stem cell self-renewal when it is overexpressed. Other transcription regulators have also been associated with self-renewal, particularly homeobox, or “hox,” genes. These transcription factors are named for their ability to govern large numbers of genes, including those determining body patterning in invertebrates. HoxB4 is capable of inducing extensive self-renewal of stem cells through its DNA-binding motif. Other members of the hox family of genes have been noted to affect normal stem cells, but they are also associated with leukemia. 
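The <40-kg limitation of a single cord blood unit noted above follows from simple dose arithmetic. The minimum dose used here (~2.5 × 10^7 total nucleated cells per kilogram of recipient) is a commonly cited clinical threshold, and the per-unit cell content is an illustrative assumption:

```python
# Back-of-the-envelope cord blood dosing. The minimum total nucleated
# cell (TNC) dose (~2.5e7 cells/kg) is a commonly cited clinical
# threshold; the unit content (~1e9 TNC) is an illustrative assumption.
def max_recipient_weight_kg(unit_tnc, min_dose_per_kg=2.5e7):
    """Largest recipient a single unit can serve at the minimum dose."""
    return unit_tnc / min_dose_per_kg

print(max_recipient_weight_kg(1.0e9))  # ~40 kg, consistent with the limit in the text
```

The same arithmetic shows why ex vivo self-renewal, or combining units, would extend cord blood transplantation to larger adults.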
External signals that may influence the relative self-renewal versus differentiation outcomes of stem cell cycling include specific Wnt ligands. Intracellular signal transducing intermediates are also implicated in regulating self-renewal. They include PTEN, an inhibitor of the AKT pathway, and STAT5, both of which are downstream of activated growth factor receptors and necessary for normal stem cell functions including self-renewal, at least in mouse models. The connections between these molecules remain to be defined, and their role in physiologic regulation of stem cell self-renewal is still poorly understood. The relationship of stem cells to cancer is an important evolving dimension of adult stem cell biology. Cancer may share principles of organization with normal tissues. Cancer cells are heterogeneous even within a given patient and may have a hierarchical organization of cells with a base of stem-like cells capable of the signature stem cell features: self-renewal and differentiation. These stem-like cells might be the basis for perpetuation of the tumor and represent a slowly dividing, rare population with distinct regulatory mechanisms, including a relationship with a specialized microenvironment. A subpopulation of self-renewing cells has been defined for some, but not all, cancers. A more sophisticated understanding of the stem cell organization of cancers may lead to improved strategies for developing new therapies for the many common and difficult-to-treat types of malignancies that have been relatively refractory to interventions aimed at dividing cells. Does the concept of cancer stem cells provide insight into the cellular origin of cancer? The fact that some cells within a cancer have stem cell–like properties does not necessarily mean that the cancer arose in the stem cell itself. Rather, more mature cells could have acquired the self-renewal characteristics of stem cells. 
Any single genetic event is unlikely to be sufficient to enable full transformation of a normal cell to a frankly malignant one. Rather, cancer is a multistep process, and for the multiple steps to accumulate, the cell of origin must be able to persist for prolonged periods. It must also be able to generate large numbers of daughter cells. The normal stem cell has these properties and, by virtue of its having intrinsic self-renewal capability, may be more readily converted to a malignant phenotype. This hypothesis has been tested experimentally in the hematopoietic system. Taking advantage of the cell-surface markers that distinguish hematopoietic cells of varying maturity, stem cells, progenitors, precursors, and mature cells can be isolated. Powerful transforming gene constructs were placed in these cells, and it was found that the cell with the greatest potential to produce a malignancy depended on the transforming gene. In some cases, it was the stem cell, but in others, the progenitor cell functioned to initiate and perpetuate the cancer. This shows that cells can acquire stem cell–like properties in malignancy. 
WHAT ELSE CAN HEMATOPOIETIC STEM CELLS DO? 
Some experimental data have suggested that hematopoietic stem cells or other cells mobilized into the circulation by the same factors that mobilize hematopoietic stem cells are capable of playing a role in healing the vascular and tissue damage associated with stroke and myocardial infarction. These data are controversial, and the applicability of a stem cell approach to nonhematopoietic conditions remains experimental. However, reprogramming technology offers the potential for using the readily obtained hematopoietic stem cell as a source for cells with other capabilities. The stem cell, therefore, represents a true double-edged sword. It has tremendous healing capacity and is essential for life. Uncontrolled, it can threaten the life it maintains. 
Understanding how stem cells function, the signals that modify their behavior, and the tissue niches that modulate stem cell responses to injury and disease is critical for more effectively developing stem cell–based medicine. That aspect of medicine will include the use of the stem cells and the use of drugs to target stem cells to enhance repair of damaged tissues. It will also include the careful balance of interventions to control stem cells where they may be dysfunctional or malignant. 
Applications of Stem Cell Biology in Clinical Medicine 
John A. Kessler 
Damage to an organ initiates a series of events that lead to the reconstruction of the damaged tissue, including proliferation, differentiation, and migration of various cell types; release of cytokines and chemokines; and remodeling of the extracellular matrix. Endogenous stem and progenitor cells are among the cell populations that are involved in these injury responses. In normal steady-state conditions, an equilibrium is maintained in which endogenous stem cells intrinsic to the tissue replenish dying cells. After tissue injury, stem cells in organs such as the liver and skin have a remarkable ability to regenerate the organ, whereas other stem cell populations, such as those in the heart and brain, have a much more limited capability for self-repair. 
FIGURE 90e-1 Strategies for transplantation of stem cells. 1. Undifferentiated or partially differentiated stem cells may be injected directly into the target organ or intravenously. 2. Stem cells may be differentiated ex vivo before injection into the target organ. 3. Growth factors or other drugs may be injected to stimulate endogenous stem cell populations. 
In rare circumstances, circulating stem cells may contribute to regenerative responses by migrating into a tissue and differentiating into organ-specific cell types. The goal of stem cell therapies is to promote cell replacement in organs that are damaged beyond their ability to self-repair. At least three different therapeutic concepts for cell replacement can be envisaged (Fig. 90e-1). One therapeutic approach involves direct administration of stem cells. The cells may be injected directly into the damaged organ, where they can differentiate into the desired cell type. Alternatively, stem cells may be injected systemically since they have the capacity to home in on damaged tissues by following gradients of cytokines and chemokines released by the diseased organ. A second approach involves transplantation of differentiated cells derived from stem cells. For example, pancreatic islet cells can be generated from stem cells before transplantation into diabetic patients, and cardiomyocytes can be generated to treat ischemic heart disease. A third approach involves stimulation of endogenous stem cells to facilitate repair. This goal might be accomplished by administration of appropriate growth factors and drugs that amplify the number of endogenous stem/progenitor cells and/or direct them to differentiate into the desired cell types. Therapeutic stimulation of precursor cells is already a clinical reality in the hematopoietic system, where factors such as erythropoietin, granulocyte colony-stimulating factor, and granulocyte-macrophage colony-stimulating factor are used to increase production of specific blood elements. In addition to these strategies for cell replacement, a number of other approaches could involve stem cells for ex vivo or in situ generation of tissues, a process termed tissue engineering (Chap. 92e). Stem cells are also excellent candidates as vehicles for cellular gene therapy (Chap. 91e).
Finally, transplanted stem cells may exert paracrine effects on damaged tissues without the differentiation and replacement of lost cells. Stem cell transplantation is not a new concept but rather is already part of established medical practice. Hematopoietic stem cells (Chap. 89e) are responsible for the long-term repopulation of all blood elements in recipients of bone marrow transplants, and hematopoietic stem cell transplantation is the gold standard against which other stem cell transplantation therapies will be measured. Transplantation of differentiated cells is also a clinical reality, and donated organs and tissues are often used to replace damaged tissues. However, the need for transplantable tissues and organs far outweighs the available supply, and organ transplantation has limited potential for some tissues, such as the brain. Stem cells offer the possibility of a renewable source of replacement cells for virtually all organs. A variety of different types of stem cells (Chap. 88) could be used in regenerative strategies, including embryonic stem (ES) cells, induced pluripotent stem (iPS) cells, umbilical-cord blood stem cells (USCs), organ-specific somatic stem cells (e.g., neural stem cells for treatment of the brain), and somatic stem cells that generate cell types specific for the target organ rather than the donor organ (e.g., bone marrow mesenchymal stem cells or CD34+ hematopoietic stem cells for cardiac repair). Although each cell type has potential advantages and disadvantages, there are a number of generic problems in developing any of these cell types into a useful and reliable clinical tool. Embryonic Stem Cells Embryonic stem cells have the potential to generate all the cell types in the body; thus, in theory, there are no restrictions on the organs that could be regenerated. ES cells can self-renew endlessly, so that a single cell line with carefully characterized traits potentially could generate almost limitless numbers of cells. 
In the absence of moral or ethical constraints (see “Ethical Issues,” below), unused human blastocysts from fertility clinics could be used to derive new ES cell lines that are matched immunologically with potential transplant recipients. Alternatively, somatic cell nuclear transfer (“therapeutic cloning”) could be used to create ES cell lines that are genetically identical to those of the patient, although this endeavor has been technically refractory for human cells. However, human ES cells are difficult to culture and grow slowly. Techniques for differentiating them into specific cell types are just beginning to be developed. Cells tend to develop abnormal karyotypes and other abnormalities with increased time in culture, and ES cells have the potential to form teratomas if all cells are not committed to the desired cell types before transplantation. Further, human ES cells are ethically controversial and, on these grounds, would be unacceptable to some patients and physicians despite their therapeutic potential. Nevertheless, there have been limited clinical trials of ES-derived cells in a number of disorders, including macular degeneration, myopia, and spinal cord injury. Induced Pluripotent Stem Cells The field of stem cell biology was transformed by the discovery that adult somatic cells can be converted (“reprogrammed”) into pluripotent cells through the overexpression of four transcription factors normally expressed in pluripotent cells (Chap. 88). These iPS cells share most properties with ES cells, although there are distinct differences in gene expression between ES and iPS cells. The initial use of viruses to insert the transcription factors into somatic cells made the resulting cells unsuitable for clinical use.
However, a number of strategies have since been developed to circumvent this problem, including the insertion of modified mRNAs, proteins, or microRNAs rather than cDNAs; the use of non-integrating viruses such as Sendai virus; the insertion of transposons with the programming factors, followed by their subsequent removal; and the use of floxed viral constructs, followed by treatment with Cre recombinase to excise those constructs. The safety of iPS cells in humans remains to be demonstrated, but clinical trials in macular degeneration and other disorders are planned. Potential advantages of iPS cells are that somatic cells from patients would generate pluripotent cells genetically identical to those of the patient and that these cells are not subject to the same ethical constraints as ES cells. It is not clear whether the differences in gene expression between ES and iPS cells will have any impact on their potential clinical utility, and studies of both cell types will be essential to resolve this issue. Umbilical-Cord Stem Cells Umbilical-cord blood stem/progenitor cells (USCs) are widely and readily available. These cells appear to be associated with less graft-versus-host disease than are some other cell types, such as marrow stem cells. They have less human leukocyte antigen restriction than adult marrow stem cells and are less likely to be contaminated with herpesvirus. However, it is unclear how many different cell types can be generated from USCs, and methods for differentiating these cells into nonhematopoietic phenotypes are largely lacking. Nevertheless, there are ongoing clinical trials of these cells in dozens of disorders, including cirrhosis, cardiopathies, multiple sclerosis, burns, stroke, autism, and critical limb ischemia. Organ-Specific Multipotent Stem Cells Organ-specific multipotent stem cells have the advantage of already being somewhat specialized so that the inducement of desired cell types may be easier. 
Cells potentially could be obtained from the patient and amplified in cell culture, circumventing the problems associated with immune rejection. Stem cells are relatively easy to harvest from some tissues, such as bone marrow and blood, but are difficult to harvest from other tissues, such as heart and brain. Moreover, these populations of cells are more limited in developmental potential than are pluripotent ES or iPS cells, and they may be difficult to obtain in large quantities from many organs. Therefore, substantial efforts have been devoted to developing techniques for using more easily obtainable stem cell populations, such as bone marrow mesenchymal stem cells (MSCs), CD34+ hematopoietic stem cells (HSCs), cardiac mesenchymal cells, and adipose-derived stem cells (ASCs), in regenerative strategies. Tissue culture evidence suggests that these stem cell populations may be able to generate differentiated cell types unrelated to their organ source (including myocytes, chondrocytes, tendon cells, osteoblasts, cardiomyocytes, adipocytes, hepatocytes, and neurons) in a process known as transdifferentiation. However, it is still unclear whether these stem cells are capable of generating differentiated cell types that integrate into organs, survive, and function after transplantation in vivo. A number of early studies of MSCs transplanted into heart, liver, and other organs suggested that the cells had differentiated into organ-specific cell types with beneficial effects in animal models of disease. Unfortunately, subsequent studies revealed that the stem cells had simply fused with cells resident in the organs and that the observed beneficial effects were due to paracrine release of trophic and anti-inflammatory cytokines. Further studies will be necessary to determine whether transdifferentiation of MSCs, ASCs, or other stem cell populations occurs at a high enough frequency to make these cells useful for stem cell replacement therapy.
Despite the remaining issues, clinical trials of MSCs, autologous HSCs, USCs, and ASCs are being performed in many disorders, including ischemic cardiac disease, cardiomyopathy, diabetes, stroke, cirrhosis, and muscular dystrophy. Regardless of the source of the stem cells used in regenerative strategies, a number of generic problems must be overcome for the development of successful clinical applications. These problems include devising methods to reliably generate large numbers of specific cell types, to minimize the risk of tumor formation or proliferation of inappropriate cell types, to ensure the viability and function of the engrafted cells, to overcome immune rejection when autografts are not used, and to facilitate revascularization of regenerated tissue. Each organ system will also pose tissue-specific problems for stem cell therapies.

DISEASE-SPECIFIC APPLICATIONS OF STEM CELLS
Ischemic Heart Disease and Cardiomyocyte Regeneration Because of the high prevalence of ischemic heart disease, extensive efforts have been devoted to the development of strategies for stem cell replacement of cardiomyocytes. Historically, the adult heart has been viewed as a terminally differentiated organ without the capacity for regeneration. However, recent studies have demonstrated that the heart has the capacity for low levels of cardiomyocyte regeneration (Chap. 265e). This regeneration appears to be accomplished by cardiac stem cells resident in the heart and possibly also by cells originating in the bone marrow. The heart might be an ideal source of stem cells for therapeutic use, but techniques for isolating, characterizing, and amplifying large numbers of these cells have not yet been perfected. For successful myocardial repair, stem cell therapy must deliver cells either systemically or locally, and the cells must survive, engraft, and differentiate into functional cardiomyocytes that couple mechanically and electrically with the recipient myocardium.
The optimal method for cell delivery is not clear, and various experimental and clinical studies have successfully employed intramyocardial, transendocardial, intravenous, intracoronary, and retrograde coronary venous injections. In experimental myocardial infarction, functional improvements have been achieved after transplantation of a variety of different cell types, including ES cells, HSCs, MSCs, USCs, and ASCs. Early studies suggested that each of these cell types might have the potential to engraft and generate cardiomyocytes. However, most investigators have found that the generation of new cardiomyocytes by these cells is at best a rare event and that graft survival over long periods is poor. The preponderance of evidence suggests that the observed beneficial effects of most experimental therapies were not derived from direct stem cell generation of cardiomyocytes but rather from indirect effects of the stem cells on resident cells. It is not clear whether these effects reflect the release of soluble trophic factors, the induction of angiogenesis, the release of anti-inflammatory cytokines, or another mechanism. A wide variety of cell delivery methods, cell types, and cell doses have been used in a progressively enlarging series of clinical trials, but the fate of the cells and the mechanisms by which they alter cardiac function are still open questions. In aggregate, however, these studies have shown a small but measurable improvement in cardiac function and, in some cases, reduction in infarct size. In short, the available evidence suggests that the beneficial clinical impact reflects an indirect effect of the transplanted cells rather than genuine cell replacement. Diabetes Successes with islet cell and pancreas transplantation have provided proof of concept for cell-based therapies for type 1 diabetes. However, the demand for donor pancreases far exceeds the number available, and maintenance of long-term graft survival is a problem. 
The search for a renewable source of stem cells capable of regenerating pancreatic islets has therefore been intensive. Pancreatic beta cell turnover occurs even in the normal pancreas, although the source of the new beta cells remains controversial. This persistent turnover suggests that, in principle, it should be possible to develop strategies for reconstituting the beta cell population in diabetics. Attempts to devise techniques for promoting endogenous regenerative processes by using combinations of growth factors, drugs, and gene therapy have failed thus far, but this remains a potentially viable approach. A number of different cell types are candidates for use in stem cell replacement strategies, including iPS cells, ES cells, hepatic progenitor cells, pancreatic ductal progenitor cells, and MSCs. Successful therapy will depend on the development of a source of cells that can be amplified to produce large numbers of progeny with the ability to synthesize, store, and release insulin when it is required, primarily in response to changes in the ambient level of glucose. The proliferative capacity of the replacement cells must be tightly regulated to avoid excessive expansion of beta cell numbers and the consequent development of hyperinsulinemia/hypoglycemia; moreover, the cells must withstand immune rejection. Although it has been reported that ES and iPS cells can be differentiated into cells that produce insulin, these cells have a low content of insulin and a high rate of apoptosis and generally lack the capacity to normalize blood glucose levels in diabetic animals. Thus, ES and iPS cells have not yet been useful for the large-scale production of differentiated islet cells. During embryogenesis, the pancreas, liver, and gastrointestinal tract are all derived from the anterior endoderm, and transdifferentiation of pancreas to liver and vice versa has been observed in a number of pathologic conditions. 
There is also substantial evidence that multipotential stem cells reside within gastric glands and intestinal crypts. These observations suggest that hepatic, pancreatic, and/or gastrointestinal precursor cells may be reasonable candidates for cell-based therapy for diabetes, although it is unclear whether insulin-producing cells derived from pancreatic stem cells or liver progenitors can be expanded in vitro to clinically useful numbers. MSCs and neural stem cells both reportedly have the capacity to generate insulin-producing cells, but there is no convincing evidence that either cell type will be clinically useful. Clinical trials of MSCs, USCs, HSCs, and ASCs in both type 1 and type 2 diabetes are ongoing. Nervous System Substantial progress has been made in the development of methodologies for generating neural cells from different stem cell populations. Human ES or iPS cells can be induced to generate cells with the properties of neural stem cells, and these cells in turn give rise to neurons, oligodendroglia, and astrocytes. Reasonably large numbers of these cells can be transplanted into the rodent brain with formation of appropriate cell types and no tumor formation. Multipotent stem cells present in the adult brain also can be easily amplified in number and used to generate all the major neural cell types, but the need for invasive procedures to obtain autologous cells is a major limitation. Fetal neural stem cells derived from miscarriages or abortions are an alternative but raise ethical concerns. Nevertheless, clinical trials of fetal neural stem cells have commenced in amyotrophic lateral sclerosis (ALS), stroke, and several other disorders. Transdifferentiation of MSCs and ASCs into neural stem cells, and vice versa, has been reported by numerous investigators, and clinical trials of such cells have begun for a number of neurologic diseases. Clinical trials of a conditionally immortalized human cell line and of USCs in stroke are also in progress. 
Because of the incapacitating nature of neural disorders and the limited endogenous repair capacity of the nervous system, clinical trials of stem cells in neurologic disorders have been particularly numerous, including trials in spinal cord injury, multiple sclerosis, epilepsy, Alzheimer’s disease, ALS, acute and chronic stroke, numerous genetic disorders, traumatic brain injury, Parkinson’s disease, and others. In diseases such as ALS, possible benefits are more likely to be due to indirect trophic effects than to neuron replacement. In Parkinson’s disease, the major motor features of the disorder result from the loss of a single cell population: dopaminergic neurons within the substantia nigra; this circumstance suggests that cell replacement should be relatively straightforward. However, two clinical trials of fetal nigral transplantation failed to meet their primary endpoint and were complicated by the development of dyskinesia. Transplantation of stem cell–derived dopamine-producing cells offers a number of potential advantages over the fetal transplants, including the ability of stem cells to migrate and disperse within tissue, the potential for engineering regulatable release of dopamine, and the ability to engineer cells to produce factors that will enhance cell survival. Nevertheless, the experience with fetal transplants points out the difficulties that may be encountered. At least some of the neurologic dysfunction after spinal cord injury reflects demyelination, and both ES cells and MSCs can facilitate remyelination after experimental spinal cord injury (SCI). Clinical trials of MSCs in this disorder have commenced in a number of countries, and SCI was the first disorder targeted for the clinical use of ES cells. However, the ES cell trial in SCI was terminated early for nonmedical reasons.
At present, no population of transplanted stem cells has been shown to have the capacity to generate neurons that extend axons over long distances to form synaptic connections (as would be necessary for replacement of upper motor neurons in ALS, stroke, or other disorders). For many injuries, including SCI, the balance between scar formation and tissue repair/regeneration may prove to be an important consideration. For example, it may ultimately prove necessary to limit scar formation so that axons can reestablish connections. Liver Liver transplantation is currently the only successful treatment for end-stage liver diseases, but the shortage of liver grafts limits its application. Clinical trials of hepatocyte transplantation demonstrate its potential as a substitute for organ transplantation, but this approach is limited by the paucity of available cells. Potential sources of stem cells for regenerative strategies include endogenous liver stem cells (such as oval cells), ES cells, MSCs, and USCs. Although a series of studies in humans as well as animals suggested that transplanted MSCs and HSCs can generate hepatocytes, fusion of the transplanted cells with endogenous liver cells, giving the erroneous appearance of new hepatocytes, appears to be the underlying event in most circumstances. The available evidence suggests that transplanted HSCs and MSCs can generate hepatocyte-like cells in the liver only at a very low frequency, but there are beneficial consequences presumably related to indirect paracrine effects. ES cells can be differentiated into hepatocytes and transplanted in animal models of liver failure without the formation of teratomas. Clinical trials are in progress in cirrhosis with numerous cell types, including MSCs, USCs, HSCs, and ASCs. 
Other Organ Systems and the Future The use of stem cells in regenerative strategies has been studied for many other organ systems and cell types, including skin, eye, cartilage, bone, kidney, lung, endometrium, vascular endothelium, smooth muscle, and striated muscle, and clinical trials in these and other organs are ongoing. In fact, the potential for stem cell regeneration of damaged organs and tissues is virtually limitless. However, numerous obstacles must be overcome before stem cell therapies can become a widespread clinical reality. Only HSCs have been adequately characterized by surface markers so that they can be unambiguously identified, a prerequisite for reliable clinical applications. The pathways for differentiating stem cells into specific cellular phenotypes are largely unknown, and the ability to control the migration of transplanted cells or predict the response of the cells to the environment of diseased organs is presently limited. Some strategies may employ the coadministration of scaffolding, artificial extracellular matrix, and/or growth factors to orchestrate differentiation of stem cells and their organization into appropriate constituents of the organ. There is currently no way to image stem cells in vivo after transplantation into humans, and it will be necessary to develop techniques to do so. Fortunately, stem cells can be engineered before transplantation to contain a contrast agent that may make their in vivo imaging feasible. The potential for tumor formation and the problems associated with immune rejection are impediments, and it will also be necessary to develop techniques for ensuring vascularization of regenerated tissues. There already are many strategies for supporting cell replacement, including coadministration of vascular endothelial growth factor to foster vascularization of the transplant.
Some strategies also include the genetic engineering of stem cells with an inducible suicide gene so that the cells can be easily eradicated in the event of tumor formation or another complication. The potential for stem cell therapies to revolutionize medical care is extraordinary, and disorders such as myocardial infarction, diabetes, and Parkinson’s disease, among many others, are potentially curable by such therapies. However, stem cell–based therapies are still at a very early stage of development, and perfection of techniques for clinical transplantation of predictable, well-characterized cells is going to be a difficult and lengthy undertaking. Stem cell therapies raise ethical and socially contentious issues that must be addressed in parallel with the scientific and medical opportunities. Society has great diversity with respect to religious beliefs, concepts of individual rights, tolerance for uncertainty and risk, and boundaries for how scientific interventions should be used to alter the outcome of disease. In the United States, the federal government has authorized research using existing human ES cell lines but still restricts the use of federal funds for developing new human ES cell lines. Ongoing studies of existing lines have indicated that they develop abnormalities with time in culture and that they may be contaminated with mouse proteins. These findings highlight the need to develop new human ES cell lines. The development of iPS cell technology may lessen the need for deriving new ES cell lines, but it is still not clear whether the differences in gene expression by ES and iPS cells are important for potential clinical use.
In considering ethical issues associated with the use of stem cells, it is helpful to draw from experience with other scientific advances, such as organ transplantation, recombinant DNA technology, implantation of mechanical devices, neuroscience and cognitive research, in vitro fertilization, and prenatal genetic testing. These and other precedents have pointed to the importance of understanding and testing fundamental biology in the laboratory setting and in animal models before applying new techniques in carefully controlled clinical trials. When these trials occur, they must include full informed consent and careful oversight by external review groups. Ultimately, there will be medical interventions that are scientifically feasible but ethically or socially unacceptable to some members of a society. Stem cell research raises fundamentally difficult questions about the definition of human life, and it has raised deep fears about the ability to balance issues of justice and safety with the needs of critically ill patients. Health care providers and experts with backgrounds in ethics, law, and sociology must help guard against the premature or inappropriate application of stem cell therapies and the inappropriate involvement of vulnerable population groups. However, these therapies offer important new strategies for the treatment of otherwise irreversible disorders. An open dialogue among the scientific community, physicians, patients and their advocates, lawmakers, and the lay population is critically important to raise and address important ethical issues and balance the benefits and risks associated with stem cell transfer.

Gene Therapy in Clinical Medicine
Katherine A. High

Gene transfer is a novel area of therapeutics in which the active agent is a nucleic acid sequence rather than a protein or small molecule. Because delivery of naked DNA or RNA to a cell is an inefficient process, most gene transfer is carried out using a vector, or gene delivery vehicle. These vehicles have generally been engineered from viruses by deleting some or all of the viral genome and replacing it with the therapeutic gene of interest under the control of a suitable promoter (Table 91e-1). Gene transfer strategies can thus be described in terms of three essential elements: (1) a vector; (2) a gene to be delivered, sometimes called the transgene; and (3) a physiologically relevant target cell to which the DNA or RNA is delivered. The series of steps in which the vector and donated DNA enter the target cell and express the transgene is referred to as transduction. Gene delivery can take place in vivo, in which the vector is directly injected into the patient, or, in the case of hematopoietic and some other target cells, ex vivo, with removal of the target cells from the patient, followed by return of the gene-modified autologous cells to the patient after manipulation in the laboratory. The latter approach effectively combines gene transfer techniques with cellular therapies (Chap. 90e). Gene transfer is one of the most powerful concepts in modern molecular medicine and has the potential to address a host of diseases for which there are currently no available treatments. Clinical trials of gene therapy have been under way since 1990; a recent landmark in the field was the licensing, in 2012, of the first gene therapy product approved in Europe or the United States (see below).

Given that vector-mediated gene therapy is arguably one of the most complex therapeutics yet developed, consisting of both a nucleic acid and a protein component, this time course from first clinical trial to licensed product is noteworthy for being similar to that seen with other novel classes of therapeutics, including monoclonal antibodies and bone marrow transplantation. Over 5000 subjects have been enrolled in gene transfer studies, and serious adverse events have been rare. Some of the initial trials were characterized by an overabundance of optimism and a failure to be appropriately critical of preclinical studies in animals; in addition, it was in some contexts not fully appreciated that animal studies are only a partial guide to safety profiles of products in humans (e.g., insertional mutagenesis). Initial exuberance was driven by many factors, including an intense desire to develop therapies for hitherto untreatable diseases, lack of understanding of risks, and, in some cases, undisclosed financial conflicts of interest. After a teenager died of complications related to vector infusion, the field underwent a retrenchment; continued efforts led to a more nuanced understanding of the risks and benefits of these new therapies and more sophisticated selection of disease targets. Currently, gene therapies are being developed for a wide variety of disease entities (Fig. 91e-1). Gene transfer strategies for genetic disease generally involve gene addition therapy, an approach characterized by transfer of the missing gene to a physiologically relevant target cell. However, other strategies are possible, including supplying a gene that achieves a similar biologic effect through an alternative pathway (e.g., factor VIIa for hemophilia A); supplying an antisense oligonucleotide to splice out a mutant exon if the sequence is not critical to the function of the protein (as has been done with the dystrophin gene in Duchenne’s muscular dystrophy); or downregulating a harmful effect through a small interfering RNA (siRNA).
Two distinct strategies are used to achieve long-term gene expression: one is to transduce stem cells with an integrating vector, so that all progeny cells will carry the donated gene; the other is to transduce long-lived cells, such as skeletal muscle or neurons. In the case of long-lived cells, integration into the target cell genome is unnecessary. Instead, because the cells are nondividing, the donated DNA, if stabilized in an episomal form, will give rise to expression for the life of the cell. This approach thus avoids problems related to integration and insertional mutagenesis.

IMMUNODEFICIENCY DISORDERS: PROOF OF PRINCIPLE
Early attempts to effect gene replacement into hematopoietic stem cells (HSCs) were stymied by the relatively low transduction efficiency of retroviral vectors, which require dividing target cells for integration. Because HSCs are normally quiescent, they are a formidable transduction target. However, identification of cytokines that induced cell division without promoting differentiation of stem cells, along with technical improvements in the isolation and transduction of HSCs, led to modest but real gains in transduction efficiency. The first convincing therapeutic effect from gene transfer occurred with X-linked severe combined immunodeficiency disease (SCID), which results from mutations in the gene (IL2RG) encoding the γc subunit of cytokine receptors required for normal development of T and natural killer (NK) cells (Chap. 374). Affected infants present in the first few months of life with overwhelming infections and/or failure to thrive. In this disorder, it was recognized that the transduced cells, even if few in number, would have a proliferative advantage compared to the nontransduced cells, which lack receptors for the cytokines required for lymphocyte development and maturation.

Complete reconstitution of the immune system, including documented responses to standard childhood vaccinations, clearing of infections, and remarkable gains in growth, occurred in most of the treated children. However, among 20 children treated in two separate trials, five eventually developed a syndrome similar to T cell acute lymphocytic leukemia, with splenomegaly, rising white counts, and the emergence of a single clone of T cells. Molecular studies revealed that, in most of these children, the retroviral vector had integrated within a gene, LMO-2 (LIM only-2), which encodes a component of a transcription factor complex involved in hematopoietic development. The retroviral long terminal repeat increases the expression of LMO-2, resulting in T cell leukemia. The X-linked SCID studies were a watershed event in the evolution of gene therapy. They demonstrated conclusively that gene therapy could cure disease; of the 20 children eventually treated in these trials, 18 achieved correction of the immunodeficiency disorder. Unfortunately, 5 of the 20 patients later developed a leukemia-like disorder, and one died of this complication; the rest are alive and free of complications at time periods ranging up to 14 years after initial treatment. These studies demonstrated that insertional mutagenesis leading to cancer was more than a theoretical possibility (Table 91e-2). As a result of the experience in these trials, all protocols using integrating vectors in hematopoietic cells must include a plan for monitoring sites of insertion and clonal proliferation. Strategies to overcome this complication have included using a "suicide" gene cassette in the vector, so that errant clones can be quickly ablated, or using "insulator" elements in the cassette, which can limit the activation of genes surrounding the insertion site. Lentiviral vectors, which can efficiently transduce nondividing target cells, are also likely to be safer than retroviral vectors, based on patterns of integration; the field is thus gradually moving toward these to replace retroviral vectors.

TABLE 91e-2 Risks of Gene Transfer
Gene silencing – repression of promoter
Phenotoxicity – complications arising from overexpression or ectopic expression of the transgene
Immunotoxicity – harmful immune response to either the vector or transgene
Risks of horizontal transmission – shedding of infectious vector into environment
Risks of vertical transmission – germline transmission of donated DNA

More clear-cut success has been achieved in a gene therapy trial for another form of SCID, adenosine deaminase (ADA) deficiency (Chap. 374). ADA-SCID is clinically similar to X-linked SCID, although it can be treated by enzyme replacement therapy with a pegylated form of the enzyme (PEG-ADA), which leads to immune reconstitution but not always to normal T cell counts. Enzyme replacement therapy is expensive (annual costs: $200,000–$300,000 in U.S. dollars). The initial trials of gene therapy for ADA-SCID were unsuccessful, but modifications of the protocol led to success without the complications seen in the X-linked SCID trials: the use of HSCs rather than T cells as the target for transduction; discontinuation of PEG-ADA at the time of vector infusion, so that the transduced cells have a proliferative advantage over the nontransduced; and the use of a mild conditioning regimen to facilitate engraftment of the transduced cells. There have been no complications in the 10 children treated on the Milan protocol, with a median follow-up of >8 years. ADA-SCID, then, is an example where gene therapy has changed therapeutic options for patients. For those with a human leukocyte antigen (HLA)-identical sibling, bone marrow transplantation is still the best treatment option, but this includes only a minority of those affected. For those without an HLA-identical match, gene therapy has efficacy comparable to PEG-ADA, does not require repetitive injections, and does not run the risk of neutralizing antibodies to the bovine enzyme.

FIGURE 91e-1 Indications in gene therapy clinical trials. The bar graph classifies clinical gene transfer studies by disease. A majority of trials have addressed cancer, with monogenic disorders, infectious diseases, and cardiovascular diseases the next largest categories. (Adapted from SL Ginn et al: J Gene Med 15:65–77, 2013.)

NEURODEGENERATIVE DISEASES: EXTENSION OF PRINCIPLE
The SCID trials gave support to the hypothesis that gene transfer into HSCs could be used to treat any disease for which allogeneic bone marrow transplantation was therapeutic. Moreover, the use of genetically modified autologous cells carried several advantages, including no risk of graft-versus-host disease, guaranteed availability of a "donor" (unless the disease itself damages the stem cell population of the patient), and low likelihood of failure of engraftment. Cartier and Aubourg capitalized on this realization to conduct the first trial of lentiviral vector transduction of HSCs for a neurodegenerative disorder, X-linked adrenoleukodystrophy (ALD).
X-linked ALD is a fatal demyelinating disease of the central nervous system caused by mutations in the gene encoding an adenosine triphosphate–binding cassette transporter. Deficiency of this protein leads to accumulation of very-long-chain fatty acids in oligodendrocytes and microglia, disrupting myelin maintenance by these cells. Affected boys present with clinical and neuroradiographic evidence of disease at age 6–8 years and usually die before adolescence. Following lentiviral transduction of autologous HSCs in young boys with the disease, dramatic stabilization of disease occurred, demonstrating that stem cell transduction could work for neurodegenerative as well as immunologic disorders. Investigators in Milan carried this observation one step further to develop a treatment for another neurodegenerative disorder that has previously responded poorly to bone marrow transplantation. Metachromatic leukodystrophy is a lysosomal storage disorder caused by mutations in the gene encoding arylsulfatase A (ARSA). The late infantile form of the disease is characterized by progressive motor and cognitive impairment, and death within a few years of onset, due to accumulation of the ARSA substrate sulfatide in oligodendrocytes, microglia, and some neurons. Recognizing that endogenous levels of production of ARSA were too low to provide cross-correction by allogeneic transplant, Naldini and colleagues engineered a lentiviral vector that directed supraphysiologic levels of ARSA expression in transduced cells. Transduction of autologous HSCs from children born with the disease, at a point when they were still presymptomatic, led to preservation and continued acquisition of motor and cognitive milestones at time periods as long as 32 months after affected siblings had begun to lose milestones. These results illustrate that the ability to engineer levels of expression can allow gene therapy approaches to succeed where allogeneic bone marrow transplantation cannot.
It is likely that a similar approach will be used in other neurodegenerative conditions. Transduction of HSCs to treat the hemoglobinopathies is an obvious extension of studies already conducted but represents a higher hurdle in terms of the extent of transduction required to achieve a therapeutic effect. Trials are now under way for thalassemia and for a number of other hematologic disorders, including Wiskott-Aldrich syndrome and chronic granulomatous disease.

LONG-TERM EXPRESSION IN GENETIC DISEASE: IN VIVO GENE TRANSFER WITH RECOMBINANT ADENO-ASSOCIATED VIRAL VECTORS
Recombinant adeno-associated viral (AAV) vectors have emerged as attractive gene delivery vehicles for genetic disease. Engineered from a small, replication-defective DNA virus, they are devoid of viral coding sequences and trigger very little immune response in experimental animals. They are capable of transducing nondividing target cells, and the donated DNA is stabilized primarily in an episomal form, thus minimizing risks arising from insertional mutagenesis. Because the vector has a tropism for certain long-lived cell types, such as skeletal muscle, the central nervous system (CNS), and hepatocytes, long-term expression can be achieved even in the absence of integration. These features of AAV were used to develop the first licensed gene therapy product in Europe, an AAV vector for treatment of the autosomal recessive disorder lipoprotein lipase (LPL) deficiency. This rare disorder (1–2 cases per million) is due to loss-of-function mutations in the gene encoding LPL, an enzyme normally produced in skeletal muscle and required for the catabolism of triglyceride-rich lipoproteins and chylomicrons. Affected individuals have lipemic serum and may have eruptive xanthomas, hepatosplenomegaly, and, in some cases, recurrent bouts of acute pancreatitis.
Clinical trials demonstrated the safety of intramuscular injection of AAV-LPL and its efficacy in reducing frequency of pancreatitis episodes in affected individuals, leading to drug approval in Europe. Additional clinical trials currently under way that use AAV vectors in the setting of genetic disease include those for muscular dystrophies, α1 antitrypsin deficiency, Parkinson’s disease, Batten’s disease, hemophilia B, and several forms of congenital blindness. Hemophilia (Chap. 78) has long been considered a promising disease model for gene transfer, because the gene product does not require precise regulation of expression and biologically active clotting factors can be synthesized in a variety of tissue types, permitting latitude in the choice of target tissue. Moreover, raising circulating factor levels from <1% (levels seen in those severely affected) into the range of 5% greatly improves the phenotype of the disease. Preclinical studies with recombinant AAV vectors infused into skeletal muscle or liver have resulted in long-term (>5 years) expression of factor VIII or factor IX in the hemophilic dog model. Administration to skeletal muscle of an AAV vector expressing factor IX in patients with hemophilia B was safe and resulted in long-term expression as measured on muscle biopsy, but circulating levels never rose to >1% for sustained periods, and a large number of IM injections (>80–100) was required to access a large muscle mass. Intravascular vector delivery has been used to access large areas of skeletal muscle in animal models of hemophilia and will likely be tested for this and other disorders in upcoming trials. The first trial of an AAV vector expressing factor IX delivered to the liver in humans with hemophilia B resulted in therapeutic circulating levels at the highest dose tested, but expression at these levels (>5%) lasted for only 6–10 weeks before declining to baseline (<1%). 
A memory T cell response to viral capsid, present in humans but not in other animal species (which are not natural hosts for the virus), likely led to the loss of expression (Table 91e-2). In response to these findings, a second trial included a short course of prednisolone, to be administered if factor IX levels began to decline. This approach resulted in long-term expression of factor IX, in the range of 2–5%, in men with severe hemophilia B. Current efforts are focused on expanding these trials and extending the approach to hemophilia A. A logical conclusion from the early experience with AAV in liver in the hemophilia trial was that avoidance of immune responses was key to long-term expression. Thus, immunoprivileged sites such as the retina began to attract substantial interest as therapeutic targets. This inference has been elegantly confirmed in the setting of the retinal degenerative disease Leber's congenital amaurosis (LCA). Characterized by early-onset blindness, LCA is not currently treatable and is caused by mutations in several different genes; ~15% of cases of LCA are due to a mutation in a gene, RPE65, encoding a retinal pigment epithelium–associated 65-kDa protein. In dogs with a null mutation in RPE65, sight was restored after subretinal injection of an AAV vector expressing RPE65. Transgene expression appears to be stable, with the first animals treated >10 years ago continuing to manifest electroretinographic and behavioral evidence of visual function. As is the case for X-linked SCID, gene transfer must occur relatively early in life to achieve optimal correction of the genetic disease, although the exact limitations imposed by age have not yet been defined. AAV-RPE65 trials carried out in both the United States and the United Kingdom have shown restoration of visual and retinal function in over 30 subjects, with the most marked improvement occurring in the younger subjects.
Trials for other inherited retinal degenerative disorders such as choroideremia are under way, as are studies for certain complex acquired disorders such as age-related macular degeneration, which affects several million people worldwide. The neovascularization that occurs in age-related macular degeneration can be inhibited by expression of vascular endothelial growth factor (VEGF) inhibitors such as angiostatin or through the use of RNA interference (RNAi)-mediated knockdown of VEGF. Early-phase trials of siRNAs that target VEGF RNA are under way, but these require repeated intravitreal injection of the siRNAs; an AAV vector–mediated approach, which would allow long-term inhibition of the biological effects of VEGF through a soluble VEGF receptor, is now in clinical testing. The majority of clinical gene transfer experience has been in subjects with cancer (Fig. 91e-1). As a general rule, a feature that distinguishes gene therapies from conventional cancer therapeutics is that the former are less toxic, in some cases because they are delivered locally (e.g., intratumoral injections), and in other cases because they are targeted specifically to elements of the tumor (immunotherapies, antiangiogenic approaches). Because cancer is a disease of aging, and many elderly are frail, the development of therapeutics with milder side effects is an important goal. Cancer gene therapies can be divided into local and systemic approaches (Table 91e-3). Some of the earliest cancer gene therapy trials focused on local delivery of a prodrug or a suicide gene that would increase sensitivity of tumor cells to cytotoxic drugs. A frequently used strategy has been intratumoral injection of an adenoviral vector expressing the thymidine kinase (TK) gene. Cells that take up and express the TK gene can be killed after the administration of ganciclovir, which is phosphorylated to a toxic nucleoside by TK. 
Because cell division is required for the toxic nucleoside to affect cell viability, this strategy was initially used in aggressive brain tumors (glioblastoma multiforme) where the cycling tumor cells were affected but the nondividing normal neurons were not. More recently, this approach has been explored for locally recurrent prostate, breast, and colon tumors, among others. Another local approach uses adenoviral-mediated expression of the tumor suppressor p53, which is mutated in a wide variety of cancers. This strategy has resulted in complete and partial responses in squamous cell carcinoma of the head and neck, esophageal cancer, and non-small-cell lung cancer after direct intratumoral injection of the vector. Response rates (~15%) are comparable to those of other single agents. The use of oncolytic viruses that selectively replicate in tumor cells but not in normal cells has also shown promise in squamous cell carcinoma of the head and neck and in other solid tumors. This approach is based on the observation that deletion of certain viral genes abolishes their ability to replicate in normal cells but not in tumor cells. An advantage of this strategy is that the replicating vector can proliferate and spread within the tumor, facilitating eventual tumor clearance. However, physical limitations to viral spread, including fibrosis, intermixed normal cells, basement membranes, and necrotic areas within the tumor, may limit clinical efficacy. Oncolytic viruses are licensed and available in some countries but not in the United States. Because metastatic disease rather than uncontrolled growth of the primary tumor is the source of mortality for most cancers, there has been considerable interest in developing systemic gene therapy approaches. One strategy has been to promote more efficient recognition of tumor cells by the immune system. 
Approaches have included transduction of tumor cells with immune-enhancing genes encoding cytokines, chemokines, or co-stimulatory molecules; and ex vivo manipulation of dendritic cells to enhance the presentation of tumor antigens. Recently, considerable success has been achieved using lentiviral transduction of autologous lymphocytes with a cDNA encoding a chimeric antigen receptor (CAR). The CAR moiety consists of a tumor antigen-binding domain (e.g., an antibody to the B cell antigen CD19) fused to an intracellular signaling domain that allows T cell activation. The transduced lymphocytes can then recognize and destroy cells bearing the antigen. This CAR–T cell approach has proven extraordinarily successful in the setting of refractory chronic lymphocytic leukemia and pre-B-cell acute lymphoblastic leukemia. Infusion of gene-modified T cells engineered to recognize the B cell antigen CD19 has resulted in >1000-fold expansion in vivo, trafficking of the T cells to the bone marrow, and complete remission in a subset of patients who had failed multiple chemotherapy regimens. The cells persist as memory CAR+ T cells, providing ongoing antitumor functionality. Some patients experience a delayed tumor lysis syndrome requiring intensive medical management. This approach also causes an on-target toxicity, leading to B cell aplasia that necessitates lifelong IgG infusions. Current results indicate that long-lasting remissions can be achieved and the strategy can theoretically be extended to other tumor types if a tumor antigen can be identified. Gene transfer strategies have also been developed for inhibiting tumor angiogenesis. These have included constitutive expression of angiogenesis inhibitors such as angiostatin and endostatin; use of siRNA to reduce levels of VEGF or VEGF receptor; and combined approaches in which autologous T cells are genetically modified to recognize antigens specific to tumor vasculature. These studies are still in early-phase testing. 
Another novel systemic approach is the use of gene transfer to protect normal cells from the toxicities of chemotherapy. The most extensively studied of these approaches has been transduction of hematopoietic cells with genes encoding resistance to chemotherapeutic agents, including the multidrug resistance gene MDR1 or the gene encoding O6-methylguanine-DNA methyltransferase (MGMT). Ex vivo transduction of hematopoietic cells, followed by autologous transplantation, is being investigated as a strategy for allowing administration of higher doses of chemotherapy than would otherwise be tolerated. The third major category addressed by gene transfer studies is cardiovascular disease. Initial experience was in trials designed to increase blood flow to either skeletal muscle (critical limb ischemia) or cardiac muscle (angina/myocardial ischemia). First-line treatment for both of these groups includes mechanical revascularization or medical management, but a subset of patients are not candidates for or fail these approaches. These patients formed the first cohorts for evaluation of gene transfer to achieve therapeutic angiogenesis. The major transgene used has been VEGF, attractive because of its specificity for endothelial cells; other transgenes have included fibroblast growth factor (FGF) and hypoxia-inducible factor 1, α subunit (HIF-1α). The design of most of the trials has included direct IM (or myocardial) injection of either a plasmid or an adenoviral vector expressing the transgene. Both of these vectors are likely to result in only short-term expression of VEGF, which may be adequate because there is no need for continued transgene expression once the new vessels have formed. Direct injection favors local expression, which should help to avoid systemic effects such as retinal neovascularization or new vessel formation in a nascent tumor.
Initial trials of adeno-VEGF or plasmid-VEGF injection resulted in improvement over baseline in angiographically detectable vasculature, but no change in amputation frequency or cardiovascular mortality. Studies using different routes of administration or different transgenes are currently under way. More recent studies have used AAV vectors to develop a therapeutic approach for individuals with refractory congestive heart failure. In preclinical studies, a vector encoding sarcoplasmic reticulum Ca2+ ATPase (SERCA2a) demonstrated positive left ventricular inotropic effects in a swine model of volume-overloaded heart failure. Results of a phase II study in which vector was infused via the coronary arteries in patients with congestive heart failure demonstrated safety and some indications of efficacy; larger studies are now planned. This chapter has focused on gene addition therapy, in which a normal gene is transferred to a target tissue to drive expression of a gene product with therapeutic effects. Another powerful technique under development is genome editing, in which a mutation is corrected in situ, generating a wild-type copy under the control of the endogenous regulatory signals. This approach makes use of novel reagents including zinc finger nucleases, TALENs, and CRISPR, which introduce double-strand breaks into the DNA near the site of the mutation and then rely on a donated repair sequence and cellular mechanisms for repair of double-strand breaks to reconstitute a functioning gene. Another strategy recently introduced into clinical trials is the use of siRNAs or short hairpin RNAs as transgenes to knock down expression of deleterious genes (e.g., mutant huntingtin in Huntington's disease or genes of the hepatitis C genome in infected individuals). The power and versatility of gene transfer approaches are such that there are few serious disease entities for which gene transfer therapies are not under development.
The development of new classes of therapeutics typically takes two to three decades; monoclonal antibodies and recombinant proteins are recent examples. Gene therapeutics, which entered clinical testing in the early 1990s, have traversed the same time course. Examples of clinical success are now abundant, and gene therapy approaches are likely to become increasingly important as a therapeutic modality in the twenty-first century. A central question to be addressed is the long-term safety of gene transfer, and regulatory agencies have mandated a 15-year follow-up for subjects enrolled in gene therapy trials (Table 91e-4). Realization of the therapeutic benefits of modern molecular medicine will depend on continued progress in gene transfer technology.

TABLE 91e-4 Elements of History for Subjects Enrolled in Gene Transfer Trials
1. What vector was administered? Is it predominantly integrating (retroviral, lentiviral, herpesvirus [latency and reactivation]) or nonintegrating (plasmid, adenoviral, adeno-associated viral)?
2. What was the route of administration of the vector?
3. What was the target tissue?
4. What gene was transferred? A disease-related gene? A marker?
5. Were there any adverse events noted after gene transfer?
Long-term follow-up:
1. Has a new malignancy been diagnosed?
2. Has a new neurologic/ophthalmologic disorder, or exacerbation of a preexisting disorder, been diagnosed?
3. Has a new autoimmune or rheumatologic disorder been diagnosed?
4. Has a new hematologic disorder been diagnosed?
Factors influencing long-term risk include integration of the vector into the genome, vector persistence without integration, and transgene-specific effects.

Tissue Engineering
Anthony Atala

Tissue engineering is a field that applies principles of regenerative medicine to restore the function of various organs by combining cells with biomaterials.
It is multidisciplinary, often combining the skills of physicians, cell biologists, bioengineers, and materials scientists, to recapitulate the native three-dimensional architecture of an organ, the appropriate cell types, and the supportive nutrients and growth factors that allow normal cell growth, differentiation, and function. Tissue engineering is a relatively new field, originating in the late 1970s. Early studies focused on efforts to create skin substitutes using biomaterials and epithelial skin cells with a goal of providing barrier protection for patients with burns. The early strategies employed a tissue biopsy, followed by ex vivo expansion of cells seeded on scaffolds. The cell–scaffold composite was later implanted back into the same patient, where the new tissue would mature. However, there were many hurdles to overcome. The three major challenges in the field of tissue engineering involved: (1) the ability to grow and expand normal primary human cells in large quantities; (2) the identification of appropriate biomaterials; and (3) the requirement for adequate vascularization and innervation of the engineered constructs. The original model for tissue engineering focused largely on the isolation of tissue from the organ of interest, the growth and expansion of the tissue-specific cells, and the seeding of these cells onto three-dimensional scaffolds. Just a few decades ago, most primary cultures of human cells could not be grown and expanded in large quantities, representing a major impediment to the engineering of human tissues. However, the identification of specific tissue progenitor cells in the 1990s allowed expansion of multiple cell types, and progress has occurred steadily since then. Some cell types are more amenable to expansion than others, reflecting in part their native regenerative capacity but also varying requirements for nutrients, growth factors, and cell–cell contacts.
As an example of progress, after years of effort, protocols for the growth and expansion of human cardiomyocytes are now available. However, many tissue-specific cell types still cannot be expanded from tissue sources, including those of the pancreas, liver, and nerves. The discovery of pluripotent or highly multipotent stem cells (Chap. 88) may ultimately allow most human cell types to be used for tissue engineering. Stem cell characteristics depend on their origin and their degree of plasticity, with cells from the earliest developmental stages, such as embryonic stem cells, having the greatest plasticity. Induced pluripotent stem cells have the advantage that they can be derived from individual patients, allowing autologous transplants. They can also be differentiated, in vitro, along cell-specific lineages, although these protocols are still at an early stage of development. Human embryonic and induced pluripotent stem cells have a very high replicative potential, but they also have the potential for rejection and tumor formation (e.g., teratomas). The more recently described amniotic fluid and placental stem cells have a high replicative potential but without an apparent propensity for tumor formation. Moreover, they have the potential to be used in an autologous manner without rejection. Adult stem cells, such as those derived from bone marrow, also have less propensity for tumor formation and, if used in an autologous manner, will not be rejected, but their replicative potential is limited, especially for endoderm and ectoderm cells. Stem cells can be derived from autologous or heterologous sources. Heterologous cells can be used when only temporary coverage is needed, such as replacing skin after a burn or wound. However, if a more permanent construct is required, autologous cells are preferred to avoid rejection. There are also practical issues related to tissue sources.
For example, if a patient presents with end-stage heart disease, obtaining a cardiac tissue biopsy for cell expansion is unlikely to be feasible, and bone marrow–derived mesenchymal cells may provide an alternative. The biomaterials used to create the scaffolds for tissue engineering require specific properties to enhance the long-term success of the implanted constructs. Ideally, the biomaterials should be biocompatible; elicit minimal inflammatory responses; have appropriate biomechanical properties; and promote cell attachment, viability, proliferation, and differentiated function. Ideally, the scaffolds should replicate the biomechanical and structural properties of the tissue being replaced. In addition, biodegradation should be controlled such that the scaffold retains its structural integrity until the cells deposit their own matrix. If the scaffolds degrade too quickly, the constructs may collapse. If the scaffolds degrade too slowly, fibrotic tissue may form. Also, the degradation of the scaffolds should not alter the local environment unfavorably, because this can impair the function of cells or newly formed tissue. The first scaffolds designed for tissue regeneration were naturally derived materials, such as collagen. The first artificially derived material for tissue engineering used a biodegradable scaffold made of polyglycolic acid. Naturally derived scaffolds have properties very similar to the native matrix, but there is an inherent batch-to-batch variability, whereas the production of artificially derived biomaterials can be better controlled, allowing for more uniform results. More recently, combination scaffolds, made of both naturally and artificially derived biomaterials, have been used for tissue engineering. An emerging area is the use of peptide nanostructures to facilitate tissue engineering. 
Some of these are self-assembling peptide amphiphiles that allow scaffolds to form in vivo, for example at sites of spinal cord injury where they have been used experimentally to prevent scar formation and facilitate nerve and blood vessel regeneration. Peptide nanostructures can be combined with other biomaterials, and they can be linked to growth factors, antibodies, and various signaling molecules that can modulate cell behavior during organ regeneration. Implanted tissue-engineered constructs require adequate vascularity and innervation. Judah Folkman, a pioneer in the field of angiogenesis, made the observation that cells could survive in volumes up to 3 mm3 via nutrient diffusion alone, but larger cell volumes required vascularization for survival. Adequate vascularity was also essential for normal innervation to occur. This was a major challenge in the field of tissue engineering, which largely depended on the patient’s native angiogenesis and innervation. Even if sufficient cell quantities are available, there is a theoretical limit on the types of tissue constructs that could be created. In response to this challenge, material scientists designed scaffolds with much greater porosity and architecture. Scaffold designs included the creation of thin, porous sponges comprised of 95% air, markedly increasing the surface area for the resident cells. These properties promoted increased vascularity and innervation. The addition of growth factors, such as vascular endothelial growth factor and nerve growth factor, has been used to enhance angiogenesis and innervation. All human tissues are complex. However, from an architectural aspect, tissues can be categorized under four levels. Flat tissue structures, such as skin, are the least complex (level 1), comprised predominantly of a single epithelial cell type. 
Tubular structures, such as blood vessels and the trachea, are more complex architecturally (level 2) and must be constructed so that the structure does not collapse over time. These tissues typically have two major cell types. They are designed to act as conduits for air or fluid at a steady state within a defined physiologic range. Hollow nontubular organs, such as the stomach, bladder, or uterus, are more complex architecturally (level 3). Their cells are functionally more complex, and these cell types often have a functional interdependence. By far the most complex are the solid organs (level 4), because the number of cells per square centimeter is exponentially greater than in any of the other tissue types. For the first three tissue levels (1–3), when the constructs are initially implanted, the cell layering on the scaffolds is thin, not unlike that seen in tissue culture matrices. The cell layering continues to mature, in concert with the recipient's native angiogenesis and neoinnervation. For level 4 (solid) organs, the vascularity requirements are substantial, and native tissue angiogenesis is not sufficient. The engineering strategies for tissues vary according to their complexity level. The basic principles of tissue engineering involve the use of relevant cell populations, whose biology is well understood and which can be reproducibly retrieved and expanded, together with optimized biomaterials and scaffold designs. Cell seeding can be performed using various techniques, including static or flow-based systems that use bioreactors. Most techniques for the engineering of tissues fall under one of five strategies (Fig. 92e-1): 1. Scaffolds can be used alone, without cells, and implanted, where they depend on native cell migration onto the scaffold from the adjacent tissue for regeneration. The first use of decellularized scaffolds for tissue regeneration was for urethral reconstruction.
These techniques work best when the size of the defect is relatively small, usually <0.5 cm from each tissue edge. Larger defects tend to heal by scarring, due to the deposition of fibroblasts and eventual fibrosis. Scaffolds alone have also been used for other applications, including wound coverage, soft tissue coverage after joint surgery, urogynecologic sling surgery, and hernia repair.

[Figure 92e-1 Strategies for tissue and organ engineering. (Level 4 examples: heart, kidney, liver, lung.)]

2. A more recent strategy in tissue engineering involves the use of proteins, cytokines, genes, or small molecules that induce in situ tissue regeneration, either alone or with the use of scaffolds. For example, gene transcription factors used in the mouse pancreas led to tissue regeneration. Surgically implanted decellularized heart valve scaffolds, coated with proteins that attract vascular stem cells, led to the creation of in situ cell-seeded functional heart valves in sheep. Drugs that induce muscle regeneration are being tested clinically. Small molecules that induce tissue regeneration are currently under investigation for multiple applications, including growth of skin and hair and musculoskeletal applications. 3. The most common strategy for the engineering of tissues uses scaffolds seeded with cells. The most direct and established type of tissue engineering uses flat scaffolds, either artificial or naturally derived, that are seeded with cells and used for the replacement or repair of flat tissue structures. The flat scaffolds can also be sized and molded at the time of surgical implantation, or they can be shaped prior to cell seeding, for example, for tubular organs such as blood vessels or nontubular hollow tissues such as bladders.
Bioreactors are often used to expose the cell–scaffold construct to mechanical forces, such as stress, strain, and pulsatile flow, that aid in the normal development of the cells into tissues (Video 92e-1, engineered heart valve in a pulsatile bioreactor showing the valves opening and closing). This strategy is the most common method used for tissue regeneration to date, and tissues and organs, such as skin, blood vessels, urethras, tracheas, vaginas, and bladders, have been engineered and implanted in patients using these techniques. 4. The fourth strategy in tissue engineering is applicable to solid organs: discarded organs are exposed to mild detergents and decellularized, leaving behind a three-dimensional scaffold that preserves its vascular tree. The scaffold can then be reseeded with the patient's own expanded vascular and tissue-specific cells. This strategy was used initially to create solid phallic structures in rabbits that were functional and able to produce offspring. Similar strategies were also used to recellularize miniature heart, liver, and kidney structures, with limited functionality to date but with an established proof of concept (Video 92e-2, a dye is injected through the portal artery of a decellularized liver showing an intact vascular tree). These techniques are currently under investigation and have not been used clinically to date. 5. The fifth strategy for tissue engineering involves the use of bioprinting. These technologies arose through the use of modified desktop inkjet printers over a decade ago. The inkjet cartridges were filled with a cell–hydrogel combination instead of ink. A rudimentary three-dimensional elevator was lowered each time the cartridge deposited the cells and hydrogel, thus building miniature solid structures, such as two-chambered heart organoids, one layer at a time.
More sophisticated bioprinters have now been built that incorporate computer-aided design (CAD) and three-dimensional printing technologies. The information to print the organ can be personalized using the patient's own imaging studies, which help to define the size and shape of the particular tissue (Video 92e-3, a modified inkjet printer shows the three-dimensional construction of a two-chambered heart and how the structure beats with the cardiomyocytes in synchrony). Bioprinting is a tool that allows a scale-up option for the production of engineered tissues. Its use is still experimental and has not been applied clinically to date. A number of engineered tissues, including architecturally flat, tubular, and hollow nontubular organs, have been implanted in patients dating back to the 1990s. These include bladders, blood vessels, urethras, vaginal organs, tracheas, and skin for permanent replacement (Table 92e-1). Various types of skin substitutes, used as temporary "living wound dressings" to cover burn areas until skin grafts could be obtained from the same patient, were implanted starting in the 1990s. However, the use of engineered skin as a permanent replacement occurred only recently. Many engineered tissues are still being used in patients under regulatory guidelines for clinical trials. To date, solid organs have not yet been engineered for clinical use. Tissue engineering is a rapidly evolving field in which new technologies are continuously being applied to achieve success. The field still faces many challenges, including the long regulatory timelines required for approval for widespread use, the need for improved scale-up production technologies, and the cost of the technologies, which involve multiple processes using biologics. Nonetheless, the list of tissues and organs being implanted in patients keeps growing, and the ability of these technologies to improve health has been demonstrated.
More patients should be able to benefit from these technologies in the coming years.

Chapter 93e World Demography of Aging
Richard M. Suzman, John G. Haaga

Population aging is transforming the world in dramatic and fundamental ways. The age distributions of populations have changed and will continue to change radically, due to long-term declines in fertility rates and improvements in mortality rates (Table 93e-1). This transformation, known as the Demographic Transition, is also accompanied by an epidemiologic transition, in which noncommunicable chronic diseases are becoming the major causes of death and contributors to the burden of disease and disability. A concomitant of population aging is the change in key ratios expressing "dependency" of one form or another—the ratio of adults in the workforce to those typically out of the workforce, such as infants, children, retired "young old" (those still active but in ways other than paid work), and the oldest old. Global aging will affect economic growth, migration, patterns of work and retirement, family structures, pension and health systems, and even trade and the relative standing of nations. Both absolute numbers (the size of an age group) and ratios (the ratio of those in working ages to dependents such as the young or retired, or the ratio of children to older people) are important. The size of age groups might affect the number of hospital beds needed, whereas the ratio of children to older people could affect the relative demand for pediatricians and geriatricians. Although the increase in life expectancy, resulting from a series of social, economic, public health, and medical victories over disease, might very well be considered the crowning achievement of the past century and a half, the increased length of life coupled with the shifts in dependency ratios presents formidable long-term challenges. The pace of the change is accelerating. In countries where the Demographic Transition began earlier, the process was slower: it took France 115 years for the proportion of the age group 65 and older to increase from 7 to 14% of the total population, and the United States will soon have completed this same increase in 69 years. But in countries that started the transition later, the process is occurring much more rapidly: Japan took 26 years to go from 7 to 14% age 65 and older, while China and Brazil are projected to require just 24 years. Sometime around the year 2020, for the first time ever, the number of people age 65 and older in the world is expected to exceed that of children under the age of 5. Around the middle of the twentieth century, the under-5 age group constituted almost 15% of the total population and the over-65 age group 5%. It took about 70 years for these two to reach equal proportions. But demographers predict it will take only another 25–30 years for the 65 and older age group to equal about 15% and be about double the number of children under age 5. By the middle of their careers, medical students in most countries should expect to be practicing in far older populations. Preparations for these changes need to begin decades in advance, and the costs and penalties for delay can be very high. Although some governments have started planning for the long term, many, if not most, have yet to begin. Population aging around the world in recent decades has followed a broadly similar pattern, starting with a decline in infant and childhood mortality that precedes a decline in fertility; at later stages, mortality at older ages declines as well.
Declining fertility began as early as the beginning of the nineteenth century in the United States and France and extended to the rest of Europe and North America and parts of East Asia by the middle of the twentieth century. Since World War II, fertility declines have started in all other world regions. In fact, more than half the world's population now lives in countries or provinces with fertility rates below the replacement level of just over two live births per woman. Mortality rates also began to change, relatively slowly at first, in Western Europe and North America during the nineteenth century. At first, changes were most evident at the youngest ages.

[Table 93e-1 Selected Indicators of Population Aging, Estimates for 2009, and Projections to 2050; Selected Regions and Countries. (a) The UN Population Division defines the Old Age Support Ratio as the number of people age 15 to 64 years for every person age 65 or older. (b) The UN includes all European regions in its overall statistics; life expectancy at birth for males ranges from 63.8 years in Eastern Europe to 77.4 years in Western Europe, and for females from 74.8 years to 83.1 years in Western Europe. Source: United Nations Population Division, World Population Ageing 2012.]

Improvements in water supply and sewage handling, as well as in nutrition and housing, accounted for most of the improvement before the 1940s, when antibiotics and vaccines and the increasing education of mothers began to make a major impact. Since the middle of the twentieth century, the "Child Survival Revolution" has spread to all parts of the world. Children almost everywhere in the world are much more likely to reach late middle age now than in previous generations. Especially since around 1960, mortality at older ages has improved steadily.
This improvement has been primarily due to advances in care of heart disease and stroke and in control of conditions like hypertension and hypercholesterolemia that lead to circulatory diseases. In some parts of the world, smoking rates have declined, and these declines have led to lower incidence of many cancers, heart disease, and stroke. The initial decline in fertility resulted in older age groups becoming a larger fraction of the total population. Declines in adult and old age mortality contributed to population aging in the later stages of the process. Life expectancy at birth—the average age to which someone is expected to live, under prevailing mortality conditions—has been calculated at around 28 years in ancient Greece, perhaps 30 years in medieval Britain, and less than 25 years in the colony of Virginia in North America. In the United States, life expectancy climbed slowly during the nineteenth century, reaching 49 years for white women by 1900. White men had a life expectancy 2 years lower than that for white women, and black Americans had a life expectancy 14 years lower than did white Americans in 1900. By the early twenty-first century, life expectancy in the United States had improved dramatically for all, with the sex gap wider and the racial gaps narrower than at the beginning of the century: 76 years for white men in 2006; 81 years for white women; and 70 and 76 years for black men and women, respectively. However, although the United States had a relatively high life expectancy compared to other high-income countries around 1980, almost all such countries have in the interim exceeded the United States in life expectancy. Female life expectancy, especially for whites in the United States, has done particularly poorly, and this has been attributed to relatively high rates of lifetime smoking. 
At later stages of the demographic transition, mortality declines at the oldest ages, leading to increases in the 65 and older population and in the oldest old, those older than age 85 years. Migration can also affect population aging. An influx of young migrants with high birth rates can slow (though not stop) the process, as it has in the United States and Canada; or the out-migration of the young leaving older people behind can accelerate aging at the population level, as it has in many rural areas of the world. Regions of the world are at very different stages of the demographic transition (Fig. 93e-1). Of a world population of 6.8 billion in 2012, approximately 11% were older than age 60 years, with Japan (32%) and Europe (22%) being the oldest regions (Germany and Italy 27% each) and the United States having 19%. The percentage of the population older than age 60 years in the United States has remained lower than in Europe, due both to modestly higher fertility rates and to higher rates of immigration. Asia has about 10% older than age 60 years, with the population giants close to the average—China (12%), Indonesia (9%), and India (7%). Middle Eastern and African countries have the lowest proportions of older people (5% or lower). Based on estimates from the United Nations Population Division, 809 million people were age 60 years or older in 2012, of whom 279 million lived in more developed countries and 530 million in less developed countries (as classified by the United Nations). The countries with the largest populations of those age 60 and older were China (181 million), India (100 million), and the United States (60 million).

[Figure 93e-1 Percentages of national populations age 60+, in 2010. (From the U.S. Census Bureau, International Database. StatPlanet Mapping Software.)]

Population projections make use of expected fertility, mortality, and migration rates and should be regarded as uncertain when applied 40 or more years in the future. However, the people who will be age 60 and older in 2050 had all been born and survived childhood by 2014, so uncertainty about their numbers (as distinct from their proportion of the total population) is not great. Comparing the maps of the world in 2010 (Fig. 93e-1) and 2050 (Fig. 93e-2), it is apparent that the middle- and low-income countries in Latin America, Asia, and much of Africa will soon be joining the "oldest" category. In less than four decades between 2012 and 2050, the United Nations Population Division projects that the world population age 60 and older will more than double to 2.03 billion, with the least developed regions more than quadrupling. China's 60+ population is projected to reach 439 million, India's 323 million, and the United States' 107 million. Over the same period, the median age of the world's population is expected to increase by 10 years.

[Figure 93e-2 Percentages of national populations age 60+, in 2050 (projections). (From the U.S. Census Bureau, International Database. StatPlanet Mapping Software.)]

Current global life expectancy at birth is estimated to be 65.4 years for men and 69.8 years for women, with the comparable figures for the more developed regions being 73.6 and 80.5 years. Life expectancy in the least developed countries averages only 57.2 years for women and 54.7 years for men. Life expectancy at birth is heavily influenced by infant and child mortality, which is considerably higher in poor countries. At older ages, the gap between rich and poor nations is narrower: while women who have reached age 60 in wealthy countries can expect 23.7 more years of life on average, women at age 60 in poor countries can expect 16.8 more years on average—a significant difference but not so stark as the difference in life expectancy at birth. At the lowest levels of per capita gross national product (GNP), life expectancy shows a powerful positive association with this measure of economic development, but then the slope of the relationship flattens out; for countries with average incomes above about $20,000 per year, life expectancy is not closely related to income. At each level of economic development, there is significant variation in life expectancy, indicating that many other factors influence life expectancy.

Japan, France, Italy, and Australia currently have some of the highest life expectancies in the world, while the United States has lagged behind other high-income countries since about 1980, especially in the case of white women. The causes of this lag are being explored, but the cumulative number of years that people have smoked tobacco by the time they reach older ages and the prevalence of obesity appear to play important roles.

A modern feature of population aging has been the almost explosive growth of the age group known as the oldest old, variously defined as those over age 80 or age 85. This is the age group with the highest burden of noncommunicable degenerative disease and related disability. Thirty years ago, this group attracted little attention because they were hidden within the overall older population in most statistical reports; for example, the U.S. Census Bureau merged them into a 65+ category. The reduction of mortality at older ages coupled with larger birth cohorts surviving into old age led to the rapid growth of the oldest old. This age group is predicted to grow at a significantly higher rate than the 60+ population, and one estimate has the current 102 million age 80+ increasing to almost 400 million by 2050 (Table 93e-2). Projected increases are astounding: China's 80+ population might increase from 20 to 96 million, India's from 8 to 43 million, the United States' from 12 to 32 million, and Japan's from 9 to 16 million. The numbers of centenarians are increasing at an even faster rate.

[Table 93e-2 Estimates (2012) and Projections (2050) for the Population Aged 80 Years and Older: Selected Regions]

The members of the population who could potentially become age 80 and older in 2050 are already alive today. The actual numbers of people who will be age 80 and older in 2050 will therefore depend almost solely on adult and old age mortality rates over the next 35 years. The history of the decline of mortality suggests that improvements in the standard of living, including increased and improved education and improved nutrition, coupled with improvements in public health stemming from an understanding of the germ theory of disease, initially led to the decline in mortality, with medical achievements such as antibiotics and improved understanding of risk factors for cardiovascular and circulatory diseases becoming factors only in the post–World War II period; the largest strides in cardiovascular disease came only in more recent decades. The improvements in educational attainment of succeeding generations have been credited in large part for improvements in child mortality during the past century, because educated mothers are especially likely to understand and take advantage of measures to reduce infection. The effects of continuing progress will likely be seen in coming decades as well, because educational attainment is associated with improved health and survival at older ages. Countries vary in the extent to which the "future elderly" cohorts will be more educated. China in particular will have a far better educated older population in 2050 (with more than two-thirds projected to have at least some secondary school) than it did in 2000 (when only 10% of older people had a secondary education). In the United States and other rich nations, these changes in educational attainment of the elderly population will be less dramatic.

Holding aside the possibility of new infectious diseases ravaging populations as AIDS did in some African countries, debates about future life expectancy revolve around the balance and influence of risk factors such as obesity; the possibility of reducing the deaths from current killers such as cancer, heart disease, and diabetes; whether there is some natural limit to life expectancy; and the distant though nonzero possibility that science will find a way to slow the basic processes of aging. While some have posited natural limits to human life expectancy, the limits have been surpassed with some regularity, and at the very oldest ages in the leading countries with the highest life expectancy, there appears to be little evidence of any approaching asymptote. Indeed, a surprising discovery was that life expectancy in the leading country over the last century and a half, with different countries taking the lead in different epochs, could be represented almost perfectly by a straight line, with the value for females showing a steady and astonishing increase of three months per year, or 2.5 years per decade (Fig. 93e-3). No single country kept that pace of improvement the entire time, but this trend calls into question the notion that improvement must slow down, at least in the near future.

There remains a great deal of diversity in health conditions both among and within national populations. There is nothing inevitable about the mortality transition—in several African countries, the prevalence of AIDS has been high enough to cause life expectancy to fall below the levels of 1980. Though none has so far reached a scale to rival the AIDS epidemic, periodic outbreaks of new influenza viruses or "emerging infectious" agents remind us that infectious diseases could again come to the fore.
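The straight-line trend in record life expectancy noted above (about three months per year, or 0.25 years per calendar year) can be sketched as a simple linear extrapolation. The 2000 baseline of 85 years used below is an illustrative assumption, not a figure from the chapter, and the function name is hypothetical.

```python
def projected_record_life_expectancy(base_year: int, base_le: float, year: int) -> float:
    """Extrapolate best-practice female life expectancy linearly at 0.25 years per calendar year."""
    return base_le + 0.25 * (year - base_year)

# Assumed (illustrative) baseline: a record of about 85 years in 2000.
print(projected_record_life_expectancy(2000, 85.0, 2050))  # 97.5
```

Whether any country actually sustains this pace is exactly the open question the text raises; the sketch only shows what the historical best-practice line implies if extended.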
Progress against chronic disease is also reversible: in Russia and some other countries that formed part of the Soviet Union before 1992, life expectancy for men has been declining, now reaching levels below those of men in South Asia. Much of the gap between Russian and Western European men is explained by much greater heart disease and injuries among the former. Ratios of different age groups provide useful though crude indicators of potential demands on resources and resource availability. One set of ratios, known variously as dependency or support ratios, compares the age groups most likely to be in the labor force with the age groups typically dependent on the productive capacity of those working—the young and the old, or just the old. A commonly used ratio is the number of persons age 15–64 per person age 65 and older. Even though many people in some countries do not enter the labor force until significantly older than age 15, retire before age 65, or work past age 65, these ratios do summarize important facts, especially in countries where financial support for the retired comes partially or mainly from those currently in the labor force, through either a formal pension system or informal support from the family. While many countries still have very basic pension systems with incomplete coverage, in Europe public pensions are quite generous, and these countries face dramatic changes in their ratios of working age to older populations. Over the next 40 years, Western Europe faces a drop in the ratio from 4 to 2. In other words, while in crude terms there are today 4 workers supporting the pensions and other costs of each older person, by 2050 there will be only 2. China faces an even steeper drop, from 9 persons of working age to only 3, while Japan declines from 3 to just 1. Even in India, projected to become the most populous country, the decline is quite steep, from 13 to 5.
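The old-age support ratio described above (persons age 15–64 per person age 65 and older) reduces to a one-line calculation. The populations below are invented for illustration, and the function name is an assumption of this sketch.

```python
def support_ratio(pop_15_64: float, pop_65_plus: float) -> float:
    """Old-age support ratio: persons age 15-64 per person age 65 and older."""
    return pop_15_64 / pop_65_plus

# Hypothetical country (figures invented): 40 million working-age, 10 million age 65+.
today = support_ratio(40e6, 10e6)   # 4.0 workers per older person
# If the 65+ group doubles while the working-age group stays flat:
later = support_ratio(40e6, 20e6)   # 2.0, the kind of halving Western Europe faces
print(today, later)
```

The halving from 4 to 2 in this toy example mirrors the Western European projection cited in the text; the Chinese and Japanese drops correspond to the same arithmetic with different inputs.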
The dramatically declining number of workers per older person (however determined) is at the crux of the economic challenge of population aging. The extra years of life that can be considered the crowning achievement in medicine and public health of the last 150 years have to be financed. The economic model of the life cycle assumes that people are economically productive for a limited number of years and that the proceeds of their work during those years have to be smoothed over to finance consumption during less economically productive ages, either within families or by institutions such as the state in order to provide for the young, the old, and the infirm. There are only so many ways to meet the challenge of an extended period of dependency, including increasing the productivity of those in the labor force, saving more, reducing consumption, increasing the number of years worked by increasing the age of retirement, increasing the voluntary nonmonetary productive contributions of the retired, and immigration of very large numbers of young workers into the “old” countries. Pressures to increase retirement ages in industrialized countries and to reduce benefits are increasing. But no single one of these measures can bear the full load of adaptation to population aging, since the changes would have to be so severe and disruptive as to be politically impossible. More likely, there will be some combination of these measures. Population health and the ability to function at work and in everyday life interact with these population ratios in significant ways. The physical and cognitive capacity to continue to work at older ages is crucial if the age of retirement is raised. Similarly, caregiving often requires significant physical and emotional stamina. Further, healthier older populations require less caregiving and medical services. Just two decades ago, the prevalent view of aging was highly pessimistic. 
Epidemiologists held that while modern medicine could keep older people alive, nothing much could be done to prevent, delay, or significantly treat the degenerative chronic diseases of aging. The result would be that more and more older people with chronic diseases would be kept from dying, with the consequent piling up of older people disabled by chronic disease. Surprisingly, between 1984 and about 2000, the prevalence of disability in the 65+ population in the United States declined by about 25%, suggesting that in this respect, aging was more plastic than had previously been believed (Fig. 93e-4).

[Figure 93e-4 Disability prevalence, various years 1982–2005, by age group over 65, United States. (Adapted from KG Manton et al: Proc Natl Acad Sci U S A 103:18374, 2006.)]

Not all the causes of this significant shift in disability are yet understood, but rising levels of education, improved treatment of cardiovascular diseases and cataracts, greater availability of assistive devices, and less physically demanding occupations have been found to contribute. One calculation showed that if the rate of improvement could be maintained until 2050, the number of disabled in the older population could be kept constant in the United States despite the aging of the baby boomers and the older population itself growing older. Unfortunately, the rapid increase in obesity rates could slow and perhaps even reverse this most positive trend. Because of the absence of comparable data in other countries, it is less certain whether the same pattern of improvement in disability rates (with recent deceleration) is occurring outside of the United States. Using estimates and projections of disease prevalence from the Global Burden of Disease Study, the global population of those "dependent and in need of care" is projected to rise from about 350 million in 2010 to over 600 million in 2050.
Worldwide, about half of the older persons in need of care (two-thirds of the dependent population age 90 and above) suffer from dementia or cognitive impairment. A global network of longitudinal studies on aging, health, and retirement is now providing comparable data that may allow more definitive projections of disease and disability trends in the future. One estimate (World Alzheimer's Report 2010) projected that the 36 million people with dementia worldwide in 2010 would increase to 115 million by 2050. The largest increases would occur in low- and middle-income countries, where about two-thirds of those with dementia already live. The estimated costs were $604 billion in 2010, with 70% occurring in North America and Western Europe. A 2013 study using a nationally representative U.S. sample found that annual dementia costs could be as high as $215 billion. Direct costs of dementia care exceeded the direct costs for either heart disease or cancer. Given the age-associated prevalence of dementia and the expected increase in the older population, coupled with the associated decline in family members able to provide care, countries need to plan for a pandemic of individuals requiring long-term care. Population aging, and related demographic changes including changes in family structure, could affect the "supply side" of long-term care as well as the demand for care and health care. In every country, long-term care of the disabled and the chronically ill relies heavily on informal, typically unpaid caregivers—usually spouses or children; and increasingly in more developed countries, caregivers for the oldest old are in their 60s and early 70s. Although there are many men who provide care, on the population level, informal caregiving is still mainly done by women. Because women live longer than men, lack of a spousal caregiver is especially likely to be a problem for older women.
Both men and women have fewer children on whom they can call for informal caregiving because of the worldwide decline in fertility rates. An increasing proportion of older men in Europe and North America have spent much or all of their adult lives apart from their biological children. Lower fertility rates, delayed marriage, and increasing divorce rates mean that people approaching old age may be less likely to have close ties with daughters and daughters-in-law, the adults who have in the past been the most common caregivers apart from spouses. Adult women who in the past provided uncompensated care (and much other essential volunteer work) are now more likely to be working for pay and thus have fewer hours to devote to these unpaid roles. These broad demographic and economic trends do not, of course, dictate particular social adaptations or policy responses. One can imagine many different responses to the challenge of caring for the disabled: increased reliance on home health agencies and assisted living communities; "naturally occurring retirement communities" in which neighbors fulfill many of the roles once reserved for close kin; and private or even publicly financed direct payments to compensate formerly unpaid family caregivers (a reform that has proved very popular in Germany). These and other responses to the challenge of long-term care are being tested in aging countries, and continued experimentation will no doubt be needed. The secular improvements in ages at death have been accompanied by changes in causes of death. In the broadest terms, the proportion of deaths due to infectious disease and to conditions associated with pregnancy and delivery has fallen, and the proportion due to chronic, noncommunicable diseases, such as heart and cerebrovascular diseases, diabetes, cancers, and age-related neurodegenerative diseases such as Alzheimer's and Parkinson's diseases, has increased steadily and is expected to continue to increase.
Figure 93e-5 shows results from an international comparative project that drew on a wide variety of data sources to provide estimates of the global burden of disease at the beginning of this century, with projections to future years based on recent trends in disease prevalence and demographic rates. Burden of disease in these pie charts is a composite measure, one that takes into account both the number of deaths due to a particular disease or condition and the timing of such deaths: an infant death represents a loss of more potential life-years lived than does the death of a very old person. Nor is death the only outcome that matters; most diseases or conditions cause significant disability and suffering even when nonfatal, so this measure of burden captures nonfatal outcomes using statistical weighting. As Table 93e-3 shows, the "modern plagues" of chronic noncommunicable diseases are already among the leading causes of premature death and disability even in low-income countries.

FIGURE 93e-5 Leading causes of burden of illness in world regions, 2002 and projected for 2030, grouped as communicable, maternal, perinatal, and nutritional conditions; noncommunicable diseases; and injuries. (Adapted from CD Mathers, D Loncar: PLoS Med 3:e442, 2006.) Note: "Burden of disease" takes into account years of life lost due to death from a cause and also a weighted estimate for years spent with disability, pain, or impairments due to the condition. These estimates were aggregated from many different national reporting systems and special surveys or surveillance systems, with adjustments for incomplete coverage and different reporting schemes, as part of the Global Burden of Disease Study 2010, which updated previous global estimates. Abbreviation: COPD, chronic obstructive pulmonary disease. Source: CJL Murray et al: Lancet 380:2197, 2013, Fig. 5.
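The composite weighting described above can be sketched numerically. Below is a minimal DALY-style calculation; all numbers are invented for illustration (the Global Burden of Disease Study uses detailed reference life tables and condition-specific disability weights):

```python
# DALY-style "burden of disease" sketch: burden = years of life lost
# to early death (YLL) plus years lived with disability (YLD), with
# disability down-weighted by severity. All numbers are invented for
# illustration only.
LIFE_EXPECTANCY = 80  # assumed reference life expectancy (years)

def yll(age_at_death):
    """Years of potential life lost from one death."""
    return max(LIFE_EXPECTANCY - age_at_death, 0)

def yld(years_with_condition, disability_weight):
    """Disability years, weighted from 0 (full health) to 1 (death)."""
    return years_with_condition * disability_weight

# An infant death "costs" far more life-years than a death at 78,
# and a nonfatal chronic condition still adds burden:
print(yll(1), yll(78), yld(20, 0.3))  # 79 2 6.0
```

This is why, in Fig. 93e-5, nonfatal but disabling conditions contribute substantially to the pie charts alongside the major fatal diseases.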
This is due to a mix of factors: lower fertility rates mean fewer infants and children at the prime ages of susceptibility to infections; more people are reaching the older ages at which chronic disease incidence is high; and incidence rates themselves are often changing because of increased exposure to tobacco, Western diets, and inactivity. Noncommunicable diseases, once thought of as "diseases of affluence," are projected to account for more than half of the disease burden even in low- and middle-income countries by the year 2030 (Fig. 93e-5). Population aging is a global phenomenon with profound short- and long-term implications for health and long-term care needs, and indeed for the economic and social well-being of nations. The timing and context of aging vary across and within world regions and countries; the industrialized nations became wealthy before they aged significantly, while many of the low-resource regions will age before they reach high-income levels. The variation at both the population and individual levels indicates that there is much flexibility in successful aging, but meeting the challenges will require advance planning and preparation. The extent to which research can find solutions that reduce physical and cognitive disability at older ages will determine how countries cope with this fundamental transformation.

Chapter 93e World Demography of Aging

94e The Biology of Aging
Rafael de Cabo, David G. Le Couteur

Aging and old age are among the most significant challenges facing medicine this century. The aging process is the major risk factor underlying disease and disability in developed nations, and older people respond differently to therapies developed for younger adults (usually with less effectiveness and more adverse reactions). Modern medicine and healthier lifestyles have increased the likelihood that younger adults will now achieve old age.
However, this has led to rapidly increasing numbers of older people, often encumbered with age-related disorders that are predicted to overwhelm health care systems. Improved health in old age and further extension of the human healthspan are now likely to result primarily from increased understanding of the biology of aging, of age-related susceptibility to disease, and of modifiable factors that influence the aging process.

Definitions of Aging Aging is easy to recognize but difficult to define. Most definitions of aging indicate that it is a progressive process associated with declines in structure and function, impaired maintenance and repair systems, increased susceptibility to disease and death, and reduced reproductive capacity. There are both statistical and phenotypic components to aging. As recognized by Gompertz in the nineteenth century, aging in humans is associated with an exponential increase in the risk of mortality with time (Fig. 94e-1), although it is now realized that this risk plateaus in extreme old age because of healthy-survivor bias. The phenotypic components of aging include structural and functional changes that are separated, somewhat artificially, into either primary aging changes (e.g., sarcopenia, gray hair, oxidative stress, increased peripheral vascular resistance) or age-related disease (e.g., dementia, osteoporosis, arthritis, insulin resistance, hypertension). Definitions of aging rarely acknowledge the possibility that some of the biological and functional changes of aging might be adaptive or even reflect improvement and gain. Nor do they emphasize the effect of aging on responses to medical treatments. Old age is associated with increased vulnerability to many perturbations, including therapeutic interventions. This is a critical issue for clinicians; the problem of aging would be more limited if our disease-specific therapies retained their balance of risk to benefit into old age.
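Gompertz's observation can be written as a hazard that grows exponentially with age, mu(t) = A·e^(Gt). The sketch below uses illustrative parameter values (A and G are assumptions, not values fitted to the data in Fig. 94e-1) to show the model's defining property, a constant hazard ratio across any fixed age gap:

```python
import math

# Gompertz mortality model: the annual hazard of death grows
# exponentially with age, mu(t) = A * exp(G * t).
# A and G below are illustrative assumptions only.
A = 1e-4             # assumed baseline annual mortality hazard
G = math.log(2) / 8  # assumed rate constant: hazard doubles every ~8 years

def gompertz_hazard(age):
    """Annual mortality hazard at a given age under the Gompertz model."""
    return A * math.exp(G * age)

# The model's signature: the hazard ratio over any fixed age gap is
# constant, so mortality risk keeps doubling on the same rhythm.
print(round(gompertz_hazard(40) / gompertz_hazard(32), 3))  # 2.0
print(round(gompertz_hazard(80) / gompertz_hazard(72), 3))  # 2.0
```

The late-life plateau mentioned in the text is a departure from this pure exponential, usually attributed to healthy-survivor bias rather than to a change in the underlying model.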
Aging and Disease Susceptibility Old age is the major independent risk factor for the chronic diseases (and associated mortality) that are most prevalent in developed countries, such as cardiovascular disease, cancers, and neurodegenerative disorders (Fig. 94e-2). Consequently, older people have multiple comorbidities, usually in the range of 5 to 10 illnesses per person. Disease in older people is typically multifactorial, with a strong component related to the underlying aging process. For example, in younger patients with dementia, Alzheimer's disease is a single disorder confirmed by examining brain tissue for plaques and tangles containing amyloid and tau proteins. However, the vast majority of people with dementia are elderly, and here the association between typical Alzheimer's neuropathology and dementia becomes less definitive. In the oldest-old, the prevalence of Alzheimer's-type brain pathology is similar in people with and without clinical features of dementia. On the other hand, brains of older people with dementia usually show mixed pathology, with evidence of Alzheimer's pathology along with features of other dementias such as vascular lesions, Lewy bodies, and non-Alzheimer's tauopathy. Typical aging changes, such as microvascular dysfunction, oxidative injury, and mitochondrial impairment, underlie many of the pathologic changes.

FIGURE 94e-2 The rates of the most common chronic diseases and related mortality increase with old age. (Data from USA 2008–2010, CDC.)

The Longevity Dividend Compression of morbidity refers to the concept that the burden of lifetime illness might be compressed by medical interventions into a shorter period before death without necessarily increasing longevity. However, continuing development of successful therapeutic and preventive interventions focusing on individual diseases is less effective in older people because of multiple comorbidities, complications of overtreatment, and competing causes of death.
Therefore, it has been proposed that further gains in healthspan and life expectancy will be achieved by a single intervention that delays aging and age-related disease susceptibility, rather than by multiple treatments each targeting a different individual age-related illness. This is called the longevity dividend and is driving an explosion of research into aging biology and, more importantly, into interventions (genetic, pharmaceutical, and nutritional) that influence the rate of aging and delay age-related disease. At the most basic level, living things have only two approaches to maintain their existence: immortality or reproduction. In a changing environment, reproduction combined with a finite lifespan has proved to be the successful strategy. Of course, finite lifespan is not the same as aging, although aging, by definition, contributes to a finite lifespan. Many evolutionary theories related to aging are linked by their attempts to explain this interaction between reproduction and longevity (Fig. 94e-3). Most mainstream aging theories stem from the fact that evolution is driven by early reproductive success, whereas there is minimal selection pressure for late-life reproduction or postreproductive survival.

FIGURE 94e-1 The rates of death in the United States (2010), showing the exponential increase in mortality risk with chronologic age.

FIGURE 94e-3 Schema linking evolution and cellular and tissue changes with aging (tissue changes that predispose to disease: immunosenescence and inflammaging, detoxification, the endocrine system, and the vascular system). The call-out blue boxes indicate factors that might delay the aging process, including nutrient response pathways and, possibly, adaptive evolutionary effects (the grandmother effect, adaptive senectitude).
Aging is seen as the random degeneration resulting from the inability of evolution to prevent it, i.e., the nonadaptive consequence of evolutionary "neglect." This conclusion is supported by studies that restricted reproduction to later life in the fruit fly, Drosophila melanogaster, thus permitting natural selection to operate on later-life traits and leading to an increase in longevity. There are some species of plants and animals that do not appear to age, or at least undergo an extremely slow aging process, termed "negligible senescence." The mortality rates of these species are relatively constant with time, and they do not display any obvious phenotypic changes of aging. Conversely, there are some living things that undergo programmed death immediately after reproduction, such as annual plants and semelparous animals (Fig. 94e-4). However, many other living things, from yeast to humans, undergo a gradual aging process leading to death that is surprisingly similar at the cellular and biochemical level across taxa.

FIGURE 94e-4 The typical features of aging (the aging phenotype and an exponential increase in risk of death) are not universal findings in living things. Some living things (e.g., the rougheye rockfish and the bristlecone pine, sometimes called the Methuselah tree) undergo negligible senescence, whereas others (e.g., semelparous animals such as the Pacific salmon and annual plants such as the sunflower) die almost immediately after reproduction is completed.

Some of the major classical evolutionary theories of aging include the following: Programmed death. The first evolutionary theory of aging was proposed by Weismann in 1882. This theory states that aging and death are programmed and have evolved to remove older animals from the population so that environmental resources such as food and water are freed up for younger members of the species. Mutation accumulation.
This theory was proposed by Medawar in 1952. Natural selection is most powerful for those traits that influence reproduction in early life, and therefore, the ability of evolution to shape our biology declines with age. Germline mutations that are deleterious in later life can accumulate simply because natural selection cannot act to prevent them. Antagonistic pleiotropy. George C. Williams extended Medawar’s theory when he proposed that evolution can allow for the selection of genes that are pleiotropic, i.e., beneficial for survival and reproduction in early life, but harmful in old age. For example, genes for sex hormones are necessary for reproduction in early life but contribute to the risk of cancer in old age. Life history theory. Evolution is influenced by the way that limited resources are allocated to all aspects of life including development, sexual maturation, reproduction, number of offspring, and senescence and death. Therefore, “trade-offs” occur between these phases of life. For example, in a hostile environment, survival is highest for those species that have large numbers of offspring and short lifespan, whereas in a safe and abundant environment, survival is highest for those species that invest resources in a smaller number of offspring and a longer life. Disposable soma theory. Kirkwood and Holliday in 1979 combined many of these ideas in the disposable soma theory of aging. There are finite resources available for the maintenance and repair of both germ and soma cells, so there must be a trade-off between germ cells (i.e., reproduction) and soma cells (i.e., longevity and aging). The soma cells are disposable from an evolutionary perspective, so they accumulate damage that causes aging while resources are preferentially diverted to the maintenance and repair of the germ cells. For example, the longevity of the nematode worm, Caenorhabditis elegans, is increased when its germ cells are ablated early in life. 
All of these theories assume that natural selection has negligible or negative influences on aging. Some postmodern ideas propose that aspects of aging might be adaptive and raise the possibility that evolution can act on the aging process in a positive way. These include the following: Grandmother hypothesis. The grandmother hypothesis proposed by Hamilton in 1966 describes how evolution can enhance old age. In some animals, including humans, the survival of multiple, dependent offspring is beyond the capacity and resources of a single parent. In this situation, the presence of a long-lived grandmother who shares in the care of her grandchildren can have a major impact on their survival. These children share some of the genes of their grandmother including those that promoted their grandmother’s longevity. Mother’s curse. Mitochondrial dysfunction is a key component of the aging process. Mitochondria contain their own DNA and are only passed on from mother to child because sperm cells contain almost no mitochondria. Therefore, natural selection can only act on the evolution of mitochondrial DNA in females. The “mother’s curse” of the maternal inheritance of mitochondrial DNA might explain why females live longer and age more slowly than males. Adaptive senectitude. Many traits that are harmful in younger humans such as obesity, hypertension, and oxidative stress paradoxically appear to be associated with greater survival and function in very old people. Perhaps driven by the grandmother effect, this might represent “adaptive senectitude” or “reverse antagonistic pleiotropy,” whereby some traits that are harmful in young people become beneficial in older people. There are many cellular processes that change with aging. These are generally considered to be degenerative and stochastic or random changes that reflect some sort of time-dependent damage (Fig. 94e-3). 
Whether any of these is the root cause of aging is unknown, but they all contribute to the aging phenotype and to disease susceptibility.

Oxidative Stress and the Free Radical Theory of Aging Free radicals are chemical species that are highly reactive because they contain unpaired electrons. Oxidants are reactive oxygen species and include the hydroxyl free radical, superoxide, and hydrogen peroxide. Most cellular oxidants are waste products generated by mitochondria during the production of ATP from oxygen. More recently, the role of oxidants in cellular signaling and inflammatory responses has been recognized. Unchecked, oxidants can generate chain reactions leading to widespread damage to biological molecules. Cells contain numerous antioxidant defense mechanisms to prevent such oxidative stress, including enzymes (superoxide dismutase, catalase, glutathione peroxidase) and chemicals (uric acid, ascorbate). In 1956, Harman proposed the "free radical theory of aging," whereby oxidants generated by metabolism or irradiation are responsible for age-related damage. It is now well established that old age in most species is associated with increased oxidative stress, for example, to DNA (8-hydroxyguanosine derivatives), proteins (carbonyls), lipids (lipoperoxides, malondialdehydes), and prostaglandins (isoprostanes). Conversely, many of the cellular antioxidant defense mechanisms, including the antioxidant enzymes, decline in old age. The free radical theory of aging has spawned numerous studies of supplementation with antioxidants such as vitamin E to delay aging in animals and humans. Unfortunately, meta-analyses of human clinical trials performed to treat and prevent various diseases with antioxidant supplements indicate that they have no effect on, or may even increase, mortality.

Mitochondrial Dysfunction Aging is characterized by altered mitochondrial production of ATP and of oxygen-derived free radicals.
This leads to a vicious cycle mediated by the accumulation of oxidative injury to mitochondrial proteins and DNA. With age, the number of mitochondria in cells decreases, and there is an increase in their size (megamitochondria) associated with other structural changes, including vacuolization and disrupted cristae. These morphologic aging changes are linked with decreased activity of mitochondrial complexes I, II, and IV and decreased ATP production. Of all of the complexes involved in ATP production, the activity of complex IV (COX) is usually reported to be most impaired in old age. Reduced energy production is linked with the generation of hydrogen peroxide and superoxide radicals, leading to oxidative injury to mitochondrial DNA and accumulation of carbonylated mitochondrial proteins and mitochondrial lipoperoxides. As well as being implicated in the aging process, common geriatric syndromes, including sarcopenia, frailty, and cognitive impairment, are associated with mitochondrial dysfunction.

Telomere Shortening and Replicative Senescence Cells that are isolated from animal tissue and grown in culture divide only a certain number of times before entering a senescent phase. This number of divisions is known as the Hayflick limit and tends to be lower in cells isolated from older animals than in those from younger animals. It has been suggested that aging in vivo might in part be secondary to some cells ceasing to divide because they have reached their Hayflick limit. One mechanism for replicative senescence relates to telomeres. Telomeres are repeat sequences of DNA at the ends of linear chromosomes that shorten by around 50–200 base pairs during each cell division by mitosis. Once telomeres become too short, cell division can no longer occur. This mechanism contributes to the Hayflick limit and has been called the cellular clock.
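The arithmetic behind the cellular clock is simple. Using the 50–200 bp per division attrition range quoted above, and assuming an illustrative starting telomere length and critical threshold (both lengths are assumptions, not values from the text), one can bound the number of divisions:

```python
# Back-of-the-envelope "cellular clock": how many mitoses before
# telomeres reach a critical length? The 50-200 bp loss per division
# is the range quoted in the text; the starting and critical telomere
# lengths below are illustrative assumptions only.
START_BP = 10_000    # assumed initial telomere length (base pairs)
CRITICAL_BP = 4_000  # assumed length below which division halts

def max_divisions(loss_per_division_bp):
    """Divisions possible before telomeres reach the critical length."""
    return (START_BP - CRITICAL_BP) // loss_per_division_bp

# Fast attrition (200 bp/division) vs slow attrition (50 bp/division):
print(max_divisions(200), max_divisions(50))  # 30 120
```

Under these assumed lengths, the attrition range alone spans a fourfold difference in replicative capacity, which is one reason measured Hayflick limits vary so widely between cell types and donors.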
There are some studies suggesting that the length of telomeres in circulating leukocytes (leukocyte telomere length [LTL]) decreases with age in humans. However, the aging process also occurs in tissues that do not undergo repeated cell division, such as neurons.

Altered Gene Expression, Epigenetics, and microRNA There are changes in the expression of many genes and proteins during the aging process. These changes are complicated and vary between species and tissues. Such heterogeneity reflects increasing dysregulation of gene expression with age and appears to exclude a programmed and/or uniform response. With old age, there are often reductions in the expression of genes and proteins associated with mitochondrial function and increased expression of those involved with inflammation, genome repair, and oxidative stress. Several factors controlling the regulation of gene and protein expression change with aging. These include the epigenetic state of the chromosomes (e.g., DNA methylation and histone acetylation) and microRNAs (miRNAs). DNA methylation correlates with age, although the pattern of change is complex. Histone acetylation is regulated by many enzymes, including SIRT1, a protein that has marked effects on aging and on the response to dietary restriction in many species. miRNAs are a very large group of short noncoding RNAs (18–25 nucleotides) that inhibit the translation of multiple different mRNAs by binding their 3′ untranslated regions (UTRs). The expression of miRNAs usually decreases with aging and is altered in some age-related diseases. Specific miRNAs linked with aging pathways include miR-21 (associated with the target of rapamycin pathway) and miR-1 (associated with the insulin/insulin-like growth factor 1 pathway).

Impaired Autophagy There are a number of ways in which cells can remove damaged macromolecules and organelles, often generating cellular energy as a byproduct.
Intracellular degradation is undertaken by the lysosomal system and the ubiquitin proteasomal system. Both are impaired with aging, leading to the accumulation of waste products that alter cellular functions. Such waste products include lipofuscin, a brown autofluorescent pigment found within lysosomes of most cells in old age and often considered one of the most characteristic histologic features of aging cells. They also include the aggregated proteins characteristic of age-related neurodegenerative diseases (e.g., tau, β-amyloid, α-synuclein). Lysosomes are organelles that contain proteases, lipases, glycases, and nucleotidases that degrade intracellular macromolecules, membrane components, organelles, and some pathogens through a process called autophagy. The lysosomal process most impaired with aging is macroautophagy, which is regulated by numerous autophagy-related genes (ATGs). Old age is associated with some impairment in chaperone-mediated autophagy, whereas the effect of aging on the third lysosomal process, microautophagy, is unclear.

Aging changes in some tissues increase susceptibility to age-related disease as a secondary or downstream phenomenon (Fig. 94e-3). In humans, this includes, but is not limited to, the immune system (leading to increased infections and autoimmunity), hepatic detoxification (leading to increased exposure to disease-inducing endobiotics and xenobiotics), the endocrine system (leading to hypogonadism and bone disease), and the vascular system (leading to segmental or global ischemic changes in many tissues).

Inflammaging and Immunosenescence Old age is associated with increased background levels of inflammation, including blood measurements of C-reactive protein (CRP), the erythrocyte sedimentation rate (ESR), and cytokines such as interleukin 6 (IL-6) and tumor necrosis factor α (TNF-α). This has been termed inflammaging.
T cells (particularly naïve T cells) are less numerous because of age-related atrophy of the thymus, whereas B cells overproduce autoantibodies, leading to the age-related increase in autoimmune diseases and gammopathies. Thus older people are generally considered to be immunocompromised and have reduced responses to infection (fever, leukocytosis) with increased mortality. Detoxification and the Liver Old age is associated with impaired detoxification of various disease-causing endobiotics (e.g., lipoproteins) and xenobiotics (e.g., neurotoxins, carcinogens), leading to increased systemic exposure. In humans, the liver is the major organ for the clearance of such toxins. Hepatic clearance of many substrates is reduced in old age as a consequence of reduced hepatic blood flow, impaired hepatic microcirculation, and in some cases, reduced expression of xenobiotic metabolizing enzymes. These changes in hepatic detoxification also increase the likelihood of increased blood levels of, and adverse reactions to, medications. Endocrine System Hormonal changes with aging have been a focus of aging research for over a century, partly because of the erroneous belief that supplementation with sex hormones will delay aging and rejuvenate older people. There are age-related reductions in sex steroids secondary to hypogonadism and, in females, menopause. Age-related declines in growth hormone and dehydroepiandrosterone (DHEA) are well established, as is the increase in insulin levels and associated insulin resistance. These hormonal changes contribute to some features of aging such as sarcopenia and osteoporosis, which may be delayed by hormonal supplementation. However, adverse effects of long-term hormonal supplementation outweigh any potential beneficial effects on lifespan. Vascular Changes There is a continuum from vascular aging through to atherosclerotic disease, present in many, but not all, older people. 
Vascular aging changes overlap with the early stages of hypertension and atherosclerosis, with increasing arterial stiffness and vascular resistance. This contributes to myocardial ischemia and strokes but also appears to be associated with geriatric conditions such as dementia, sarcopenia, and osteoporosis. In these conditions, impaired exchange between blood and tissues is a common pathogenic factor. For example, the risk of Alzheimer's disease and dementia is increased in patients with risk factors for vascular disease, and there is pathologic evidence for microvascular changes in postmortem studies of the brains of people with established Alzheimer's disease. Similarly, strong epidemiologic links have been found between osteoporosis and standard vascular risk factors, and there are significant age-associated changes in the microcirculation of osteoporotic bone. Sarcopenia might also be related to the effects of age on the muscle vasculature, which is altered in old age. The sinusoidal microcirculation of the liver becomes markedly altered during aging (pseudocapillarization), which influences hepatic uptake of lipoproteins and other substrates. In fact, it has often been overlooked that in his original exposition of the free radical theory of aging, Harman proposed that the primary target of oxidative stress was the vasculature and that many aging changes were secondary to impaired exchange across damaged blood vessels.

There is variability in aging and lifespan in populations of genetically identical species, such as mice, that are housed in the same environment. Moreover, the heritability of lifespan in human twin studies is estimated to be only 25% (although there is a stronger hereditary contribution to extreme longevity). These two observations indicate that the cause of aging is unlikely to lie only within the DNA code. On the other hand, genetic studies initially undertaken in the nematode worm C.
elegans and, more recently, in models from yeast to mice have shown that manipulating genes can have profound effects on the rate of aging. Perhaps surprisingly, this can often be achieved through variability in single genes, and for some genetic mechanisms there is very strong evolutionary conservation.

Genetic Progeroid Syndromes There are a few very rare genetic premature-aging conditions called progeroid syndromes. These conditions recapitulate some, but not all, age-related diseases and senescent phenotypes. They are mostly caused by impairment of genome and nuclear maintenance. These syndromes include the following: Werner's syndrome. This is an autosomal recessive condition caused by a mutation in the WRN gene. This gene codes for a RecQ helicase, which unwinds DNA for both repair and replication. The syndrome is typically diagnosed in the teen years, and there is premature onset of atherosclerosis, osteoporosis, cancers, and diabetes, with death by the age of 50 years. Hutchinson-Gilford progeria syndrome (HGPS). This usually occurs as a de novo, noninherited mutation in the lamin A gene (LMNA), leading to an abnormal protein called progerin. LMNA is required for the nuclear lamina, which provides structural support to the nucleus. There are marked developmental changes obvious in infancy, with subsequent onset of atherosclerosis, kidney failure, and scleroderma-like features and death during the teen years. Cockayne syndrome. This includes a number of autosomal recessive disorders with features such as impaired neurologic growth, photosensitivity (as in xeroderma pigmentosum), and death during the childhood years. These disorders are caused by mutations in the genes for the DNA excision repair proteins ERCC-6 and ERCC-8.

Gene Studies in Long-Lived Humans The main genes that have been consistently associated with increased longevity in human candidate gene studies are APOE and FOXO3A.
ApoE is an apolipoprotein found in chylomicrons; its ApoE4 isoform is a risk factor for Alzheimer's disease and cardiovascular disease, which might explain its association with reduced lifespan. FOXO3A is a transcription factor involved in the insulin/IGF-I pathway, and its homolog in C. elegans, daf16, has a substantial impact on aging in these nematodes. Genome-wide association studies (GWAS) of centenarians have confirmed the association of longevity with APOE. GWAS have also been used to identify a range of other single-nucleotide polymorphisms (SNPs) that might be associated with longevity, including SNPs in the sirtuin genes and in the progeroid syndrome genes LMNA and WRN. Gene set analysis of GWAS has shown that both the insulin/IGF-I signaling pathway and the telomere maintenance pathway are associated with longevity. Of particular interest are people with Laron-type dwarfism. These people have mutations in the growth hormone receptor that cause severe growth hormone resistance. In mice, similar knockout of the growth hormone receptor (GHRKO or "Methuselah" mice) is associated with extremely long life. Therefore, subjects with Laron's syndrome have been carefully studied, and it was found that they have very low rates of cancer and diabetes mellitus and, possibly, longer lives.

Nutrient-Sensing Pathways Many living things have evolved to respond to periods of nutritional shortage and famine by increasing cellular resilience and delaying reproduction until the food supply becomes abundant once again. This increases the chances of reproductive success and survival of offspring. Lifelong food shortage, often termed caloric restriction (or dietary restriction), increases lifespan and delays aging in many animals, probably as a nonadaptive side effect of this famine response. Many of the genes and pathways that regulate the way that cells respond to nutritional undersupply have been identified, initially in yeast and C. elegans.
In general, manipulation of these pathways (through genetic knockout or overexpression or through pharmacologic agonists and antagonists) alters the aging benefits of caloric restriction and, in some cases, the lifespan of animals on normal diets. These pathways are all highly influential cellular “switches” that control a wide range of key functions, including protein translation, autophagy, mitochondrial function and bioenergetics, and the cellular metabolism of fats, proteins, and carbohydrates. The discovery of these nutrient-sensing pathways has provided targets for pharmacologic extension of lifespan. The main nutrient-sensing pathways that influence aging and responses to caloric restriction include the following:
• SIRT1. The sirtuins are a class of histone deacetylases that inhibit gene expression. The key nutrient-sensing member of this class in mammals is SIRT1. The activity of SIRT1 is regulated by levels of nicotinamide adenine dinucleotide (NAD+), which increase when cellular energy stores are depleted. Important downstream targets include PGC1a and NRF2, which act on mitochondrial biogenesis.
• Target of rapamycin (TOR, or mTOR in mammals). mTOR is activated by branched-chain amino acids, providing a link to dietary protein intake. It is a kinase that acts in two complexes (TORC1 and TORC2). Key mTOR-associated proteins of relevance to aging include the tuberous sclerosis protein (TSC), an upstream regulator, and 4EBP1, a downstream target that influences protein production.
• 5′ Adenosine monophosphate–activated protein kinase (AMPK). AMPK is activated by increased levels of AMP, which reflect cellular energy status.
• Insulin signaling and IGF-I/growth hormone. These two pathways are usually considered together because they are the same in lower animals and have diverged only in higher animals. Insulin responds to carbohydrate intake. An important downstream target for this pathway is a transcription factor called daf16 in worms and FOXO in mammals and the fruit fly.
Mitochondrial Genes Mitochondrial function is influenced by genes located both in the mitochondria (mtDNA) and in the nucleus. mtDNA is considered to have a prokaryotic origin and is highly conserved across taxa; in humans, it forms a circular loop of 16,569 nucleotides. Aging is associated with an increased frequency of mutations in mtDNA as a consequence of its high exposure to oxygen-derived free radicals and its relatively inefficient DNA repair machinery. Nuclear DNA encodes approximately 1000–1500 genes for mitochondrial function, including genes involved in oxidative phosphorylation, mitochondrial metabolic pathways, and enzymes required for biogenesis. These genes are thought to have originated in mtDNA but subsequently translocated to the nucleus; unlike mtDNA genes, their sequence is stable with aging. Genetic manipulation of mitochondrial genes in animals influences aging and lifespan. In C. elegans, many mutants with defective electron transfer chain function have increased lifespan. The mtDNA “mutator” mice, which lack the mtDNA proofreading enzyme, have increased mtDNA mutations and premature aging, whereas overexpression of mitochondrial uncoupling proteins leads to longer lifespan. In humans, hereditary variability in mtDNA is associated with diseases (mitochondriopathies such as Leigh's disease) and with aging. For example, in Europeans, mitochondrial DNA haplogroup J (haplogroups are combinations of genetic variants that exist in specific populations) is associated with longevity, and haplogroup D is overrepresented in Asian centenarians.

Aging is an intrinsic feature of human life, and its manipulation has fascinated humans ever since they became conscious of their own existence. Recent reports and the scientific literature are shaping a picture in which different dietary restriction regimens and exercise interventions may improve healthy aging in laboratory animals.
Several long-term experimental interventions (e.g., resveratrol, rapamycin, spermidine, metformin) may open doors for corresponding pharmacologic strategies. Surprisingly, most of the effective aging interventions proposed converge on only a few molecular pathways: nutrient signaling, mitochondrial proteostasis, and the autophagic machinery. Aging is inevitably accompanied by functional decline, a steady increase in a plethora of chronic diseases, and, ultimately, death. For millennia, it has been a dream of mankind to prolong both lifespan and healthspan. Developed countries have profited from medical improvements and their transfer to public health care systems, as well as from the better living conditions derived from their socioeconomic power, to achieve remarkable increases in life expectancy during the last century. In the United States, the percentage of the population age 65 years or older is projected to increase from 13% in 2010 to 19.3% in 2030. However, old age remains the main risk factor for major life-threatening disorders, and the number of people suffering from age-related diseases is anticipated to almost double over the next two decades. The prevalence of age-related pathologies represents a major threat as well as an economic burden that urgently needs effective interventions. Molecules, drugs, and other interventions that might decelerate aging processes continue to raise interest among both the general public and scientists of all biologic and medical fields. Over the past two decades, this interest has taken root in the fact that many of the molecular mechanisms underlying aging are interconnected and linked with pathways that cause disease, including cancer and cardiovascular and neurodegenerative disorders. Unfortunately, among the many proposed aging interventions, only a few have reached a certain age themselves. Results often lack reproducibility because of a simple inherent problem: interventions in aging research take a lifetime.
Experiments lasting the lifetime of animal models are prone to artifacts, increasing the possibilities and time windows for experimental discrepancies. Some inconsistencies in the field arise from overinterpreting lifespan-shortening models and scenarios as being related to accelerated aging. Many substances and interventions have been claimed to be antiaging throughout history and into the present. In the following sections, discussion is restricted to interventions that meet the following highly selective criteria: (1) promotion of lifespan and/or healthspan, (2) validation in at least three model organisms, and (3) confirmation by at least three different laboratories. These interventions include (1) caloric restriction and fasting regimens, (2) some pharmacotherapies (resveratrol, rapamycin, spermidine, metformin), and (3) exercise.

Caloric Restriction One of the most important and robust interventions that delay aging is caloric restriction. This outcome has been recorded in rodents, dogs, worms, flies, yeasts, monkeys, and prokaryotes. Caloric restriction is defined as a reduction in total caloric intake, usually of about 30%, without malnutrition. Caloric restriction reduces the release of growth factors such as growth hormone, insulin, and IGF-I, which are activated by nutrients and have been shown to accelerate aging and increase mortality in many organisms. Yet the effects of caloric restriction on aging were first discovered by McCay in 1935, long before the effects of such hormones and growth factors on aging were recognized. The cellular pathways that mediate this remarkable response have been explored in many experimental models. These include the nutrient-sensing pathways (TOR, AMPK, insulin/IGF-I, sirtuins) and transcription factors (FOXO in D. melanogaster and daf16 in C. elegans).
The transcription factor Nrf2 appears to confer most of the anticancer properties of caloric restriction in mice, even though it is dispensable for lifespan extension. Two studies have reported the effects of caloric restriction in monkeys with different outcomes: one study observed prolonged life, while the other did not. However, both studies confirmed that caloric restriction increases healthspan by reducing the risk for diabetes, cardiovascular disease, and cancer. In humans, caloric restriction is associated with increased lifespan and healthspan. This is most convincingly demonstrated in Okinawa, Japan, where one of the most long-lived human populations resides. In comparison to the rest of the Japanese population, Okinawan people usually combine an above-average amount of daily exercise with a below-average food intake. However, when Okinawan families move to Brazil, they adopt a Western lifestyle that affects both exercise and nutrition, causing a rise in weight and a reduction in life expectancy of nearly two decades. In the Biosphere II project, where volunteers lived together for 24 months and underwent an unforeseen, severe caloric restriction, there were improvements in insulin, blood sugar, glycated hemoglobin, cholesterol levels, and blood pressure—all outcomes that would be expected to benefit lifespan. Caloric restriction changes many aspects of human aging that might influence lifespan, such as the transcriptome, hormonal status (especially IGF-I and thyroid hormones), oxidative stress, inflammation, mitochondrial function, glucose homeostasis, and cardiometabolic risk factors. Epigenetic modifications are an emerging target for caloric restriction. It must be noted that maintaining caloric restriction while avoiding malnutrition is not only arduous in humans but is also linked with substantial side effects.
For instance, prolonged reduction of calorie intake may decrease fertility and libido, impair wound healing, reduce the potential to combat infections, and lead to amenorrhea and osteoporosis. Although extreme obesity (body mass index [BMI] >35) is associated with a 29% increase in the risk of dying, people with a BMI in the overweight range seem to have reduced mortality, at least in population studies of middle-aged and older subjects. People with a BMI in the overweight range seem more able to counteract and respond to disease, trauma, and infection, whereas caloric restriction impairs healing and immune responses. On the other hand, BMI is an imperfect indicator of body composition and body fat. A well-trained athlete may have a BMI similar to that of an overweight person because of greater muscle mass. The waist-to-hip ratio is a much better indicator of body fat and an excellent and stringent predictor of the risk of dying from cardiovascular disease: the lower the waist-to-hip ratio, the lower the risk.

PERIODIC FASTING How can caloric restriction be translated to humans in a socially and medically feasible way? A whole series of periodic fasting regimens are asserting themselves as suitable strategies, among them the alternate-day fasting diet, the “5:2” intermittent fasting diet, and a 48-h fast once or twice each month. Periodic fasting is psychologically more viable, lacks some of the negative side effects of sustained caloric restriction, and is accompanied by only minimal weight loss. It is striking that many cultures implement periodic fasting rituals, for example Buddhists, Christians, Hindus, Jews, Muslims, and some African animistic religions. It could be speculated that a selective advantage over nonfasting populations is conferred by the health-promoting attributes of religious routines that periodically limit caloric intake. Indeed, several lines of evidence indicate that intermittent fasting regimens exert antiaging effects.
For example, reduced morbidity and improved longevity were observed among Spanish nursing home residents who underwent alternate-day fasting. Even rats subjected to alternate-day fasting live up to 83% longer than normally fed control animals, and one 24-h fasting period every 4 days is sufficient to generate lifespan extension. Repeated fasting and eating cycles may circumvent the negative side effects of sustained caloric restriction. This strategy may even yield effects despite extreme overeating during the nonfasting periods. In a spectacular experiment, mice fed a high-fat diet in a time-restricted manner, i.e., with regular fasting breaks, showed reduced inflammation markers and no fatty liver and remained slim in comparison with mice given equivalent total calories ad libitum. From an evolutionary point of view, this kind of feeding pattern may reflect mammalian adaptation to food availability: overeating in times of nutrient abundance (e.g., after a hunting success) and starvation in between. This is how some indigenous peoples who have avoided Western lifestyles live today; those who have been investigated show limited signs of age-induced diseases such as cancer, neurodegeneration, diabetes, cardiovascular disease, and hypertension. Fasting exerts beneficial effects on healthspan by minimizing the risk of developing age-related diseases including hypertension, neurodegeneration, cancer, and cardiovascular diseases. The most effective and rapid repercussion of fasting is a reduction in hypertension. Two weeks of water-only fasting resulted in a blood pressure below 120/80 mmHg in 82% of subjects with borderline hypertension. Ten days of fasting cured all hypertensive patients who had previously been taking antihypertensive medication.
Periodic fasting dampens the consequences of many age-related neurodegenerative diseases in mouse models (Alzheimer's disease, Parkinson's disease, Huntington's disease, and frontotemporal dementia, but not amyotrophic lateral sclerosis). Fasting cycles are as effective as chemotherapy against certain tumors in mice. In combination with chemotherapy, fasting protected mice against the negative side effects of chemotherapeutic drugs while enhancing their efficacy against tumors. Combining fasting and chemotherapy rendered 20–60% of mice cancer-free after inoculation with highly aggressive tumors such as glioblastoma or pancreatic tumors, which otherwise have 100% mortality even with chemotherapy. This approach has been attempted in people, with some indication that the toxicities of chemotherapy are reduced.

Pharmacologic Interventions to Delay Aging and Increase Lifespan Virtually all obese people know that stable weight reduction will reduce their elevated risk of cardiometabolic disease and enhance their overall survival, yet only 20% of overweight individuals are able to lose 10% of their weight for a period of at least 1 year. Even for the most motivated people (such as the “Cronies,” who deliberately attempt long-term caloric restriction in order to extend their lives), long-term caloric restriction is extremely difficult. Thus, focus has been directed at the possibility of developing medicines that replicate the beneficial effects of caloric restriction without the need for reduced food intake (“CR-mimetics,” Fig. 94e-5):
• Resveratrol. Resveratrol, an agonist of SIRT1, is a polyphenol that is found in grapes and in red wine. The potential of resveratrol to promote lifespan was first identified in yeast, and it has gathered fame since, at least in part because it might be responsible for the so-called French paradox, whereby wine reduces some of the cardiometabolic risks of a high-fat diet.
Resveratrol has been reported to increase lifespan in lower order species such as yeast, fruit flies, and worms, and in mice on high-fat diets. In monkeys fed a diet high in sugar and fat, resveratrol had beneficial outcomes related to inflammation and cardiometabolic parameters. Some studies in humans have also shown improvements in cardiometabolic function, whereas others have been negative. Gene expression studies in animals and humans reveal that resveratrol mimics some of the metabolic and gene expression changes of caloric restriction.

FIGURE 94e-5 Chemical structures of four agents (resveratrol, rapamycin, spermidine, and metformin) that have been shown to delay aging in experimental animal models.

• Rapamycin. Rapamycin, an inhibitor of mTOR, was originally discovered on Easter Island (Rapa Nui; hence its name) as a bacterial secretion with antibiotic properties. Before its immersion in the antiaging field, rapamycin already had a longstanding career as an immunosuppressant and cancer chemotherapeutic in humans. Rapamycin extends lifespan in all organisms tested so far, including yeast, flies, worms, and mice. However, the potential utility of rapamycin for human lifespan extension is likely to be limited by adverse effects related to immunosuppression, wound healing, proteinuria, and hypercholesterolemia, among others. An alternative strategy may be intermittent rapamycin feeding, which was found to increase mouse lifespan.
• Spermidine. Spermidine is a physiologic polyamine that induces autophagy-mediated lifespan extension in yeast, flies, and worms. Spermidine levels decrease during the life of virtually all organisms, including humans, with the stunning exception of centenarians. Oral administration of spermidine and upregulation of bacterial polyamine production in the gut both lead to lifespan extension in short-lived mouse models.
Spermidine has also been found to have beneficial effects on neurodegeneration, probably by increasing the transcription of genes involved in autophagy.
• Metformin. Metformin, an activator of AMPK, is a biguanide first isolated from the French lilac that is widely used for the treatment of type 2 diabetes mellitus. Metformin decreases hepatic gluconeogenesis and increases insulin sensitivity. Metformin has other actions, including inhibition of mTOR and mitochondrial complex I and activation of the transcription factor SKN-1/Nrf2. Metformin increases lifespan in different mouse strains, including female strains predisposed to a high incidence of mammary tumors. At a biochemical level, metformin supplementation is associated with reduced oxidative damage and inflammation and mimics some of the gene expression changes seen with caloric restriction.

Exercise and Physical Activity In humans and animals, regular exercise reduces the risk of morbidity and mortality. Given that cardiovascular diseases are a dominant cause of death in aging humans but not in mice, the effects on human health may be even stronger than those seen in mouse experiments. An increase in aerobic exercise capacity, which declines during aging, is associated with favorable effects on blood pressure, lipids, glucose tolerance, bone density, and depression in older people. Likewise, exercise training protects against aging disorders such as cardiovascular diseases, diabetes mellitus, and osteoporosis. Exercise is the only treatment that can prevent or even reverse sarcopenia (age-related muscle wasting). Even moderate or low levels of exercise (30 min of walking per day) have significant protective effects in obese subjects. In older people, regular physical activity has been found to increase the duration of independent living. While clearly promoting health and thus quality of life, regular exercise does not extend lifespan.
Furthermore, the combination of exercise with caloric restriction has no additive effect on maximal lifespan in rodents. On the other hand, alternate-day fasting combined with exercise is more beneficial for muscle mass than either treatment alone. In nonobese humans, exercise combined with caloric restriction has synergistic effects on insulin sensitivity and inflammation. From an evolutionary perspective, the responses to hunger and exercise are linked: when food is scarce, increased activity is required to hunt and gather.

Hormesis The term hormesis describes the at first sight paradoxic protective effects conferred by exposure to low doses of stressors or toxins (or, as Nietzsche stated, “What does not kill him makes him stronger”). Adaptive stress responses elicited by noxious agents (chemical, thermal, or radioactive) precondition an organism, rendering it resistant to subsequent higher and otherwise lethal doses of the same trigger. Hormetic stressors have been found to influence aging and lifespan, presumably by increasing cellular resilience to factors that might contribute to aging, such as oxidative stress. Yeast cells that have been exposed to low doses of oxidative stress exhibit a marked antistress response that prevents death upon subsequent exposure to otherwise lethal doses of oxidants. During ischemic preconditioning in humans, short periods of ischemia protect the brain and the heart against a more severe deprivation of oxygen and the subsequent reperfusion-induced oxidative stress. Similarly, lifelong periodic exposure to various stressors can inhibit or retard the aging process. Consistent with this concept, heat or mild doses of oxidative stress can lead to lifespan extension in C. elegans.
Caloric restriction can also be considered a type of hormetic stress: it results in the activation of antistress transcription factors (Rim15, Gis1, and Msn2/Msn4 in yeast and FOXO in mammals) that enhance the expression of free radical–scavenging factors and heat shock proteins.

Clinicians need to understand aging biology in order to better manage people who are elderly now. Moreover, there is an urgent need to develop strategies based on aging biology that delay aging, reduce or postpone the onset of age-related disorders, and increase functional life and healthspan for future generations. Nutritional interventions and drugs that act on nutrient-sensing pathways are being developed and, in some cases, are already being studied in humans. Whether these interventions are universally effective or species- and individual-specific remains to be determined.

CHAPTER 95e Nutrient Requirements and Dietary Assessment
Johanna Dwyer

Nutrients are substances that are not synthesized in sufficient amounts in the body and therefore must be supplied by the diet. Nutrient requirements for groups of healthy persons have been determined experimentally.

Combinations of plant proteins that complement one another in biologic value, or combinations of animal and plant proteins, can increase biologic value and lower total protein requirements. In healthy people with adequate diets, the timing of protein intake over the course of the day has little effect. Protein needs increase during growth, pregnancy, lactation, and rehabilitation after injury or malnutrition. Tolerance to dietary protein is decreased in renal insufficiency (with consequent uremia) and in liver failure. Normal protein intake can precipitate encephalopathy in patients with cirrhosis of the liver.
The absence of essential nutrients leads to growth impairment, organ dysfunction, and failure to maintain nitrogen balance or adequate status of other nutrients. For good health, we require energy-providing nutrients (protein, fat, and carbohydrate), vitamins, minerals, and water. Requirements for organic nutrients include 9 essential amino acids, several fatty acids, glucose, 4 fat-soluble vitamins, 10 water-soluble vitamins, dietary fiber, and choline. Several inorganic substances, including 4 minerals, 7 trace minerals, 3 electrolytes, and the ultratrace elements, must also be supplied by diet. The amounts of the essential nutrients that are required by individuals differ by age and physiologic state. Conditionally essential nutrients are not required in the diet but must be supplied to individuals who do not synthesize them in adequate amounts, such as those with genetic defects, those with pathologic conditions such as infection or trauma with nutritional implications, and developmentally immature infants. For example, inositol, taurine, arginine, and glutamine may be needed by premature infants. Many other organic and inorganic compounds that are present in foods, such as pesticides and lead, also have health effects. ESSENTIAL NUTRIENT REQUIREMENTS Energy For weight to remain stable, energy intake must match energy output. The major components of energy output are resting energy expenditure (REE) and physical activity; minor components include the energy cost of metabolizing food (thermic effect of food, or specific dynamic action) and shivering thermogenesis (e.g., cold-induced thermogenesis). The average energy intake is ~2600 kcal/d for American men and ~1800 kcal/d for American women, though these estimates vary with body size and activity level. Formulas for roughly estimating REE are useful in assessing the energy needs of an individual whose weight is stable. 
Thus, for males, REE = 900 + 10m, and for females, REE = 700 + 7m, where m is mass in kilograms. The calculated REE is then adjusted for physical activity level by multiplying by 1.2 for sedentary, 1.4 for moderately active, or 1.8 for very active individuals. The final figure, the estimated energy requirement (EER), provides an approximation of total caloric needs in a state of energy balance for a person of a certain age, sex, weight, height, and physical activity level. For further discussion of energy balance in health and disease, see Chap. 97.

Protein Dietary protein consists of both essential and nonessential amino acids that are required for protein synthesis. The nine essential amino acids are histidine, isoleucine, leucine, lysine, methionine/cystine, phenylalanine/tyrosine, threonine, tryptophan, and valine. Certain amino acids, such as alanine, can also be used for energy and gluconeogenesis. When energy intake is inadequate, protein intake must be increased, because ingested amino acids are diverted into pathways of glucose synthesis and oxidation. In extreme energy deprivation, protein-calorie malnutrition may ensue (Chap. 97). For adults, the recommended dietary allowance (RDA) for protein is ~0.6 g/kg desirable body mass per day, assuming that energy needs are met and that the protein is of relatively high biologic value. Current recommendations for a healthy diet call for at least 10–14% of calories from protein. Most American diets provide at least those amounts. Biologic value tends to be highest for animal proteins, followed by proteins from legumes (beans), cereals (rice, wheat, corn), and roots.

Fat and Carbohydrate Fats are a concentrated source of energy and constitute, on average, 34% of calories in U.S. diets. However, for optimal health, fat intake should total no more than 30% of calories.
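The REE formulas and activity multipliers described above amount to a simple calculation; the sketch below is illustrative only (the function name and argument labels are mine, not from the text), and it uses the simplified formulas given here rather than a validated clinical equation.

```python
def estimated_energy_requirement(mass_kg, sex, activity="sedentary"):
    """Rough EER in kcal/d: REE from the simplified formulas in the text
    (REE = 900 + 10m for males, 700 + 7m for females, m = mass in kg),
    multiplied by an activity factor."""
    ree = 900 + 10 * mass_kg if sex == "male" else 700 + 7 * mass_kg
    # Activity factors from the text: sedentary 1.2, moderately active 1.4, very active 1.8
    factors = {"sedentary": 1.2, "moderate": 1.4, "very_active": 1.8}
    return ree * factors[activity]

# Example: a sedentary 70-kg man
# REE = 900 + 10*70 = 1600 kcal/d; EER = 1600 * 1.2 = 1920 kcal/d
```

For a moderately active 60-kg woman, the same arithmetic gives REE = 700 + 7*60 = 1120 kcal/d and EER = 1120 * 1.4 ≈ 1568 kcal/d.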
Saturated fat and trans fat should be limited to <10% of calories and polyunsaturated fats to <10% of calories, with monounsaturated fats accounting for the remainder of fat intake. At least 45–55% of total calories should be derived from carbohydrates. The brain requires ~100 g of glucose per day for fuel; other tissues use about 50 g/d. Some tissues (e.g., brain and red blood cells) rely on glucose supplied either exogenously or from muscle proteolysis. Over time, adaptations in carbohydrate needs are possible during hypocaloric states. Like fat (9 kcal/g), carbohydrate (4 kcal/g), and protein (4 kcal/g), alcohol (ethanol) provides energy (7 kcal/g). However, it is not a nutrient. Water For adults, 1–1.5 mL of water per kilocalorie of energy expenditure is sufficient under usual conditions to allow for normal variations in physical activity, sweating, and solute load of the diet. Water losses include 50–100 mL/d in the feces; 500–1000 mL/d by evaporation or exhalation; and, depending on the renal solute load, ≥1000 mL/d in the urine. If external losses increase, intakes must increase accordingly to avoid underhydration. Fever increases water losses by ~200 mL/d per °C; diarrheal losses vary but may be as great as 5 L/d in severe diarrhea. Heavy sweating, vigorous exercise, and vomiting also increase water losses. When renal function is normal and solute intakes are adequate, the kidneys can adjust to increased water intake by excreting up to 18 L of excess water per day (Chap. 404). However, obligatory urine outputs can compromise hydration status when there is inadequate water intake or when losses increase in disease or kidney damage. Infants have high requirements for water because of their large ratio of surface area to volume, their inability to communicate their thirst, and the limited capacity of the immature kidney to handle high renal solute loads. Increased water needs during pregnancy are ~30 mL/d. 
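The water-balance figures above (1–1.5 mL per kilocalorie under usual conditions, plus ~200 mL/d for each degree Celsius of fever) lend themselves to a quick back-of-the-envelope estimate. The sketch below is mine, not a formula from the text; the function name and defaults are assumptions.

```python
def daily_water_need_ml(energy_kcal, ml_per_kcal=1.0, fever_deg_c=0.0):
    """Estimate daily water need (mL) from energy expenditure.

    Uses the figures quoted in the text: 1-1.5 mL of water per kcal of
    energy expenditure (ml_per_kcal), plus ~200 mL/d for each degree C
    of fever (fever_deg_c = degrees above normal temperature).
    """
    if not 1.0 <= ml_per_kcal <= 1.5:
        raise ValueError("text quotes 1-1.5 mL per kcal")
    return energy_kcal * ml_per_kcal + 200.0 * fever_deg_c

# Example: 2000 kcal/d at 1 mL/kcal with 2 degrees C of fever
# -> 2000 + 400 = 2400 mL/d
```

Note that this covers only the baseline and fever terms mentioned in the text; diarrheal, sweat, and vomiting losses would have to be added separately.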
During lactation, milk production increases daily water requirements so that ~1000 mL of additional water is needed, or 1 mL for each milliliter of milk produced. Special attention must be paid to the water needs of the elderly, who have reduced total body water and blunted thirst sensation and are more likely to be taking medications such as diuretics. Other Nutrients See Chap. 96e for detailed descriptions of vitamins and trace minerals. Fortunately, human life and well-being can be maintained within a fairly wide range for most nutrients. However, the capacity for adaptation is not infinite—too much, as well as too little, intake of a nutrient can have adverse effects or alter the health benefits conferred by another nutrient. Therefore, benchmark recommendations regarding nutrient intakes have been developed to guide clinical practice. These quantitative estimates of nutrient intakes are collectively referred to as the dietary reference intakes (DRIs). The DRIs have supplanted the RDAs—the single reference values used in the United States until the early 1990s. 
DRIs include the estimated average requirement (EAR) for nutrients as well as other reference values used for dietary planning for individuals: the RDA, the adequate intake (AI), and the tolerable upper level (UL). The DRIs also include acceptable macronutrient distribution ranges (AMDRs) for protein, fat, and carbohydrate. The current DRIs for vitamins and elements are provided in Tables 95e-1 and 95e-2, respectively. Table 95e-3 provides DRIs for water and macronutrients. EERs are discussed in Chap. 97 on energy balance in health and disease.

TABLE 95e-1 Dietary Reference Intakes for Vitamins

Life-Stage        Vitamin A  Vitamin C  Vitamin D  Vitamin E  Vitamin K  Thiamin  Riboflavin  Niacin   Vitamin B6  Folate   Vitamin B12  Pantothenic  Biotin  Choline
Group             (μg/d)a    (mg/d)     (μg/d)b,c  (mg/d)d    (μg/d)     (mg/d)   (mg/d)      (mg/d)e  (mg/d)      (μg/d)f  (μg/d)       Acid (mg/d)  (μg/d)  (mg/d)g
Birth to 6 mo     400*       40*        10         4*         2.0*       0.2*     0.3*        2*       0.1*        65*      0.4*         1.7*         5*      125*
6–12 mo           500*       50*        10         5*         2.5*       0.3*     0.4*        4*       0.3*        80*      0.5*         1.8*         6*      150*
1–3 y             300        15         15         6          30*        0.5      0.5         6        0.5         150      0.9          2*           8*      200*
4–8 y             400        25         15         7          55*        0.6      0.6         8        0.6         200      1.2          3*           12*     250*
Males
  9–13 y          600        45         15         11         60*        0.9      0.9         12       1.0         300      1.8          4*           20*     375*
  14–18 y         900        75         15         15         75*        1.2      1.3         16       1.3         400      2.4          5*           25*     550*
  19–30 y         900        90         15         15         120*       1.2      1.3         16       1.3         400      2.4          5*           30*     550*
  31–50 y         900        90         15         15         120*       1.2      1.3         16       1.3         400      2.4          5*           30*     550*
  51–70 y         900        90         15         15         120*       1.2      1.3         16       1.7         400      2.4h         5*           30*     550*
  >70 y           900        90         20         15         120*       1.2      1.3         16       1.7         400      2.4h         5*           30*     550*
Females
  9–13 y          600        45         15         11         60*        0.9      0.9         12       1.0         300      1.8          4*           20*     375*
  14–18 y         700        65         15         15         75*        1.0      1.0         14       1.2         400i     2.4          5*           25*     400*
  19–30 y         700        75         15         15         90*        1.1      1.1         14       1.3         400i     2.4          5*           30*     425*
  31–50 y         700        75         15         15         90*        1.1      1.1         14       1.3         400i     2.4          5*           30*     425*
  51–70 y         700        75         15         15         90*        1.1      1.1         14       1.5         400      2.4h         5*           30*     425*
  >70 y           700        75         20         15         90*        1.1      1.1         14       1.5         400      2.4h         5*           30*     425*
Pregnant women
  14–18 y         750        80         15         15         75*        1.4      1.4         18       1.9         600j     2.6          6*           30*     450*
  19–30 y         770        85         15         15         90*        1.4      1.4         18       1.9         600j     2.6          6*           30*     450*
  31–50 y         770        85         15         15         90*        1.4      1.4         18       1.9         600j     2.6          6*           30*     450*
Lactating women
  14–18 y         1200       115        15         19         75*        1.4      1.6         17       2.0         500      2.8          7*           35*     550*
  19–30 y         1300       120        15         19         90*        1.4      1.6         17       2.0         500      2.8          7*           35*     550*
  31–50 y         1300       120        15         19         90*        1.4      1.6         17       2.0         500      2.8          7*           35*     550*

Note: This table (taken from the DRI reports; see www.nap.edu) presents recommended dietary allowances (RDAs) and adequate intakes (AIs); AIs are followed by an asterisk (*). An RDA is the average daily dietary intake level sufficient to meet the nutrient requirements of nearly all healthy individuals (97–98%) in a group. The RDA is calculated from an estimated average requirement (EAR). If sufficient scientific evidence is not available to establish an EAR and thus to calculate an RDA, an AI is usually developed. For healthy breast-fed infants, an AI is the mean intake. The AI for other life-stage and sex-specific groups is believed to cover the needs of all healthy individuals in those groups, but lack of data or uncertainty in the data makes it impossible to specify with confidence the percentage of individuals covered by this intake.
aAs retinol activity equivalents (RAEs). 1 RAE = 1 μg retinol, 12 μg β-carotene, 24 μg α-carotene, or 24 μg β-cryptoxanthin. The RAE for dietary provitamin A carotenoids is twofold greater than the retinol equivalent (RE), whereas the RAE for preformed vitamin A is the same as the RE.
bAs cholecalciferol. 1 μg cholecalciferol = 40 IU vitamin D.
cUnder the assumption of minimal sunlight.
dAs α-tocopherol. α-Tocopherol includes RRR-α-tocopherol, the only form of α-tocopherol that occurs naturally in foods, and the 2R-stereoisomeric forms of α-tocopherol (RRR-, RSR-, RRS-, and RSS-α-tocopherol) that occur in fortified foods and supplements. It does not include the 2S-stereoisomeric forms of α-tocopherol (SRR-, SSR-, SRS-, and SSS-α-tocopherol) also found in fortified foods and supplements.
eAs niacin equivalents (NEs). 1 mg of niacin = 60 mg of tryptophan; 0–6 months = preformed niacin (not NE).
fAs dietary folate equivalents (DFEs). 1 DFE = 1 μg food folate = 0.6 μg of folic acid from fortified food or as a supplement consumed with food = 0.5 μg of a supplement taken on an empty stomach.
gAlthough AIs have been set for choline, there are few data to assess whether a dietary supply of choline is needed at all stages of the life cycle, and it may be that the choline requirement can be met by endogenous synthesis at some of these stages.
hBecause 10–30% of older people may malabsorb food-bound B12, it is advisable for those >50 years of age to meet their RDA mainly by consuming foods fortified with B12 or a supplement containing B12.
iIn view of evidence linking inadequate folate intake with neural tube defects in the fetus, it is recommended that all women capable of becoming pregnant consume 400 μg of folate from supplements or fortified foods in addition to intake of food folate from a varied diet.
jIt is assumed that women will continue consuming 400 μg from supplements or fortified food until their pregnancy is confirmed and they enter prenatal care, which ordinarily occurs after the end of the periconceptional period—the critical time for formation of the neural tube.
Source: Food and Nutrition Board, Institute of Medicine, National Academies (http://www.iom.edu/Activities/Nutrition/SummaryDRIs/DRI-Tables.aspx).

Estimated Average Requirement When florid manifestations of the classic dietary-deficiency diseases such as rickets (deficiency of vitamin D and calcium), scurvy (deficiency of vitamin C), xerophthalmia (deficiency of vitamin A), and protein-calorie malnutrition were common, nutrient adequacy was inferred from the absence of their clinical signs. Later, biochemical and other changes were found to be evident long before the deficiency became clinically apparent. Consequently, criteria of adequacy are now based on biologic markers when they are available; maintenance of body stores of nutrients; or, if available, the amount of a nutrient that minimizes the risk of chronic degenerative disease. Current efforts focus on this last variable, but relevant markers often are not available.

The EAR is the amount of a nutrient estimated to be adequate for half of the healthy individuals of a specific age and sex. The types of evidence and criteria used to establish nutrient requirements vary by nutrient, age, and physiologic group. The EAR is not an effective estimate of nutrient adequacy in individuals because it is a median requirement for a group; 50% of individuals in a group fall below the requirement and 50% fall above it. Thus, a person with a usual intake at the EAR has a 50% risk of inadequate intake. For these reasons, other standards, described below, are more useful for clinical purposes.

Recommended Dietary Allowances The RDA is the average daily dietary
Priority is given to sensitive biochemical, physiologic, intake level that meets the nutrient requirements of nearly all healthy or behavioral tests that reflect early changes in regulatory processes; persons of a specific sex, age, life stage, or physiologic condition Birth to 6 200* 0.2* 200* 0.01* 110* 0.27* 30* 0.003* 2* 100* 15* 2* 0.4* 0.12* 0.18* mo 6–12 mo 260* 5.5* 220* 0.5* 130* 11 75* 0.6* 3* 275* 20* 3 0.7* 0.37* 0.57* Children 1–3 y 700 11* 340 0.7* 907 80 1.2* 17 460 203 3.0* 1.0* 1.5* 4–8 y 1000 15* 440 1* 90 10 130 1.5* 22 500 305 3.8* 1.2* 1.9* Males 9–13 y 1300 25* 700 2* 120 8 240 1.9* 34 1250 408 4.5* 1.5* 2.3* 14–18 y 1300 35* 890 3* 150 11 410 2.2* 43 1250 55 11 4.7* 1.5* 2.3* 19–30 y 1000 35* 900 4* 150 8 400 2.3* 45 700 5511 4.7* 1.5* 2.3* 31–50 y 1000 35* 900 4* 150 8 420 2.3* 45 700 5511 4.7* 1.5* 2.3* 51–70 y 1000 30* 900 4* 150 8 420 2.3* 45 700 5511 4.7* 1.3* 2.0* >70 y 1200 30* 900 4* 150 8 420 2.3* 45 700 5511 4.7* 1.2* 1.8* Females 9–13 y 1300 21* 700 2* 120 8 240 1.6* 34 1250 408 4.5* 1.5* 2.3* 14–18 y 1300 24* 890 3* 150 15 360 1.6* 43 1250 559 4.7* 1.5* 2.3* 19–30 y 1000 25* 900 3* 150 18 310 1.8* 45 700 558 4.7* 1.5* 2.3* 31–50 y 1000 25* 900 3* 150 18 320 1.8* 45 700 558 4.7* 1.5* 2.3* 51–70 y 1200 20* 900 3* 150 8 320 1.8* 45 700 558 4.7* 1.3* 2.0* >70 y 1200 20* 900 3* 150 8 320 1.8* 45 700 558 4.7* 1.2* 1.8* Pregnant women 14–18 y 1300 29* 1000 3* 220 27 400 2.0* 50 1250 60 12 4.7* 1.5* 2.3* 19–30 y 1000 30* 1000 3* 220 27 350 2.0* 50 700 6011 4.7* 1.5* 2.3* 31–50 y 1000 30* 1000 3* 220 27 360 2.0* 50 700 6011 4.7* 1.5* 2.3* Lactating women 14–18 y 1300 44* 1300 3* 290 10 360 2.6* 50 1250 70 13 5.1* 1.5* 2.3* 19–30 y 1000 45* 1300 3* 290 9 310 2.6* 50 700 7012 5.1* 1.5* 2.3* 31–50 y 1000 45* 1300 3* 290 9 320 2.6* 50 700 7012 5.1* 1.5* 2.3* Note: This table (taken from the DRI reports; see www.nap.edu) presents recommended dietary allowances (RDAs) in bold type and adequate intakes (AIs) in ordinary type followed by an 
asterisk (*). An RDA is the average daily dietary intake level sufficient to meet the nutrient requirements of nearly all healthy individuals (97–98%) in a group. The RDA is calculated from an estimated average requirement (EAR). If sufficient scientific evidence is not available to establish an EAR and thus to calculate an RDA, an AI is usually developed. For healthy breast-fed infants, an AI is the mean intake. The AI for other life-stage and sex-specific groups is believed to cover the needs of all healthy individuals in those groups, but lack of data or uncertainty in the data makes it impossible to specify with confidence the percentage of individuals covered by this intake. Sources: Food and Nutrition Board, Institute of Medicine, National Academies (http://www.iom.edu/Activities/Nutrition/SummaryDRIs/DRI-Tables.aspx), based on: Dietary Reference Intakes for Calcium, Phosphorus, Magnesium, Vitamin D, and Fluoride (1997); Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline (1998); Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium, and Carotenoids (2000); and Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc (2001); Dietary Reference Intakes for Water, Potassium, Sodium, Chloride, and Sulfate (2005); and Dietary Reference Intakes for Calcium and Vitamin D (2011). These reports can be accessed via www.nap.edu. (e.g., pregnancy or lactation). The RDA, which is the nutrient-intake based on observed or experimentally determined approximations of goal for planning diets of individuals, is defined statistically as two nutrient intakes in healthy people. In the DRIs, AIs rather than RDAs standard deviations above the EAR to ensure that the needs of any are proposed for nutrients consumed by infants (up to age 1 year) as given individual are met. 
The online tool at http://fnic.nal.usda.gov/ well as for chromium, fluoride, manganese, sodium, potassium, pantointeractiveDRI/ allows health professionals to calculate individualized thenic acid, biotin, choline, and water consumed by persons of all ages. daily nutrient recommendations for dietary planning based on the Vitamin D and calcium recommendations were recently revised, and DRIs for persons of a given age, sex, and weight. The RDAs are used more precise estimates are now available. to formulate food guides such as the U.S. Department of Agriculture Tolerable Upper Levels of Nutrient Intake Healthy individuals derive no (USDA) MyPlate Food Guide for individuals (www.supertracker.usda established benefit from consuming nutrient levels above the RDA or.gov/default.aspx), to create food-exchange lists for therapeutic diet AI. In fact, excessive nutrient intake can disturb body functions andplanning, and as a standard for describing the nutritional content of cause acute, progressive, or permanent disabilities. The tolerable ULfoods and nutrient-containing dietary supplements. is the highest level of chronic nutrient intake (usually daily) that is The risk of dietary inadequacy increases as intake falls below the unlikely to pose a risk of adverse health effects for most of the popula-RDA. However, the RDA is an overly generous criterion for evaluating tion. Data on the adverse effects of large amounts of many nutrientsnutrient adequacy. For example, by definition, the RDA exceeds the are unavailable or too limited to establish a UL. Therefore, the lack ofactual requirements of all but ~2–3% of the population. Therefore, a UL does not mean that the risk of adverse effects from high intakemany people whose intake falls below the RDA may still be getting is nonexistent. Nutrients in commonly eaten foods rarely exceed theenough of the nutrient. On food labels, the nutrient content in a food UL. 
However, highly fortified foods and dietary supplements provideis stated by weight or as a percent of the daily value (DV), a variant of more concentrated amounts of nutrients per serving and thus pose athe RDA used on the nutrition facts panel that, for an adult, represents potential risk of toxicity. Nutrient supplements are labeled with sup-the highest RDA for an adult consuming 2000 kcal. plement facts that express the amount of nutrient in absolute units or Adequate Intake It is not possible to set an RDA for some nutrients as the percentage of the DV provided per recommended serving size. that do not have an established EAR. In this circumstance, the AI is Total nutrient consumption, including that in foods, supplements, DiETARy REfERENCE iNTAkEs (DRis): RECommENDED DiETARy AllowANCEs AND ADEquATE iNTAkEs foR ToTAl wATER AND mACRoNuTRiENTs Birth to 6 mo 0.7* 60* NDc 31* 4.4* 0.5* 9.1* 6–12 mo 0.8* 95* ND 30* 4.6* 0.5* 11.0 Children 1–3 y 1.3* 130 19* ND 7* 0.7* 13 4–8 y 1.7* 130 25* ND 10* 0.9* 9–13 y 2.4* 130 31* ND 12* 1.2* 34 14–18 y 3.3* 130 38* ND 16* 1.6* 52 19–30 y 3.7* 130 38* ND 17* 1.6* 56 31–50 y 3.7* 130 38* ND 17* 1.6* 56 51–70 y 3.7* 130 30* ND 14* 1.6* 56 >70 y 3.7* 130 30* ND 14* 1.6* 56 Females 9–13 y 2.1* 130 26* ND 10* 1.0* 34 14–18 y 2.3* 130 26* ND 11* 1.1* 46 19–30 y 2.7* 130 25* ND 12* 1.1* 46 31–50 y 2.7* 130 25* ND 12* 1.1* 46 51–70 y 2.7* 130 21* ND 11* 1.1* 46 >70 y 2.7* 130 21* ND 11* 1.1* 46 Pregnant women 14–18 y 3.0* 175 28* ND 13* 1.4* 71 19–30 y 3.0* 175 28* ND 13* 1.4* 71 31–50 y 3.0* 175 28* ND 13* 1.4* 71 Lactating women 14–18 3.8* 210 29* ND 13* 1.3* 71 19–30 y 3.8* 210 29* ND 13* 1.3* 71 31–50 y 3.8* 210 29* ND 13* 1.3* 71 Note: This table (taken from the DRI reports; see www.nap.edu) presents recommended dietary allowances (RDAs) in bold type and adequate intakes (AIs) in ordinary type followed by an asterisk (*). 
An RDA is the average daily dietary intake level sufficient to meet the nutrient requirements of nearly all healthy individuals (97–98%) in a group. The RDA is calculated from an estimated average requirement (EAR). If sufficient scientific evidence is not available to establish an EAR and thus to calculate an RDA, an AI is usually developed. For healthy breast-fed infants, an AI is the mean intake. The AI for other life-stage and sex-specific groups is believed to cover the needs of all healthy individuals in those groups, but lack of data or uncertainty in the data make it impossible to specify with confidence the percentage of individuals covered by this intake. aTotal water includes all water contained in food, beverages, and drinking water. bBased on grams of protein per kilogram of body weight for the reference body weight (e.g., for adults: 0.8 g/kg body weight for the reference body weight). cNot determined. Source: Food and Nutrition Board, Institute of Medicine, National Academies (http://www.iom.edu/Activities/Nutrition/SummaryDRIs/DRI-Tables.aspx), based on: Dietary Reference Intakes for Energy, Carbohydrate, Fiber, Fat, Fatty Acids, Cholesterol, Protein, and Amino Acids (2002/2005) and Dietary Reference Intakes for Water, Potassium, Sodium, Chloride, and Sulfate (2005). These reports can be accessed via www.nap.edu. and over-the-counter medications (e.g., antacids), should not exceed RDA levels. Acceptable Macronutrient Distribution Ranges The AMDRs are not experimentally determined but are rough ranges for energy-providing macronutrient intakes (protein, carbohydrate, and fat) that the Institute of Medicine’s Food and Nutrition Board considers to be healthful. These ranges are 10–35% of calories for protein, 20–35% of calories for fat, and 45–65% of calories for carbohydrate. Alcohol, which also provides energy, is not a nutrient; therefore, no recommendations are not provided. 
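The unit conversions given in the footnotes to Table 95e-1 (retinol activity equivalents, vitamin D international units, niacin equivalents, and dietary folate equivalents) are simple arithmetic. A minimal sketch in Python; the function and parameter names are illustrative, not taken from the text:

```python
# Conversion factors are from the Table 95e-1 footnotes (DRI reports).

def retinol_activity_equivalents(retinol_ug=0.0, beta_carotene_ug=0.0,
                                 alpha_carotene_ug=0.0, beta_cryptoxanthin_ug=0.0):
    """1 RAE = 1 ug retinol = 12 ug beta-carotene
    = 24 ug alpha-carotene or beta-cryptoxanthin."""
    return (retinol_ug + beta_carotene_ug / 12
            + alpha_carotene_ug / 24 + beta_cryptoxanthin_ug / 24)

def vitamin_d_iu(cholecalciferol_ug):
    """1 ug cholecalciferol = 40 IU vitamin D."""
    return cholecalciferol_ug * 40

def niacin_equivalents(niacin_mg=0.0, tryptophan_mg=0.0):
    """1 mg niacin = 60 mg tryptophan (for infants 0-6 months,
    only preformed niacin counts)."""
    return niacin_mg + tryptophan_mg / 60

def dietary_folate_equivalents(food_folate_ug=0.0, fortified_folic_acid_ug=0.0,
                               supplement_empty_stomach_ug=0.0):
    """1 DFE = 1 ug food folate = 0.6 ug folic acid taken with food
    = 0.5 ug folic acid taken on an empty stomach."""
    return (food_folate_ug + fortified_folic_acid_ug / 0.6
            + supplement_empty_stomach_ug / 0.5)
```

For example, the adult RDA of 15 μg/d of cholecalciferol corresponds to 600 IU, and 60 μg of folic acid from fortified food counts as 100 μg DFE.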
The DRIs are affected by age, sex, rate of growth, pregnancy, lactation, physical activity level, concomitant diseases, drugs, and dietary composition. If requirements for nutrient sufficiency are close to levels indicating excess of a nutrient, dietary planning is difficult. Physiologic Factors Growth, strenuous physical activity, pregnancy, and lactation all increase needs for energy and several essential nutrients. Energy needs rise during pregnancy due to the demands of fetal growth and during lactation because of the increased energy required for milk production. Energy needs decrease with loss of lean body mass, the major determinant of REE. Because lean tissue, physical activity, and health often decline with age, energy needs of older persons, especially those over 70, tend to be lower than those of younger persons. Dietary Composition Dietary composition affects the biologic availability and use of nutrients. For example, the absorption of iron may be impaired by large amounts of calcium or lead; likewise, non-heme iron uptake may be impaired by a lack of ascorbic acid and amino acids in the meal. Protein use by the body may be decreased when essential amino acids are not present in sufficient amounts—a rare scenario in U.S. diets. Animal foods, such as milk, eggs, and meat, have high biologic values, with most of the needed amino acids present in adequate amounts. Plant proteins in corn (maize), soy, rice, and wheat have lower biologic values and must be combined with other plant or animal proteins or fortified with the amino acids that are deficient to achieve optimal use by the body. Route of Intake The RDAs apply only to oral intakes. When nutrients are administered parenterally, similar values can sometimes be used for amino acids, glucose (carbohydrate), fats, sodium, chloride, potassium, and most vitamins because their intestinal absorption rate is nearly 100%. 
However, the oral bioavailability of most mineral elements may be only half that obtained by parenteral administration. For some nutrients that are not readily stored in the body or that cannot be stored in large amounts, timing of administration may also be important. For example, amino acids cannot be used for protein synthesis if they are not supplied together; instead, they will be used for energy production, although in healthy individuals eating adequate diets, the distribution of protein intake over the course of the day has little effect on health.

Disease Dietary deficiency diseases include protein-calorie malnutrition, iron-deficiency anemia, goiter (due to iodine deficiency), rickets and osteomalacia (vitamin D deficiency), xerophthalmia (vitamin A deficiency), megaloblastic anemia (vitamin B12 or folic acid deficiency), scurvy (vitamin C/ascorbic acid deficiency), beriberi (thiamin deficiency), and pellagra (niacin and tryptophan deficiency) (Chaps. 96e and 97). Each deficiency disease is characterized by imbalances at the cellular level between the supply of nutrients or energy and the body's nutritional needs for growth, maintenance, and other functions. Imbalances and excesses in nutrient intakes are recognized as risk factors for certain chronic degenerative diseases, such as saturated fat and cholesterol in coronary artery disease; sodium in hypertension; obesity in hormone-dependent endometrial and breast cancers; and ethanol in alcoholism. Because the etiology and pathogenesis of these disorders are multifactorial, diet is only one of many risk factors. Osteoporosis, for example, is associated with calcium deficiency, sometimes secondary to vitamin D deficiency, as well as with risk factors related to environment (e.g., smoking, sedentary lifestyle), physiology (e.g., estrogen deficiency), genetic determinants (e.g., defects in collagen metabolism), and drug use (e.g., chronic use of steroids and aromatase inhibitors) (Chap. 425).
In clinical situations, nutritional assessment is an iterative process that involves: (1) screening for malnutrition, (2) assessing the diet and other data to establish either the absence or the presence of malnutrition and its possible causes, (3) planning and implementing the most appropriate nutritional therapy, and (4) reassessing intakes to make sure that the planned intakes are actually consumed. Some disease states affect the bioavailability, requirements, use, or excretion of specific nutrients. In these circumstances, specific measurements of various nutrients or their biomarkers may be required to ensure adequate replacement (Chap. 96e).

Most health care facilities have nutrition-screening processes in place for identifying possible malnutrition after hospital admission. Nutritional screening is required by the Joint Commission, which accredits and certifies health care organizations in the United States. However, there are no universally recognized or validated standards. The factors that are usually assessed include abnormal weight for height or body mass index (e.g., BMI <19 or >25); reported weight change (involuntary loss or gain of >5 kg in the past 6 months) (Chap. 56); diagnoses with known nutritional implications (e.g., metabolic disease, any disease affecting the gastrointestinal tract, alcoholism); present therapeutic dietary prescription; chronic poor appetite; presence of chewing and swallowing problems or major food intolerances; need for assistance with preparing or shopping for food, eating, or other aspects of self-care; and social isolation. The nutritional status of hospitalized patients should be reassessed periodically—at least once every week.

A more complete dietary assessment is indicated for patients who exhibit either a high risk of malnutrition or frank malnutrition on nutritional screening. The type of assessment varies with the clinical setting, the severity of the patient's illness, and the stability of the patient's condition.
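The anthropometric screening criteria listed above (abnormal BMI, involuntary weight change) reduce to simple threshold checks. A minimal sketch in Python, assuming weight in kilograms and height in meters; the function name and structure are illustrative, and a real screen would also cover the diagnosis, appetite, swallowing, self-care, and social-isolation items from the text:

```python
# Thresholds (BMI <19 or >25; weight change >5 kg in 6 months) are from the text.

def nutrition_screening_flags(weight_kg, height_m, weight_change_6mo_kg=0.0):
    """Return a list of anthropometric screening flags, empty if none apply."""
    flags = []
    bmi = weight_kg / height_m ** 2
    if bmi < 19 or bmi > 25:
        flags.append(f"abnormal BMI ({bmi:.1f})")
    if abs(weight_change_6mo_kg) > 5:
        flags.append("involuntary weight change >5 kg in past 6 months")
    return flags
```

A patient weighing 50 kg at 1.75 m who has lost 6 kg in 6 months triggers both flags; a stable patient at 70 kg and 1.75 m (BMI ~22.9) triggers neither.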
Acute-Care Settings In acute-care settings, anorexia, various other diseases, test procedures, and medications can compromise dietary intake. Under such circumstances, the goal is to identify and avoid inadequate intake and to assure appropriate alimentation. Dietary assessment focuses on what patients are currently eating, whether or not they are able and willing to eat, and whether or not they experience any problems with eating. Dietary intake assessment is based on information from observed intakes; medical records; history; clinical examination; and anthropometric, biochemical, and functional status evaluations. The objective is to gather enough information to establish the likelihood of malnutrition due to poor dietary intake or other causes in order to assess whether nutritional therapy is indicated (Chap. 98e).

Simple observations may suffice to suggest inadequate oral intake. These include dietitians' and nurses' notes; observation of a patient's frequent refusal to eat or the amount of food eaten on trays; the frequent performance of tests and procedures that are likely to cause meals to be skipped; adherence to nutritionally inadequate diet orders (e.g., clear liquids or full liquids) for more than a few days; the occurrence of fever, gastrointestinal distress, vomiting, diarrhea, or a comatose state; and the presence of diseases or use of treatments that involve any part of the alimentary tract. Acutely ill patients with diet-related diseases such as diabetes need assessment because an inappropriate diet may exacerbate these conditions and adversely affect other therapies. Abnormal biochemical values (serum albumin levels <35 g/L [<3.5 g/dL]; serum cholesterol levels <3.9 mmol/L [<150 mg/dL]) are nonspecific but may indicate a need for further nutritional assessment. Most therapeutic diets offered in hospitals are calculated to meet individual nutrient requirements and the RDA if they are eaten.
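The biochemical thresholds just quoted are given in both SI and conventional units; the conversions are fixed multiplicative factors. A minimal sketch in Python (the cholesterol factor 38.67, derived from its molar mass of ~386.7 g/mol, is the standard conversion but is not stated in the text):

```python
# Threshold values (<35 g/L albumin, <3.9 mmol/L cholesterol) are from the text.

def albumin_g_per_dl(albumin_g_per_l):
    """Serum albumin: g/L -> g/dL (35 g/L = 3.5 g/dL)."""
    return albumin_g_per_l / 10.0

def cholesterol_mg_per_dl(cholesterol_mmol_per_l):
    """Total cholesterol: mmol/L -> mg/dL (multiply by 38.67)."""
    return cholesterol_mmol_per_l * 38.67
```

Applying the second function to 3.9 mmol/L gives ~151 mg/dL, matching the ~150 mg/dL threshold quoted in the text to rounding.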
Exceptions include clear liquids, some full-liquid diets, and test diets (such as those adhered to in preparation for gastrointestinal procedures), which are inadequate for several nutrients and, if possible, should not be used for more than 24 h. However, because as much as half of the food served to hospitalized patients is not eaten, it cannot be assumed that the intakes of hospitalized patients are adequate. Dietary assessment should compare how much and what kinds of food the patient has consumed with the diet that has been provided. Major deviations in intakes of energy, protein, fluids, or other nutrients of special concern for the patient's illness should be noted and corrected. Nutritional monitoring is especially important for patients who are very ill and who have extended lengths of hospital stay. Patients who are fed by enteral and parenteral routes also require special nutritional assessment and monitoring by physicians and/or dietitians with certification in nutritional support (Chap. 98e).

Ambulatory Settings The aim of dietary assessment in the outpatient setting is to determine whether or not the patient's usual diet is a health risk in itself or if it contributes to existing chronic disease-related problems. Dietary assessment also provides the basis for planning a diet that fulfills therapeutic goals while ensuring patient adherence. The outpatient dietary assessment should review the adequacy of present and usual food intakes, including vitamin and mineral supplements, oral nutritional supplements, medical foods, other dietary supplements, medications, and alcohol, because all of these may affect the patient's nutritional status. The assessment should focus on the dietary constituents that are most likely to be involved or compromised by a specific diagnosis as well as on any comorbidities that are present. More than one day's intake should be reviewed to provide a better representation of the usual diet.
There are many ways to assess the adequacy of a patient's habitual diet. These include use of a food guide, a food-exchange list, a diet history, or a food-frequency questionnaire. A commonly used food guide for healthy persons is the USDA's Choose My Plate, which is useful as a rough guide for avoiding inadequate intakes of essential nutrients as well as likely excesses in the amounts of fat (especially saturated and trans fats), sodium, sugar, and alcohol consumed (Table 95e-4). The Choose My Plate graphic emphasizes a balance between calories and nutritional needs, encouraging increased intake of fruits and vegetables, whole grains, and low-fat milk in conjunction with reduced intake of sodium and high-calorie sugary drinks. The Web version of the guide provides a calculator that tailors the number of servings suggested for healthy patients of different weights, sexes, ages, and life-cycle stages to help them to meet their needs while avoiding excess (http://www.supertracker.usda.gov/default.aspx and www.ChooseMyPlate.gov). Patients who follow ethnic or unusual dietary patterns may need extra instruction on how foods should be categorized and on the appropriate portion sizes that constitute a serving. The process of reviewing the guide with patients helps them transition to healthier dietary patterns and identifies food groups eaten in excess of recommendations or in insufficient quantities. For persons on therapeutic diets, assessment against food-exchange lists may be useful. These include, for example, the American Diabetes Association food-exchange lists for diabetes and the Academy of Nutrition and Dietetics food-exchange lists for renal disease.

TABLE 95e-4 Examples of Standard Portion Sizes at Indicated Energy Levels (Lower: 1600 kcal; Moderate: 2200 kcal; Higher: 2800 kcal)
[The portion-size values for each dietary factor were lost in extraction; see the source below.]
Note: Oils (formerly listed with portions of 5, 6, and 8 teaspoons for the lower, moderate, and higher energy levels, respectively) are no longer singled out in Choose My Plate but rather are included in the empty-calories/added-sugar category with SoFAS (calories from solid fats and added sugars). The limit is the remaining number of calories in each food pattern after intake of the recommended amounts of the nutrient-dense foods.
aFor example, 1 serving equals 1 slice bread; 1 cup ready-to-eat cereal; or 0.5 cup cooked rice, pasta, or cooked cereal. bFor example, 1 serving equals 1 oz lean meat, poultry, or fish; 1 egg; 1 tablespoon peanut butter; 0.25 cup cooked dry beans; or 0.5 oz nuts or seeds. cFor example, 1 serving equals 1 cup milk or yogurt, 1.5 oz natural cheese, or 2 oz processed cheese. dFormerly called "discretionary calorie allowance." Portions are calculated as the number of calories remaining after all of the above allotments are accounted for.
Abbreviation: oz eq, ounce equivalent.
Source: Data from U.S. Department of Agriculture (http://www.Choosemyplate.gov).

Full nutritional status assessment is reserved for seriously ill patients and those at very high nutritional risk when the cause of malnutrition is still uncertain after the initial clinical evaluation and dietary assessment. It involves multiple dimensions, including documentation of dietary intake, anthropometric measurements, biochemical measurements of blood and urine, clinical examination, health history elicitation, and functional status evaluation. Therapeutic dietary prescriptions and menu plans for most diseases are available from most hospitals and from the Academy of Nutrition and Dietetics. For further discussion of nutritional assessment, see Chap. 97.

The DRIs (e.g., the EAR, the UL, and energy needs) are estimates of physiologic requirements based on experimental evidence.
Assuming that appropriate adjustments are made for age, sex, body size, and physical activity level, these estimates should be applicable to individuals in most parts of the world. However, the AIs are based on customary and adequate intakes in U.S. and Canadian populations, which appear to be compatible with good health, rather than on a large body of direct experimental evidence. Similarly, the AMDRs represent expert opinion regarding the approximate intakes of energy-providing nutrients that are healthful in these North American populations. Thus, these measures should be used with caution in other settings. Nutrient-based standards like the DRIs have also been developed by the World Health Organization/Food and Agriculture Organization of the United Nations and are available on the Web (http://www.who.int/nutrition/topics/nutrecomm/en/index.html). The European Food Safety Authority (EFSA) Panel on Dietetic Products, Nutrition and Allergies periodically publishes its recommendations in the EFSA Journal. Other countries have promulgated similar recommendations. The different standards have many similarities in their basic concepts, definitions, and nutrient recommendation levels, but there are some differences from the DRIs as a result of the functional criteria chosen, environmental differences, the timeliness of the evidence reviewed, and expert judgment.

Chapter 96e Vitamin and Trace Mineral Deficiency and Excess
Robert M. Russell, Paolo M. Suter

Vitamins are required constituents of the human diet since they are synthesized inadequately or not at all in the human body. Only small amounts of these substances are needed to carry out essential biochemical reactions (e.g., by acting as coenzymes or prosthetic groups).
Overt vitamin or trace mineral deficiencies are rare in Western countries because of a plentiful, varied, and inexpensive food supply; food fortification; and use of supplements. However, multiple nutrient deficiencies may appear together in persons who are chronically ill or alcoholic. After gastric bypass surgery, patients are at high risk for multiple nutrient deficiencies. Moreover, subclinical vitamin and trace mineral deficiencies, as diagnosed by laboratory testing, are quite common in the normal population, especially in the geriatric age group. Conversely, because of the widespread use of nutrient supplements, nutrient toxicities are gaining pathophysiologic and clinical importance.

Victims of famine, emergency-affected and displaced populations, and refugees are at increased risk for protein-energy malnutrition and classic micronutrient deficiencies (vitamin A, iron, iodine) as well as for overt deficiencies in thiamine (beriberi), riboflavin, vitamin C (scurvy), and niacin (pellagra).

Body stores of vitamins and minerals vary tremendously. For example, stores of vitamin B12 and vitamin A are large, and an adult may not become deficient until ≥1 year after beginning to eat a deficient diet. However, folate and thiamine may become depleted within weeks among those eating a deficient diet. Therapeutic modalities can deplete essential nutrients from the body; for example, hemodialysis removes water-soluble vitamins, which must be replaced by supplementation.

Vitamins and trace minerals play several roles in diseases: (1) Deficiencies of vitamins and minerals may be caused by disease states such as malabsorption. (2) Either deficiency or excess of vitamins and minerals can cause disease in and of itself (e.g., vitamin A intoxication and liver disease). (3) Vitamins and minerals in high doses may be used as drugs (e.g., niacin for hypercholesterolemia). Since they are covered elsewhere, the hematologic-related vitamins and minerals (Chaps.
126 and 128) either are not considered or are considered only briefly in this chapter, as are the bone-related vitamins and minerals (vitamin D, calcium, phosphorus, magnesium; Chap. 423). See also Table 96e-1 and Fig. 96e-1.

THIAMINE (VITAMIN B1)
Thiamine was the first B vitamin to be identified and therefore is referred to as vitamin B1. Thiamine functions in the decarboxylation of α-ketoacids (e.g., pyruvate and α-ketoglutarate) and branched-chain amino acids and thus is essential for energy generation. In addition, thiamine pyrophosphate acts as a coenzyme for a transketolase reaction that mediates the conversion of hexose and pentose phosphates. It has been postulated that thiamine plays a role in peripheral nerve conduction, although the exact chemical reactions underlying this function are not known.

Food Sources The median intake of thiamine in the United States from food alone is 2 mg/d. Primary food sources for thiamine include yeast, organ meat, pork, legumes, beef, whole grains, and nuts. Milled rice and grains contain little thiamine. Thiamine deficiency is therefore more common in cultures that rely heavily on a rice-based diet. Tea, coffee (regular and decaffeinated), raw fish, and shellfish contain thiaminases, which can destroy the vitamin. Thus, drinking large amounts of tea or coffee can theoretically lower thiamine body stores.

Deficiency Most dietary deficiency of thiamine worldwide is the result of poor dietary intake. In Western countries, the primary causes of thiamine deficiency are alcoholism and chronic illnesses such as cancer. Alcohol interferes directly with the absorption of thiamine and with the synthesis of thiamine pyrophosphate, and it increases urinary excretion. Thiamine should always be replenished when a patient with alcoholism is being refed, as carbohydrate repletion without adequate thiamine can precipitate acute thiamine deficiency with lactic acidosis.
Other at-risk populations are women with prolonged hyperemesis gravidarum and anorexia, patients with overall poor nutritional status who are receiving parenteral glucose, and patients who have had …

Table 96e-1

Nutrient | Clinical findings of deficiency | Dietary level per day associated with overt deficiency | Contributing factors
Thiamine | Beriberi: neuropathy, muscle weakness and wasting, cardiomegaly, edema, ophthalmoplegia, confabulation | <0.3 mg/1000 kcal | Alcoholism, chronic diuretic use, hyperemesis, thiaminases in food
Riboflavin | Magenta tongue, angular stomatitis, seborrhea, cheilosis | <0.6 mg |
Niacin | Pellagra: pigmented rash of sun-exposed areas, bright red tongue, diarrhea, apathy, memory loss, disorientation | <9.0 niacin equivalents | Alcoholism, vitamin B6 deficiency, riboflavin deficiency, tryptophan deficiency
Vitamin B6 | Seborrhea, glossitis, convulsions, neuropathy, depression, confusion, microcytic anemia | <0.2 mg | Alcoholism, isoniazid
Folate | Megaloblastic anemia, atrophic glossitis, depression | <100 μg/d | Alcoholism, sulfasalazine, pyrimethamine, triamterene
Vitamin B12 | Megaloblastic anemia, loss of vibratory and position sense, abnormal gait, dementia, impotence, loss of bladder and bowel control, ↑ homocysteine, ↑ methylmalonic acid | <1.0 μg/d | Gastric atrophy (pernicious anemia), terminal ileal disease, strict vegetarianism, acid-reducing drugs (e.g., H2 blockers), metformin
Vitamin C | Scurvy: petechiae, ecchymosis, coiled hairs, inflamed and bleeding gums, joint effusion, poor wound healing, fatigue | <10 mg/d | Smoking, alcoholism
Vitamin A | Xerophthalmia, night blindness, Bitot’s spots, follicular hyperkeratosis, impaired embryonic development, immune dysfunction |  | Fat malabsorption, infection, measles, alcoholism, protein-energy malnutrition
Vitamin D | Rickets: skeletal deformation, rachitic rosary, bowed legs; osteomalacia | <2.0 μg/d | Aging, lack of sunlight exposure, fat malabsorption, deeply pigmented skin
Vitamin E | Peripheral neuropathy, spinocerebellar ataxia, skeletal muscle atrophy, retinopathy | Not described unless underlying contributing factor is present | Occurs only with fat malabsorption or genetic abnormalities of vitamin E metabolism/transport
Vitamin K | Elevated prothrombin time, bleeding | <10 μg/d | Fat malabsorption, liver disease, antibiotic use

Figure 96e-1 Structures and principal functions of vitamins associated with human disorders.

… and peripheral neuritis. Patients with dry beriberi present with a symmetric peripheral neuropathy of the motor and sensory systems, with diminished reflexes. The neuropathy affects the legs most markedly, and patients have difficulty rising from a squatting position. Alcoholic patients with chronic thiamine deficiency also may have central nervous system (CNS) manifestations known as Wernicke’s encephalopathy, which consists of horizontal nystagmus, ophthalmoplegia (due to weakness of one or more extraocular muscles), cerebellar ataxia, and mental impairment (Chap. 467). When there is an additional loss of memory and a confabulatory psychosis, the syndrome is known as Wernicke-Korsakoff syndrome. Despite the typical clinical picture and history, Wernicke-Korsakoff syndrome is underdiagnosed.

The laboratory diagnosis of thiamine deficiency usually is made by a functional enzymatic assay of transketolase activity measured before and after the addition of thiamine pyrophosphate. A >25% stimulation in response to the addition of thiamine pyrophosphate (i.e., an activity coefficient of 1.25) is interpreted as abnormal. Thiamine or the phosphorylated esters of thiamine in serum or blood also can be measured by high-performance liquid chromatography to detect deficiency.

In acute thiamine deficiency with either cardiovascular or neurologic signs, 200 mg of thiamine three times daily should be given intravenously until there is no further improvement in acute symptoms; oral thiamine (10 mg/d) should subsequently be given until recovery is complete. Cardiovascular and ophthalmoplegic improvement occurs within 24 h.
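The transketolase criterion described above reduces to a simple ratio. The sketch below (function names are illustrative, not from any laboratory standard) computes the activity coefficient from basal and thiamine pyrophosphate (TPP)-stimulated transketolase activity and flags the >25% stimulation (coefficient > 1.25) that is interpreted as abnormal:

```python
def activity_coefficient(basal_activity: float, tpp_stimulated_activity: float) -> float:
    """Erythrocyte transketolase activity coefficient:
    activity after addition of thiamine pyrophosphate divided by basal activity."""
    return tpp_stimulated_activity / basal_activity

def suggests_thiamine_deficiency(basal_activity: float, tpp_stimulated_activity: float) -> bool:
    # >25% stimulation by added TPP (activity coefficient > 1.25) is abnormal
    return activity_coefficient(basal_activity, tpp_stimulated_activity) > 1.25

# Example (arbitrary activity units): 40% stimulation after TPP
print(suggests_thiamine_deficiency(10.0, 14.0))  # prints True (coefficient 1.40)
```

The units cancel in the ratio, so any consistent measure of enzymatic activity can be used.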
Other manifestations gradually clear, although psychosis in Wernicke-Korsakoff syndrome may be permanent or may persist for several months. Other nutrient deficiencies should be corrected concomitantly.

Toxicity Although anaphylaxis has been reported after high intravenous doses of thiamine, no adverse effects have been recorded from either food or supplements at high doses. Thiamine supplements may be bought over the counter in doses of up to 50 mg/d.

Riboflavin is important for the metabolism of fat, carbohydrate, and protein, acting as a respiratory coenzyme and an electron donor. Enzymes that contain flavin adenine dinucleotide (FAD) or flavin mononucleotide (FMN) as prosthetic groups are known as flavoenzymes (e.g., succinic acid dehydrogenase, monoamine oxidase, glutathione reductase). FAD is a cofactor for methyltetrahydrofolate reductase and therefore modulates homocysteine metabolism. The vitamin also plays a role in drug and steroid metabolism, including detoxification reactions.

Although much is known about the chemical and enzymatic reactions of riboflavin, the clinical manifestations of riboflavin deficiency are nonspecific and are similar to those of other deficiencies of B vitamins. Riboflavin deficiency is manifested principally by lesions of the mucocutaneous surfaces of the mouth and skin. In addition, corneal vascularization, anemia, and personality changes have been described with riboflavin deficiency.

Deficiency and Excess Riboflavin deficiency almost always is due to dietary deficiency. Milk, other dairy products, and enriched breads and cereals are the most important dietary sources of riboflavin in the United States, although lean meat, fish, eggs, broccoli, and legumes are also good sources. Riboflavin is extremely sensitive to light, and milk should be stored in containers that protect against photodegradation.
Laboratory diagnosis of riboflavin deficiency can be made by determination of red blood cell or urinary riboflavin concentrations or by measurement of erythrocyte glutathione reductase activity, with and without added FAD. Because the capacity of the gastrointestinal tract to absorb riboflavin is limited (~20 mg after one oral dose), riboflavin toxicity has not been described.

The term niacin refers to nicotinic acid and nicotinamide and their biologically active derivatives. Nicotinic acid and nicotinamide serve as precursors of two coenzymes, nicotinamide adenine dinucleotide (NAD) and NAD phosphate (NADP), which are important in numerous oxidation and reduction reactions in the body. In addition, NAD and NADP are active in adenine diphosphate–ribose transfer reactions involved in DNA repair and calcium mobilization.

Metabolism and Requirements Nicotinic acid and nicotinamide are absorbed well from the stomach and small intestine. The bioavailability of niacin from beans, milk, meat, and eggs is high; bioavailability from cereal grains is lower. Since flour is enriched with “free” niacin (i.e., the non-coenzyme form), bioavailability is excellent. Median intakes of niacin in the United States considerably exceed the recommended dietary allowance (RDA). The amino acid tryptophan can be converted to niacin with an efficiency of 60:1 by weight. Thus, the RDA for niacin is expressed in niacin equivalents. A lower-level conversion of tryptophan to niacin occurs in vitamin B6 and/or riboflavin deficiencies and in the presence of isoniazid. The urinary excretion products of niacin include 2-pyridone and 2-methyl nicotinamide, measurements of which are used in the diagnosis of niacin deficiency.

Deficiency Niacin deficiency causes pellagra, which is found mostly among people eating corn-based diets in parts of China, Africa, and India.
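Because tryptophan converts to niacin at roughly 60:1 by weight, the RDA's niacin equivalents (NE) can be tallied as in this minimal sketch (the function name is illustrative; the 60:1 factor assumes adequate vitamin B6 and riboflavin status, since the text notes conversion falls when either is deficient):

```python
def niacin_equivalents_mg(preformed_niacin_mg: float, tryptophan_mg: float) -> float:
    """Niacin equivalents: 60 mg of dietary tryptophan counts as 1 mg of niacin."""
    return preformed_niacin_mg + tryptophan_mg / 60.0

# Example: a day's intake of 10 mg preformed niacin plus 600 mg tryptophan
print(niacin_equivalents_mg(10, 600))  # prints 20.0 (niacin equivalents, mg)
```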
Pellagra in North America is found mainly among alcoholics; among patients with congenital defects of intestinal and kidney absorption of tryptophan (Hartnup disease; Chap. 434e); and among patients with carcinoid syndrome (Chap. 113), in which there is increased conversion of tryptophan to serotonin. The antituberculosis drug isoniazid is a structural analog of niacin and can precipitate pellagra. In the setting of famine or population displacement, pellagra results from the absolute lack of niacin but also from the deficiency of micronutrients required for the conversion of tryptophan to niacin (e.g., iron, riboflavin, and pyridoxine).

The early symptoms of pellagra include loss of appetite, generalized weakness and irritability, abdominal pain, and vomiting. Bright red glossitis then ensues and is followed by a characteristic skin rash that is pigmented and scaling, particularly in skin areas exposed to sunlight. This rash is known as Casal’s necklace because it forms a ring around the neck; it is seen in advanced cases. Vaginitis and esophagitis also may occur. Diarrhea (due in part to proctitis and in part to malabsorption), depression, seizures, and dementia are also part of the pellagra syndrome. The primary manifestations of this syndrome are sometimes referred to as “the four D’s”: dermatitis, diarrhea, and dementia leading to death.

Treatment of pellagra consists of oral supplementation with 100–200 mg of nicotinamide or nicotinic acid three times daily for 5 days. High doses of nicotinic acid (2 g/d in a time-release form) are used for the treatment of elevated cholesterol and triglyceride levels and/or low high-density lipoprotein cholesterol levels (Chap. 421).

Toxicity Prostaglandin-mediated flushing due to binding of the vitamin to a G protein–coupled receptor has been observed at daily nicotinic acid doses as low as 30 mg taken as a supplement or as therapy for dyslipidemia.
There is no evidence of toxicity from niacin that is derived from food sources. Flushing always starts in the face and may be accompanied by skin dryness, itching, paresthesia, and headache. Pharmaceutical preparations of nicotinic acid combined with laropiprant, a selective prostaglandin D2 receptor 1 antagonist, or premedication with aspirin may alleviate these symptoms. Flushing is subject to tachyphylaxis and often improves with time. Nausea, vomiting, and abdominal pain also occur at similar doses of niacin. Hepatic toxicity is the most serious toxic reaction caused by sustained-release niacin and may present as jaundice with elevated aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels. A few cases of fulminant hepatitis requiring liver transplantation have been reported at doses of 3–9 g/d. Other toxic reactions include glucose intolerance, hyperuricemia, macular edema, and macular cysts. The combination of nicotinic acid preparations for dyslipidemia with 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors may increase the risk of rhabdomyolysis. The upper limit for daily niacin intake has been set at 35 mg. However, this upper limit does not pertain to the therapeutic use of niacin.

Vitamin B6 refers to a family of compounds that includes pyridoxine, pyridoxal, pyridoxamine, and their 5′-phosphate derivatives. 5′-Pyridoxal phosphate (PLP) is a cofactor for more than 100 enzymes involved in amino acid metabolism. Vitamin B6 also is involved in heme and neurotransmitter synthesis and in the metabolism of glycogen, lipids, steroids, sphingoid bases, and several vitamins, including the conversion of tryptophan to niacin.

Dietary Sources Plants contain vitamin B6 in the form of pyridoxine, whereas animal tissues contain PLP and pyridoxamine phosphate. The vitamin B6 contained in plants is less bioavailable than that in animal tissues.
Rich food sources of vitamin B6 include legumes, nuts, wheat bran, and meat, although it is present in all food groups.

Deficiency Symptoms of vitamin B6 deficiency include epithelial changes, as seen frequently with other B vitamin deficiencies. In addition, severe vitamin B6 deficiency can lead to peripheral neuropathy, abnormal electroencephalograms, and personality changes that include depression and confusion. In infants, diarrhea, seizures, and anemia have been reported. Microcytic hypochromic anemia is due to diminished hemoglobin synthesis, since the first enzyme involved in heme biosynthesis (aminolevulinate synthase) requires PLP as a cofactor (Chap. 126). In some case reports, platelet dysfunction has been reported. Since vitamin B6 is necessary for the conversion of homocysteine to cystathionine, it is possible that chronic low-grade vitamin B6 deficiency may result in hyperhomocysteinemia and increased risk of cardiovascular disease (Chaps. 291e and 434e). Independent of homocysteine, low levels of circulating vitamin B6 have been associated with inflammation and elevated levels of C-reactive protein.

Certain medications, such as isoniazid, L-dopa, penicillamine, and cycloserine, interact with PLP due to a reaction with carbonyl groups. Pyridoxine should be given concurrently with isoniazid to avoid neuropathy. The increased ratio of AST to ALT seen in alcoholic liver disease reflects the relative vitamin B6 dependence of ALT. Vitamin B6 dependency syndromes that require pharmacologic doses of vitamin B6 are rare; they include cystathionine β-synthase deficiency, pyridoxine-responsive (primarily sideroblastic) anemias, and gyrate atrophy with chorioretinal degeneration due to decreased activity of the mitochondrial enzyme ornithine aminotransferase. In these situations, 100–200 mg/d of oral vitamin B6 is required for treatment.
High doses of vitamin B6 have been used to treat carpal tunnel syndrome, premenstrual syndrome, schizophrenia, autism, and diabetic neuropathy but have not been found to be effective.

The laboratory diagnosis of vitamin B6 deficiency is generally based on low plasma PLP values (<20 nmol/L). Vitamin B6 deficiency is treated with 50 mg/d; higher doses of 100–200 mg/d are given if the deficiency is related to medication use. Vitamin B6 should not be given with L-dopa, since the vitamin interferes with the action of this drug.

Toxicity The safe upper limit for vitamin B6 has been set at 100 mg/d, although no adverse effects have been associated with high intakes of vitamin B6 from food sources only. When toxicity occurs, it causes severe sensory neuropathy, leaving patients unable to walk. Some cases of photosensitivity and dermatitis have been reported.

Folate and vitamin B12 are discussed in Chap. 128.

Both ascorbic acid and its oxidized product dehydroascorbic acid are biologically active. Actions of vitamin C include antioxidant activity, promotion of nonheme iron absorption, carnitine biosynthesis, conversion of dopamine to norepinephrine, and synthesis of many peptide hormones. Vitamin C is also important for connective tissue metabolism and cross-linking (proline hydroxylation), and it is a component of many drug-metabolizing enzyme systems, particularly the mixed-function oxidase systems.

Absorption and Dietary Sources Vitamin C is almost completely absorbed if <100 mg is administered in a single dose; however, only 50% or less is absorbed at doses >1 g. Enhanced degradation and fecal and urinary excretion of vitamin C occur at higher intake levels. Good dietary sources of vitamin C include citrus fruits, green vegetables (especially broccoli), tomatoes, and potatoes. Consumption of five servings of fruits and vegetables a day provides vitamin C in excess of the RDA of 90 mg/d for men and 75 mg/d for women. In addition, ~40% of the U.S.
population consumes vitamin C as a dietary supplement in which “natural forms” of the vitamin are no more bioavailable than synthetic forms. Smoking, hemodialysis, pregnancy, and stress (e.g., infection, trauma) appear to increase vitamin C requirements. Deficiency Vitamin C deficiency causes scurvy. In the United States, this condition is seen primarily among the poor and the elderly, in alcoholics who consume <10 mg/d of vitamin C, and in individuals consuming macrobiotic diets. Vitamin C deficiency also can occur in young adults who eat severely unbalanced diets. In addition to generalized fatigue, symptoms of scurvy primarily reflect impaired formation of mature connective tissue and include bleeding into the skin (petechiae, ecchymoses, perifollicular hemorrhages); inflamed and bleeding gums; and manifestations of bleeding into joints, the peritoneal cavity, the pericardium, and the adrenal glands. In children, vitamin C deficiency may cause impaired bone growth. Laboratory diagnosis of vitamin C deficiency is based on low plasma or leukocyte levels. Administration of vitamin C (200 mg/d) improves the symptoms of scurvy within several days. High-dose vitamin C supplementation (e.g., 1–2 g/d) may slightly decrease the symptoms and duration of upper respiratory tract infections. Vitamin C supplementation has also been reported to be useful in Chédiak-Higashi syndrome (Chap. 80) and osteogenesis imperfecta (Chap. 427). Diets high in vitamin C have been claimed to lower the incidence of certain cancers, particularly esophageal and gastric cancers. If proved, this effect may be due to the fact that vitamin C can prevent the conversion of nitrites and secondary amines to carcinogenic nitrosamines. However, an intervention study from China did not show vitamin C to be protective. A potential role for parenteral ascorbic acid in the treatment of advanced cancers has been suggested. 
Toxicity Taking >2 g of vitamin C in a single dose may result in abdominal pain, diarrhea, and nausea. Since vitamin C may be metabolized to oxalate, it is feared that chronic high-dose vitamin C supplementation could result in an increased prevalence of kidney stones. However, except in patients with preexisting renal disease, this association has not been borne out in several trials. Nevertheless, it is reasonable to advise patients with a history of kidney stones not to take large doses of vitamin C. There is also an unproven but possible risk that chronic high doses of vitamin C could promote iron overload and iron toxicity. High doses of vitamin C can induce hemolysis in patients with glucose-6-phosphate dehydrogenase deficiency, and doses >1 g/d can cause false-negative guaiac reactions and interfere with tests for urinary glucose. High doses may interfere with the activity of certain drugs (e.g., bortezomib in myeloma patients).

Biotin is a water-soluble vitamin that plays a role in gene expression, gluconeogenesis, and fatty acid synthesis and serves as a CO2 carrier on the surface of both cytosolic and mitochondrial carboxylase enzymes. The vitamin also functions in the catabolism of specific amino acids (e.g., leucine) and in gene regulation by histone biotinylation. Excellent food sources of biotin include organ meat such as liver or kidney, soy and other beans, yeast, and egg yolks; however, egg white contains the protein avidin, which strongly binds the vitamin and reduces its bioavailability.

Biotin deficiency due to low dietary intake is rare; rather, deficiency is due to inborn errors of metabolism. Biotin deficiency has been induced by experimental feeding of egg white diets and by biotin-free parenteral nutrition in patients with short bowels. In adults, biotin deficiency results in mental changes (depression, hallucinations), paresthesia, anorexia, and nausea.
A scaling, seborrheic, and erythematous rash may occur around the eyes, nose, and mouth as well as on the extremities. In infants, biotin deficiency presents as hypotonia, lethargy, and apathy. In addition, infants may develop alopecia and a characteristic rash that includes the ears. The laboratory diagnosis of biotin deficiency can be established on the basis of a decreased concentration of urinary biotin (or its major metabolites), increased urinary excretion of 3-hydroxyisovaleric acid after a leucine challenge, or decreased activity of biotin-dependent enzymes in lymphocytes (e.g., propionyl-CoA carboxylase). Treatment requires pharmacologic doses of biotin, i.e., up to 10 mg/d. No toxicity is known.

Pantothenic acid is a component of coenzyme A and phosphopantetheine, which are involved in fatty acid metabolism and the synthesis of cholesterol, steroid hormones, and all compounds formed from isoprenoid units. In addition, pantothenic acid is involved in the acetylation of proteins. The vitamin is excreted in the urine, and the laboratory diagnosis of deficiency is based on low urinary vitamin levels. The vitamin is ubiquitous in the food supply. Liver, yeast, egg yolks, whole grains, and vegetables are particularly good sources. Human pantothenic acid deficiency has been demonstrated only by experimental feeding of diets low in pantothenic acid or by administration of a specific pantothenic acid antagonist. The symptoms of pantothenic acid deficiency are nonspecific and include gastrointestinal disturbance, depression, muscle cramps, paresthesia, ataxia, and hypoglycemia. Pantothenic acid deficiency is believed to have caused the “burning feet syndrome” seen in prisoners of war during World War II. No toxicity of this vitamin has been reported.

Choline is a precursor for acetylcholine, phospholipids, and betaine.
Choline is necessary for the structural integrity of cell membranes, cholinergic neurotransmission, lipid and cholesterol metabolism, methyl-group metabolism, and transmembrane signaling. Recently, a recommended adequate intake was set at 550 mg/d for men and 425 mg/d for women, although certain genetic polymorphisms can increase an individual’s requirement. Choline is thought to be a “conditionally essential” nutrient in that its de novo synthesis occurs in the liver and results in lesser-than-used amounts only under certain stress conditions (e.g., alcoholic liver disease). The dietary requirement for choline depends on the status of other nutrients involved in methyl-group metabolism (folate, vitamin B12, vitamin B6, and methionine) and thus varies widely. Choline is widely distributed in food (e.g., egg yolks, wheat germ, organ meat, milk) in the form of lecithin (phosphatidylcholine).

Choline deficiency has occurred in patients receiving parenteral nutrition devoid of choline. Deficiency results in fatty liver, elevated aminotransferase levels, and skeletal muscle damage with high creatine phosphokinase values. The diagnosis of choline deficiency is currently based on low plasma levels, although nonspecific conditions (e.g., heavy exercise) may also suppress plasma levels.

Toxicity from choline results in hypotension, cholinergic sweating, diarrhea, salivation, and a fishy body odor. The upper limit for choline intake has been set at 3.5 g/d. Because of its ability to lower cholesterol and homocysteine levels, choline treatment has been suggested for patients with dementia and patients at high risk of cardiovascular disease. However, the benefits of such treatment have not been firmly documented. Choline- and betaine-restricted diets are of therapeutic value in trimethylaminuria (“fish odor syndrome”).

Flavonoids constitute a large family of polyphenols that contribute to the aroma, taste, and color of fruits and vegetables.
Major groups of dietary flavonoids include anthocyanidins in berries; catechins in green tea and chocolate; flavonols (e.g., quercetin) in broccoli, kale, leeks, onions, and the skins of grapes and apples; and isoflavones (e.g., genistein) in legumes. Isoflavones have a low bioavailability and are partially metabolized by the intestinal flora. The dietary intake of flavonoids is estimated at 10–100 mg/d; this figure is almost certainly an underestimate attributable to a lack of information on their concentrations in many foods. Several flavonoids have antioxidant activity and affect cell signaling. From observational epidemiologic studies and limited clinical (human and animal) studies, flavonoids have been postulated to play a role in the prevention of several chronic diseases, including neurodegenerative disease, diabetes, and osteoporosis. The ultimate importance and usefulness of these compounds against human disease have not been demonstrated.

Vitamin A, in the strictest sense, refers to retinol. However, the oxidized metabolites retinaldehyde and retinoic acid are also biologically active compounds. The term retinoids includes all molecules (including synthetic molecules) that are chemically related to retinol. Retinaldehyde (11-cis) is the essential form of vitamin A that is required for normal vision, whereas retinoic acid is necessary for normal morphogenesis, growth, and cell differentiation. Retinoic acid does not function in vision and, in contrast to retinol, is not involved in reproduction. Vitamin A also plays a role in iron utilization, humoral immunity, T cell–mediated immunity, natural killer cell activity, and phagocytosis. Vitamin A is commercially available in esterified forms (e.g., acetate, palmitate), which are more stable than other forms.

There are more than 600 carotenoids in nature, ~50 of which can be metabolized to vitamin A. β-Carotene is the most prevalent carotenoid with provitamin A activity in the food supply.
In humans, significant fractions of carotenoids are absorbed intact and are stored in liver and fat. It is estimated that ≥12 μg (range, 4–27 μg) of dietary all-trans β-carotene is equivalent to 1 μg of retinol activity, whereas the figure is ≥24 μg for other dietary provitamin A carotenoids (e.g., cryptoxanthin, α-carotene). The vitamin A equivalency for a β-carotene supplement in an oily solution is 2:1.

Metabolism The liver contains ~90% of the vitamin A reserves and secretes vitamin A in the form of retinol, which is bound to retinol-binding protein. Once binding has occurred, the retinol-binding protein complex interacts with a second protein, transthyretin. This trimolecular complex functions to prevent vitamin A from being filtered by the kidney glomerulus, thus protecting the body against the toxicity of retinol and allowing retinol to be taken up by specific cell-surface receptors that recognize retinol-binding protein. A certain amount of vitamin A enters peripheral cells even if it is not bound to retinol-binding protein. After retinol is internalized by the cell, it becomes bound to a series of cellular retinol-binding proteins, which function as sequestering and transporting agents as well as co-ligands for enzymatic reactions. Certain cells also contain retinoic acid–binding proteins, which have sequestering functions but also shuttle retinoic acid to the nucleus and enable its metabolism.

Retinoic acid is a ligand for certain nuclear receptors that act as transcription factors. Two families of receptors (retinoic acid receptors [RARs] and retinoid X receptors [RXRs]) are active in retinoid-mediated gene transcription. Retinoid receptors regulate transcription by binding as dimeric complexes to specific DNA sites—the retinoic acid response elements—in target genes (Chap. 400e). The receptors can either stimulate or repress gene expression in response to their ligands.
RARs bind all-trans retinoic acid and 9-cis-retinoic acid, whereas RXRs bind only 9-cis-retinoic acid. The retinoid receptors play an important role in controlling cell proliferation and differentiation. Retinoic acid is useful in the treatment of promyelocytic leukemia (Chap. 132) and also is used in the treatment of cystic acne because it inhibits keratinization, decreases sebum secretion, and possibly alters the inflammatory reaction (Chap. 71). RXRs dimerize with other nuclear receptors to function as coregulators of genes responsive to retinoids, thyroid hormone, and calcitriol. RXR agonists induce insulin sensitivity experimentally, perhaps because RXRs are cofactors for the peroxisome proliferator-activated receptors, which are targets for thiazolidinedione drugs such as rosiglitazone and troglitazone (Chap. 418).

Dietary Sources The retinol activity equivalent (RAE) is used to express the vitamin A value of food: 1 RAE is defined as 1 μg of retinol (0.003491 μmol), 12 μg of β-carotene, and 24 μg of other provitamin A carotenoids. In older literature, vitamin A often was expressed in international units (IU), with 1 μg of retinol equal to 3.33 IU of retinol and 20 IU of β-carotene, but these units are no longer in scientific use. Liver, fish, and eggs are excellent food sources for preformed vitamin A; vegetable sources of provitamin A carotenoids include dark green and deeply colored fruits and vegetables. Moderate cooking of vegetables enhances carotenoid release for uptake in the gut. Carotenoid absorption is also aided by some fat in a meal.

Infants are particularly susceptible to vitamin A deficiency because neither breast nor cow’s milk supplies enough vitamin A to prevent deficiency. In developing countries, chronic dietary deficiency is the main cause of vitamin A deficiency and is exacerbated by infection.
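The RAE arithmetic above (1 μg RAE = 1 μg retinol = 12 μg dietary β-carotene = 24 μg other provitamin A carotenoids; 1 μg retinol = 3.33 IU in older literature) can be sketched as follows. Function names are illustrative only:

```python
def retinol_activity_equivalents_ug(retinol_ug: float = 0.0,
                                    beta_carotene_ug: float = 0.0,
                                    other_provitamin_a_ug: float = 0.0) -> float:
    """Vitamin A value of food in micrograms of retinol activity equivalents (RAE):
    1 ug RAE = 1 ug retinol = 12 ug dietary beta-carotene
             = 24 ug other provitamin A carotenoids (e.g., cryptoxanthin, alpha-carotene)."""
    return retinol_ug + beta_carotene_ug / 12.0 + other_provitamin_a_ug / 24.0

def iu_to_ug_retinol(iu: float) -> float:
    """Convert the older international units: 1 ug retinol = 3.33 IU."""
    return iu / 3.33

# Example: a meal providing 300 ug preformed retinol and 600 ug dietary beta-carotene
print(retinol_activity_equivalents_ug(retinol_ug=300, beta_carotene_ug=600))  # prints 350.0
```

Note that these factors apply to dietary carotenoids; per the text, a β-carotene supplement in an oily solution converts at 2:1 rather than 12:1.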
In early childhood, low vitamin A status results from inadequate intakes of animal food sources and edible oils, both of which are expensive, coupled with seasonal unavailability of vegetables and fruits and lack of marketed fortified food products. Concurrent zinc deficiency can interfere with the mobilization of vitamin A from liver stores. Alcohol interferes with the conversion of retinol to retinaldehyde in the eye by competing for alcohol (retinol) dehydrogenase. Drugs that interfere with the absorption of vitamin A include mineral oil, neomycin, and cholestyramine.

Deficiency Vitamin A deficiency is endemic in areas where diets are chronically poor, especially in southern Asia, sub-Saharan Africa, some parts of Latin America, and the western Pacific, including parts of China. Vitamin A status is usually assessed by measuring serum retinol (normal range, 1.05–3.50 μmol/L [30–100 μg/dL]) or blood-spot retinol or by tests of dark adaptation. Stable isotopic or invasive liver biopsy methods are available to estimate total body stores of vitamin A. As judged by deficient serum retinol (<0.70 μmol/L [20 μg/dL]), vitamin A deficiency worldwide is present in >90 million preschool-age children, among whom >4 million have an ocular manifestation of deficiency termed xerophthalmia. This condition includes milder stages of night blindness and conjunctival xerosis (dryness) with Bitot’s spots (white patches of keratinized epithelium appearing on the sclera) as well as rare, potentially blinding corneal ulceration and necrosis. Keratomalacia (softening of the cornea) leads to corneal scarring that blinds at least a quarter of a million children each year and is associated with fatality rates of 4–25%. However, vitamin A deficiency at any stage poses an increased risk of death from diarrhea, dysentery, measles, malaria, or respiratory disease. Vitamin A deficiency can compromise barrier, innate, and acquired immune defenses to infection.
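The serum retinol cutoffs quoted above are given in both μmol/L and μg/dL; the two scales are related through the molecular weight of retinol (~286.45 g/mol, an assumption of this sketch). The helper below converts units and classifies a value against the stated thresholds; the labels "low" and "above reference range" for values outside the quoted cutoffs are my own illustrative wording, not diagnostic categories from the text:

```python
RETINOL_MW_G_PER_MOL = 286.45  # molecular weight of retinol (assumed for unit conversion)

def retinol_umol_per_l_to_ug_per_dl(umol_per_l: float) -> float:
    # umol/L x (286.45 ug/umol) / (10 dL/L) ~= 28.6 ug/dL per umol/L,
    # consistent with 0.70 umol/L ~ 20 ug/dL and 1.05 umol/L ~ 30 ug/dL in the text
    return umol_per_l * RETINOL_MW_G_PER_MOL / 10.0

def classify_serum_retinol(umol_per_l: float) -> str:
    """Thresholds from the text: deficient <0.70 umol/L; normal 1.05-3.50 umol/L."""
    if umol_per_l < 0.70:
        return "deficient"
    if umol_per_l < 1.05:
        return "low"
    if umol_per_l <= 3.50:
        return "normal"
    return "above reference range"

print(classify_serum_retinol(0.5))  # prints deficient
```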
In areas where deficiency is widely prevalent, vitamin A supplementation can markedly reduce the risk of childhood mortality (by 23–34%, on average). About 10% of pregnant women in undernourished settings also develop night blindness (assessed by history) during the latter half of pregnancy, and this moderate vitamin A deficiency is associated with an increased risk of maternal infection and death.

Any stage of xerophthalmia should be treated with 60 mg (or RAE) of vitamin A in oily solution, usually contained in a soft-gel capsule. The same dose is repeated 1 and 14 days later. Doses should be reduced by half for patients 6–11 months of age. Mothers with night blindness or Bitot’s spots should be given vitamin A orally, either 3 mg daily or 7.5 mg twice a week for 3 months. These regimens are efficacious, and they are less expensive and more widely available than injectable water-miscible vitamin A.

A common approach to prevention is to provide vitamin A supplementation every 4–6 months to young children and infants (both HIV-positive and HIV-negative) in high-risk areas. Infants 6–11 months of age should receive 30 mg vitamin A; children 12–59 months of age, 60 mg. For reasons that are not clear, vitamin A supplementation has not proven useful in high-risk settings for preventing morbidity or death among infants 1–5 months of age.

Uncomplicated vitamin A deficiency is rare in industrialized countries. One high-risk group—extremely low-birth-weight (<1000-g) infants—is likely to be vitamin A–deficient and should receive a supplement of 1500 μg (or RAE) three times a week for 4 weeks. Severe measles in any society can lead to secondary vitamin A deficiency. Children hospitalized with measles should receive two 60-mg doses of vitamin A on two consecutive days.
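The age-banded regimens above can be summarized schematically. This is only a restatement of the text's numbers for illustration (function names are invented, and this is not a clinical dosing tool):

```python
def preventive_vitamin_a_dose_mg(age_months: int) -> float:
    """Periodic (every 4-6 months) preventive supplementation per the text:
    infants 6-11 months receive 30 mg; children 12-59 months receive 60 mg.
    Returns 0.0 outside those bands (the text notes no proven benefit at 1-5 months)."""
    if 6 <= age_months <= 11:
        return 30.0
    if 12 <= age_months <= 59:
        return 60.0
    return 0.0

def xerophthalmia_treatment_doses_mg(age_months: int) -> list:
    """Treatment of any stage of xerophthalmia per the text: 60 mg vitamin A
    in oily solution on days 0, 1, and 14, halved for patients 6-11 months of age."""
    dose = 30.0 if 6 <= age_months <= 11 else 60.0
    return [dose, dose, dose]  # administered on days 0, 1, and 14
```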
Vitamin A deficiency most often occurs in patients with malabsorptive diseases (e.g., celiac sprue, short-bowel syndrome) who have abnormal dark adaptation or symptoms of night blindness without other ocular changes. Typically, such patients are treated for 1 month with 15 mg/d of a water-miscible preparation of vitamin A. This treatment is followed by a lower maintenance dose, with the exact amount determined by monitoring serum retinol.

No specific signs or symptoms result from carotenoid deficiency. It was postulated that β-carotene would be an effective chemopreventive agent for cancer because numerous epidemiologic studies had shown that diets high in β-carotene were associated with lower incidences of cancers of the respiratory and digestive systems. However, intervention studies in smokers found that treatment with high doses of β-carotene actually resulted in more lung cancers than did treatment with placebo. Non–provitamin A carotenoids such as lutein and zeaxanthin have been suggested to confer protection against macular degeneration, and one large-scale intervention study did not show a beneficial effect except in those with a low lutein status. The use of the non–provitamin A carotenoid lycopene to protect against prostate cancer has been proposed. Again, however, the effectiveness of these agents has not been proved by intervention studies, and the mechanisms underlying these purported biologic actions are unknown.

Selective plant-breeding techniques that lead to a higher provitamin A content in staple foods may decrease vitamin A malnutrition in low-income countries. Moreover, a recently developed genetically modified food (Golden Rice) has an improved β-carotene–to–vitamin A conversion ratio of ~3:1.

Toxicity The acute toxicity of vitamin A was first noted in Arctic explorers who ate polar bear liver and has also been seen after administration of 150 mg to adults or 100 mg to children.
Acute toxicity is manifested by increased intracranial pressure, vertigo, diplopia, bulging fontanels (in children), seizures, and exfoliative dermatitis; it may result in death. Among children being treated for vitamin A deficiency according to the protocols outlined above, transient bulging of fontanels occurs in 2% of infants, and transient nausea, vomiting, and headache occur in 5% of preschoolers. Chronic vitamin A intoxication is largely a concern in industrialized countries and has been seen in otherwise healthy adults who ingest 15 mg/d and children who ingest 6 mg/d over a period of several months. Manifestations include dry skin, cheilosis, glossitis, vomiting, alopecia, bone demineralization and pain, hypercalcemia, lymph node enlargement, hyperlipidemia, amenorrhea, and features of pseudotumor cerebri with increased intracranial pressure and papilledema. Liver fibrosis with portal hypertension and bone demineralization may result from chronic vitamin A intoxication. Provision of vitamin A in excess to pregnant women has resulted in spontaneous abortion and in congenital malformations, including craniofacial abnormalities and valvular heart disease. In pregnancy, the daily dose of vitamin A should not exceed 3 mg. Commercially available retinoid derivatives are also toxic, including 13-cis-retinoic acid, which has been associated with birth defects. Thus contraception should be continued for at least 1 year and possibly longer in women who have taken 13-cis-retinoic acid. In malnourished children, vitamin A supplements (30–60 mg), in amounts calculated as a function of age and given in several rounds over 2 years, are considered to amplify nonspecific effects of vaccines. However, for unclear reasons, there may be a negative effect on mortality rates in incompletely vaccinated girls. High doses of carotenoids do not result in toxic symptoms but should be avoided in smokers due to an increased risk of lung cancer. 
Very high doses of β-carotene (~200 mg/d) have been used to treat or prevent the skin rashes of erythropoietic protoporphyria. Carotenemia, which is characterized by a yellowing of the skin (in creases of the palms and soles) but not the sclerae, may follow ingestion of >30 mg of β-carotene daily. Hypothyroid patients are particularly susceptible to the development of carotenemia due to impaired breakdown of carotene to vitamin A. Reduction of carotenes in the diet results in the disappearance of skin yellowing and carotenemia over a period of 30–60 days. VITAMIN D The metabolism of the fat-soluble vitamin D is described in detail in Chap. 423. The biologic effects of this vitamin are mediated by vitamin D receptors, which are found in most tissues; binding with these receptors potentially expands vitamin D actions on nearly all cell systems and organs (e.g., immune cells, brain, breast, colon, and prostate) as well as exerting classic endocrine effects on calcium metabolism and bone health. Vitamin D is thought to be important for maintaining normal function of many nonskeletal tissues such as muscle (including heart muscle), for immune function, and for inflammation as well as for cell proliferation and differentiation. Studies have shown that vitamin D may be useful as adjunctive treatment for tuberculosis, psoriasis, and multiple sclerosis or for the prevention of certain cancers. Vitamin D insufficiency may increase the risk of type 1 diabetes mellitus, cardiovascular disease (insulin resistance, hypertension, or low-grade inflammation), or brain dysfunction (e.g., depression). However, the exact physiologic roles of vitamin D in these nonskeletal diseases and the importance of these roles have not been clarified. The skin is a major source of vitamin D, which is synthesized upon skin exposure to ultraviolet B radiation (UV-B; wavelength, 290–320 nm). Except for fish, food (unless fortified) contains only limited amounts of vitamin D. 
Vitamin D2 (ergocalciferol) is obtained from plant sources and is the chemical form found in some supplements. Deficiency Vitamin D status has been assessed by measuring serum levels of 25-hydroxyvitamin D (25[OH]D); however, there is no consensus on a uniform assay or on optimal serum levels. The optimal level might, in fact, differ according to the targeted disease entity. Epidemiologic and experimental data indicate that a 25(OH)D level of >20 ng/mL (≥50 nmol/L; to convert ng/mL to nmol/L, multiply by 2.496) is sufficient for good bone health. Some experts advocate higher serum levels (e.g., >30 ng/mL) for other desirable endpoints of vitamin D action. There is insufficient evidence to recommend combined vitamin D and calcium supplementation as a primary preventive strategy for reduction of the incidence of fractures in healthy men and premenopausal women. Risk factors for vitamin D deficiency are old age, lack of sun exposure, dark skin (especially among residents of northern latitudes), fat malabsorption, and obesity. Rickets represents the classic disease of vitamin D deficiency. Signs of deficiency are muscle soreness, weakness, and bone pain. Some of these effects are independent of calcium intake. The U.S. National Academy of Sciences recently concluded that the majority of North Americans are receiving adequate amounts of vitamin D (RDA = 15 μg/d or 600 IU/d; Chap. 95e). However, for people older than 70 years, the RDA is set at 20 μg/d (800 IU/d). The consumption of fortified or enriched foods as well as suberythemal sun exposure should be encouraged for people at risk for vitamin D deficiency. If adequate intake is impossible, vitamin D supplements should be taken, especially during the winter months. Vitamin D deficiency can be treated by the oral administration of 50,000 IU/week for 6–8 weeks followed by a maintenance dose of 800 IU/d (20 μg/d) from food and supplements once normal plasma levels have been attained. 
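The unit conversion quoted above (multiply ng/mL by 2.496 to obtain nmol/L) can be checked with a one-line helper; this is an illustrative sketch, and the function name is ours:

```python
# Serum 25(OH)D unit conversion quoted in the text:
# nmol/L = ng/mL x 2.496.  Illustrative helper only.

NG_ML_TO_NMOL_L = 2.496

def ng_ml_to_nmol_l(ng_ml: float) -> float:
    """Convert a serum 25(OH)D level from ng/mL to nmol/L."""
    return ng_ml * NG_ML_TO_NMOL_L

# The 20-ng/mL bone-health threshold corresponds to ~50 nmol/L:
print(round(ng_ml_to_nmol_l(20), 1))  # 49.9
```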
The physiologic effects of vitamin D2 and vitamin D3 are identical when these vitamins are ingested over long periods. Toxicity The upper limit of intake has been set at 4000 IU/d. Contrary to earlier beliefs, acute vitamin D intoxication is rare and usually is caused by the uncontrolled and excessive ingestion of supplements or by faulty food fortification practices. High plasma levels of 1,25(OH)2 vitamin D and calcium are central features of toxicity and mandate discontinuation of vitamin D and calcium supplements; in addition, treatment of hypercalcemia may be required. VITAMIN E Vitamin E is the collective designation for all stereoisomers of tocopherols and tocotrienols, although only the RRR tocopherols meet human requirements. Vitamin E acts as a chain-breaking antioxidant and is an efficient peroxyl radical scavenger that protects low-density lipoproteins and polyunsaturated fats in membranes from oxidation. A network of other antioxidants (e.g., vitamin C, glutathione) and enzymes maintains vitamin E in a reduced state. Vitamin E also inhibits prostaglandin synthesis and the activities of protein kinase C and phospholipase A2. Absorption and Metabolism After absorption, vitamin E is taken up from chylomicrons by the liver, and a hepatic α-tocopherol transport protein mediates intracellular vitamin E transport and incorporation into very low density lipoprotein. The transport protein has a particular affinity for the RRR isomeric form of α-tocopherol; thus, this natural isomer has the most biologic activity. Requirement Vitamin E is widely distributed in the food supply, with particularly high levels in sunflower oil, safflower oil, and wheat germ oil; γ-tocotrienols are notably present in soybean and corn oils. Vitamin E is also found in meats, nuts, and cereal grains, and small amounts are present in fruits and vegetables. Vitamin E pills containing doses of 50–1000 mg are ingested by ~10% of the U.S. population. 
The RDA for vitamin E is 15 mg/d (34.9 μmol or 22.5 IU) for all adults. Diets high in polyunsaturated fats may necessitate a slightly higher intake of vitamin E. Dietary deficiency of vitamin E does not exist. Vitamin E deficiency is seen only in severe and prolonged malabsorptive diseases, such as celiac disease, or after small-intestinal resection or bariatric surgery. Children with cystic fibrosis or prolonged cholestasis may develop vitamin E deficiency characterized by areflexia and hemolytic anemia. Children with abetalipoproteinemia cannot absorb or transport vitamin E and become deficient quite rapidly. A familial form of isolated vitamin E deficiency also exists; it is due to a defect in the α-tocopherol transport protein. Vitamin E deficiency causes axonal degeneration of the large myelinated axons and results in posterior column and spinocerebellar symptoms. Peripheral neuropathy is initially characterized by areflexia, with progression to an ataxic gait, and by decreased vibration and position sensations. Ophthalmoplegia, skeletal myopathy, and pigmented retinopathy may also be features of vitamin E deficiency. A deficiency of either vitamin E or selenium in the host has been shown to increase certain viral mutations and, therefore, virulence. The laboratory diagnosis of vitamin E deficiency is based on low blood levels of α-tocopherol (<5 μg/mL, or <0.8 mg of α-tocopherol per gram of total lipids). Symptomatic vitamin E deficiency should be treated with 800–1200 mg of α-tocopherol per day. Patients with abetalipoproteinemia may need as much as 5000–7000 mg/d. Children with symptomatic vitamin E deficiency should be treated orally with water-miscible esters (400 mg/d); alternatively, 2 mg/kg per day may be administered intramuscularly. Vitamin E in high doses may protect against oxygen-induced retrolental fibroplasia and bronchopulmonary dysplasia as well as intraventricular hemorrhage of prematurity. 
Vitamin E has been suggested to increase sexual performance, treat intermittent claudication, and slow the aging process, but evidence for these properties is lacking. When given in combination with other antioxidants, vitamin E may help prevent macular degeneration. High doses (60–800 mg/d) of vitamin E have been shown in controlled trials to improve parameters of immune function and reduce colds in nursing home residents, but intervention studies using vitamin E to prevent cardiovascular disease or cancer have not shown efficacy, and, at doses >400 mg/d, vitamin E may even increase all-cause mortality rates. Toxicity All forms of vitamin E are absorbed and could contribute to toxicity; however, the toxicity risk seems to be rather low as long as liver function is normal. High doses of vitamin E (>800 mg/d) may reduce platelet aggregation and interfere with vitamin K metabolism and are therefore contraindicated in patients taking warfarin and antiplatelet agents (such as aspirin or clopidogrel). Nausea, flatulence, and diarrhea have been reported at doses >1 g/d. VITAMIN K There are two natural forms of vitamin K: vitamin K1, also known as phylloquinone, from vegetable and animal sources, and vitamin K2, or menaquinone, which is synthesized by bacterial flora and found in hepatic tissue. Phylloquinone can be converted to menaquinone in some organs. Vitamin K is required for the posttranslational carboxylation of glutamic acid, which is necessary for calcium binding to γ-carboxylated proteins such as prothrombin (factor II); factors VII, IX, and X; protein C; protein S; and proteins found in bone (osteocalcin) and vascular smooth muscle (e.g., matrix Gla protein). However, the importance of vitamin K for bone mineralization and prevention of vascular calcification is not known. Warfarin-type drugs inhibit γ-carboxylation by preventing the conversion of vitamin K to its active hydroquinone form. 
Dietary Sources Vitamin K is found in green leafy vegetables such as kale and spinach, and appreciable amounts are also present in margarine and liver. Vitamin K is present in vegetable oils; olive, canola, and soybean oils are particularly rich sources. The average daily intake by Americans is estimated to be ~100 μg/d. Deficiency The symptoms of vitamin K deficiency are due to hemorrhage; newborns are particularly susceptible because of low fat stores, low breast milk levels of vitamin K, relative sterility of the infantile intestinal tract, liver immaturity, and poor placental transport. Intracranial bleeding as well as gastrointestinal and skin bleeding can occur in vitamin K–deficient infants 1–7 days after birth. Thus, vitamin K (0.5–1 mg IM) is given prophylactically at delivery. Vitamin K deficiency in adults may be seen in patients with chronic small-intestinal disease (e.g., celiac disease, Crohn's disease), in those with obstructed biliary tracts, or after small-bowel resection. Broad-spectrum antibiotic treatment can precipitate vitamin K deficiency by reducing numbers of gut bacteria, which synthesize menaquinones, and by inhibiting the metabolism of vitamin K. In patients receiving warfarin therapy, the anti-obesity drug orlistat can lead to changes in the international normalized ratio due to vitamin K malabsorption. Vitamin K deficiency usually is diagnosed on the basis of an elevated prothrombin time or reduced clotting factors, although vitamin K may also be measured directly by high-pressure liquid chromatography. Vitamin K deficiency is treated with a parenteral dose of 10 mg. For patients with chronic malabsorption, 1–2 mg/d should be given orally or 1–2 mg per week can be taken parenterally. Patients with liver disease may have an elevated prothrombin time because of liver cell destruction as well as vitamin K deficiency. 
If an elevated prothrombin time does not improve during vitamin K therapy, it can be deduced that this abnormality is not the result of vitamin K deficiency. Toxicity Toxicity from dietary phylloquinones and menaquinones has not been described. High doses of vitamin K can impair the actions of oral anticoagulants. See also Table 96e-2. See Chap. 423. ZINC Zinc is an integral component of many metalloenzymes in the body; it is involved in the synthesis and stabilization of proteins, DNA, and RNA and plays a structural role in ribosomes and membranes. Zinc is necessary for the binding of steroid hormone receptors and several other transcription factors to DNA. Zinc is absolutely required for normal spermatogenesis, fetal growth, and embryonic development. Absorption The absorption of zinc from the diet is inhibited by dietary phytate, fiber, oxalate, iron, and copper as well as by certain drugs, including penicillamine, sodium valproate, and ethambutol. Meat, shellfish, nuts, and legumes are good sources of bioavailable zinc, whereas zinc in grains and legumes is less available for absorption. Deficiency Zinc deficiency has been described in many diseases, including diabetes mellitus, HIV/AIDS, cirrhosis, alcoholism, inflammatory bowel disease, malabsorption syndromes, and sickle cell disease. In these diseases, mild chronic zinc deficiency can cause stunted growth in children, decreased taste sensation (hypogeusia), and impaired immune function. Severe chronic zinc deficiency has been described as a cause of hypogonadism and dwarfism in several Middle Eastern countries. In these children, hypopigmented hair is also part of the syndrome. Acrodermatitis enteropathica is a rare autosomal recessive disorder characterized by abnormalities in zinc absorption. Clinical manifestations include diarrhea, alopecia, muscle wasting, depression, irritability, and a rash involving the extremities, face, and perineum. The rash is characterized by vesicular and pustular crusting with scaling and erythema. 
Occasional patients with Wilson's disease have developed zinc deficiency as a consequence of penicillamine therapy (Chap. 429). Zinc deficiency is prevalent in many developing countries and usually coexists with other micronutrient deficiencies (especially iron deficiency). Zinc (20 mg/d until recovery) may be an effective adjunctive therapeutic strategy for diarrheal disease and pneumonia in children ≥6 months of age. The diagnosis of zinc deficiency is usually based on a serum zinc level <12 μmol/L (<70 μg/dL). Pregnancy and birth control pills may cause a slight depression in serum zinc levels, and hypoalbuminemia from any cause can result in hypozincemia. In acute stress situations, zinc may be redistributed from serum into tissues. Zinc deficiency may be treated with 60 mg of elemental zinc taken by mouth twice a day. Zinc gluconate lozenges (13 mg of elemental zinc every 2 h while awake) have been reported to reduce the duration and symptoms of the common cold in adults, but study results are conflicting. Toxicity Acute zinc toxicity after oral ingestion causes nausea, vomiting, and fever. Zinc fumes from welding may also be toxic and cause fever, respiratory distress, excessive salivation, sweating, and headache. Chronic large doses of zinc may depress immune function and cause hypochromic anemia as a result of copper deficiency. Intranasal zinc preparations should be avoided because they may lead to irreversible damage of the nasal mucosa and anosmia. COPPER Copper is an integral part of numerous enzyme systems, including amine oxidases, ferroxidase (ceruloplasmin), cytochrome c oxidase, superoxide dismutase, and dopamine β-hydroxylase. Copper is also a component of ferroportin, a transport protein involved in the basolateral transfer of iron during absorption from the enterocyte. 
As such, copper plays a role in iron metabolism, melanin synthesis, energy production, neurotransmitter synthesis, and CNS function; the synthesis and cross-linking of elastin and collagen; and the scavenging of superoxide radicals. Dietary sources of copper include shellfish, liver, nuts, legumes, bran, and organ meats. Deficiency Dietary copper deficiency is relatively rare, although it has been described in premature infants who are fed milk diets and in infants with malabsorption (Table 96e-2). Copper-deficiency anemia (refractory to therapeutic iron) has been reported in patients with malabsorptive diseases and nephrotic syndrome and in patients treated for Wilson's disease with chronic high doses of oral zinc, which can interfere with copper absorption. Menkes kinky hair syndrome is an X-linked metabolic disturbance of copper metabolism characterized by mental retardation, hypocupremia, and decreased circulating ceruloplasmin (Chap. 427). This syndrome is caused by mutations in the copper-transporting ATP7A gene. Children with this disease often die within 5 years because of dissecting aneurysms or cardiac rupture. Aceruloplasminemia is a rare autosomal recessive disease characterized by tissue iron overload, mental deterioration, microcytic anemia, and low serum iron and copper concentrations. The diagnosis of copper deficiency is usually based on low serum levels of copper (<65 μg/dL) and low ceruloplasmin levels (<20 mg/dL). Serum levels of copper may be elevated in pregnancy or stress conditions since ceruloplasmin is an acute-phase reactant and 90% of circulating copper is bound to ceruloplasmin. Toxicity Copper toxicity is usually accidental (Table 96e-2). In severe cases, kidney failure, liver failure, and coma may ensue. In Wilson's disease, mutations in the copper-transporting ATP7B gene lead to accumulation of copper in the liver and brain, with low blood levels due to decreased ceruloplasmin (Chap. 429). 
SELENIUM Selenium, in the form of selenocysteine, is a component of the enzyme glutathione peroxidase, which serves to protect proteins, cell membranes, lipids, and nucleic acids from oxidant molecules. As such, selenium is being actively studied as a chemopreventive agent against certain cancers, such as prostate cancer. Selenocysteine is also found in the deiodinase enzymes, which mediate the deiodination of thyroxine to triiodothyronine (Chap. 405). Rich dietary sources of selenium include seafood, muscle meat, and cereals, although the selenium content of cereal is determined by the soil concentration. Countries with low soil concentrations include parts of Scandinavia, China, and New Zealand. Keshan disease is an endemic cardiomyopathy found in children and young women residing in regions of China where dietary intake of selenium is low (<20 μg/d). Concomitant deficiencies of iodine and selenium may worsen the clinical manifestations of cretinism. Chronic ingestion of large amounts of selenium leads to selenosis, characterized by hair and nail brittleness and loss, garlic breath odor, skin rash, myopathy, irritability, and other abnormalities of the nervous system. CHROMIUM Chromium potentiates the action of insulin in patients with impaired glucose tolerance, presumably by increasing insulin receptor–mediated signaling, although its usefulness in treating type 2 diabetes is uncertain. In addition, improvement in blood lipid profiles has been reported in some patients. The usefulness of chromium supplements in muscle building has not been substantiated. Rich food sources of chromium include yeast, meat, and grain products. Chromium in the trivalent state is found in supplements and is largely nontoxic; however, chromium-6 is a product of stainless steel welding and is a known pulmonary carcinogen as well as a cause of liver, kidney, and CNS damage. See Chap. 423. 
FLUORIDE, MANGANESE, AND ULTRATRACE ELEMENTS An essential function for fluoride in humans has not been described, although it is useful for the maintenance of structure in teeth and bones. Adult fluorosis results in mottled and pitted defects in tooth enamel as well as brittle bone (skeletal fluorosis). Manganese and molybdenum deficiencies have been reported in patients with rare genetic abnormalities and in a few patients receiving prolonged total parenteral nutrition. Several manganese-specific enzymes have been identified (e.g., manganese superoxide dismutase). Deficiencies of manganese have been reported to result in bone demineralization, poor growth, ataxia, disturbances in carbohydrate and lipid metabolism, and convulsions. Ultratrace elements are defined as those needed in amounts <1 mg/d. Essentiality has not been established for most ultratrace elements, although selenium, chromium, and iodine are clearly essential (Chap. 405). Molybdenum is necessary for the activity of sulfite and xanthine oxidase, and molybdenum deficiency may result in skeletal and brain lesions. CHAPTER 97 Malnutrition and Nutritional Assessment Douglas C. Heimburger Malnutrition can arise from primary or secondary causes, resulting in the former case from inadequate or poor-quality food intake and in the latter case from diseases that alter food intake or nutrient requirements, metabolism, or absorption. Primary malnutrition occurs mainly in developing countries and under conditions of political unrest, war, or famine. Secondary malnutrition, the main form encountered in industrialized countries, was largely unrecognized until the early 1970s, when it was appreciated that persons with adequate food supplies can become malnourished as a result of acute or chronic diseases that alter nutrient intake or metabolism, particularly diseases that cause acute or chronic inflammation. 
Various studies have shown that protein-energy malnutrition (PEM) affects one-third to one-half of patients on general medical and surgical wards in teaching hospitals. The consistent finding that nutritional status influences patient prognosis underscores the importance of preventing, detecting, and treating malnutrition. Definitions for forms of PEM are in flux. Traditionally, the two major types of PEM have been marasmus and kwashiorkor. These conditions are compared in Table 97-1. Marasmus is the end result of a long-term deficit of dietary energy, whereas kwashiorkor has been understood to result from a protein-poor diet. Although the former concept remains essentially correct, evidence is accumulating that PEM syndromes are distinguished by two main features: insufficient dietary intake and underlying inflammatory processes. Energy-poor diets with minimal inflammation cause gradual erosion of body mass, resulting in classic marasmus. By contrast, inflammation from acute illnesses such as injury or sepsis or from chronic illnesses such as cancer, lung or heart disease, or HIV infection can erode lean body mass even in the presence of relatively sufficient dietary intake, leading to a kwashiorkor-like state. Quite often, inflammatory illnesses impair appetite and dietary intake, producing combinations of the two conditions. Consensus committees have proposed the following revised definitions. Starvation–related malnutrition is suggested for instances of chronic starvation without inflammation, chronic disease–related malnutrition when inflammation is chronic and of mild to moderate degree, and acute disease– or injury–related malnutrition when inflammation is acute and of a severe degree. However, because distinguishing diagnostic criteria for these conditions have not been universally adopted, this chapter integrates the older and newer terms. 
Marasmus (starvation–related malnutrition) is a state in which virtually all available body fat stores have been exhausted due to starvation without systemic inflammation. Cachexia (chronic disease–related malnutrition) is a state that involves substantial loss of lean body mass in the presence of chronic systemic inflammation. Conditions that produce cachexia tend to be chronic and indolent, such as cancer and chronic pulmonary disease, whereas, in high-income countries, the classic setting for marasmus is in patients with anorexia nervosa. These conditions are relatively easy to detect because of the patient's starved appearance. (Footnotes to Table 97-1: a, the findings used to diagnose kwashiorkor/acute malnutrition must be unexplained by other causes; b, tested by firmly pulling a lock of hair from the top [not the sides or back], grasping with the thumb and forefinger; an average of three or more hairs removed easily and painlessly is considered abnormal hair pluckability.) The diagnosis is based on fat and muscle wastage resulting from prolonged calorie deficiency and/or inflammation. Diminished skinfold thickness reflects the loss of fat reserves; reduced arm muscle circumference with temporal and interosseous muscle wasting reflects the catabolism of protein throughout the body, including in vital organs such as the heart, liver, and kidneys. Routine laboratory findings in cachexia/marasmus are relatively unremarkable. The creatinine-height index (24-h urinary creatinine excretion compared with normal values based on height) is low, reflecting the loss of muscle mass. Occasionally, the serum albumin level is reduced, but it remains above 2.8 g/dL when systemic inflammation is absent. Despite a morbid appearance, immunocompetence, wound healing, and the ability to handle short-term stress are reasonably well preserved in most patients. 
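The creatinine-height index mentioned above is a simple ratio, sketched below. The height-based reference value must come from published tables, and the numbers used here are purely illustrative:

```python
# Minimal sketch of the creatinine-height index (CHI): measured 24-h
# urinary creatinine expressed as a percentage of the value expected
# for a well-nourished person of the same height.  The expected value
# comes from reference tables; the figures below are illustrative only.

def creatinine_height_index(measured_mg_24h: float,
                            expected_mg_24h: float) -> float:
    """Return CHI as a percentage of the height-based reference value."""
    return 100.0 * measured_mg_24h / expected_mg_24h

# e.g., 700 mg measured against a hypothetical 1400-mg reference:
print(creatinine_height_index(700, 1400))  # 50.0
```

A low index reflects loss of muscle mass, as noted in the text.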
Pure starvation–related malnutrition is a chronic, fairly well adapted form of starvation rather than an acute illness; it should be treated cautiously in an attempt to reverse the downward trend gradually. Although nutritional support is necessary, overly aggressive repletion can result in severe, even life-threatening metabolic imbalances such as hypophosphatemia and cardiorespiratory failure (refeeding syndrome). When possible, oral or enteral nutritional support is preferred; treatment started slowly allows readaptation of metabolic and intestinal functions (Chap. 98e). By contrast, kwashiorkor (acute disease– or injury–related malnutrition) in developed countries occurs mainly in connection with acute, life-threatening conditions such as trauma and sepsis. The physiologic stress produced by these illnesses increases protein and energy requirements at a time when intake is often limited. A classic scenario is an acutely stressed patient who receives only 5% dextrose solutions for periods as brief as 2 weeks. Although the etiologic mechanisms are not fully known, the protein-sparing response normally seen in starvation is blocked by the stressed state and by carbohydrate infusion. In its early stages, the physical findings of kwashiorkor/acute malnutrition are few and subtle. Initially unaffected fat reserves and muscle mass give the deceptive appearance of adequate nutrition. Signs that support the diagnosis include easy hair pluckability, edema, skin breakdown, and poor wound healing. The major sine qua non is severe reduction of levels of serum proteins such as albumin (<2.8 g/dL) and transferrin (<150 mg/dL) or of iron-binding capacity (<200 μg/dL). Cellular immune function is depressed, as reflected by lymphopenia (<1500 lymphocytes/μL in adults and older children) and lack of response to skin test antigens (anergy). The prognosis of adult patients with full-blown kwashiorkor/acute malnutrition is not good even with aggressive nutritional support. 
Surgical wounds often dehisce (fail to heal), pressure sores develop, gastroparesis and diarrhea can occur with enteral feeding, the risk of gastrointestinal bleeding from stress ulcers is increased, host defenses are compromised, and death from overwhelming infection may occur despite antibiotic therapy. Unlike treatment of marasmus, therapy for kwashiorkor entails aggressive nutritional support to restore better metabolic balance rapidly (Chap. 98e). The metabolic characteristics and nutritional needs of hypermetabolic patients who are stressed from injury, infection, or chronic inflammatory illness differ from those of hypometabolic patients who are unstressed but chronically starved. In both cases, nutritional support is important, but misjudgments in selecting the appropriate approach may have serious adverse consequences. The hypometabolic patient is typified by the relatively less stressed but mildly catabolic and chronically starved individual who, with time, will develop cachexia/marasmus. The hypermetabolic patient stressed from injury or infection is catabolic (experiencing rapid breakdown of body mass) and is at high risk for developing acute malnutrition/ kwashiorkor if nutritional needs are not met and/or the illness does not resolve quickly. As summarized in Table 97-2, the two states are distinguished by differing perturbations of metabolic rate, rates of protein breakdown (proteolysis), and rates of gluconeogenesis. These differences are mediated by proinflammatory cytokines and counter-regulatory hormones—tumor necrosis factor, interleukins 1 and 6, C-reactive protein, catecholamines (epinephrine and norepinephrine), glucagon, and cortisol—whose levels are relatively reduced in hypo-metabolic patients and increased in hypermetabolic patients. Although insulin levels are also elevated in stressed patients, insulin resistance in the target tissues blocks insulin-mediated anabolic effects. 
[Table 97-2: in the hypometabolic vs. the hypermetabolic state, cytokines, catecholamines, glucagon, cortisol, and insulin are ↓ vs. ↑; metabolic rate and O2 consumption, ↓ vs. ↑; proteolysis and gluconeogenesis, ↓ vs. ↑; ureagenesis and urea excretion, ↓ vs. ↑; fat catabolism and fatty acid utilization, relatively ↑ vs. absolutely ↑; and adaptation to starvation, normal vs. abnormal.] The physiologic characteristics of patients at risk for chronic disease–related malnutrition are less predictable and likely represent a mixture of the two extremes depicted in Table 97-2. Metabolic Rate In starvation and semistarvation, the resting metabolic rate falls by 10–30% as an adaptive response to energy restriction, slowing the rate of weight loss. By contrast, the resting metabolic rate rises in the presence of physiologic stress in proportion to the degree of the insult. The rate may increase by ~10% after elective surgery, 20–30% after bone fractures, 30–60% with severe infections such as peritonitis or gram-negative septicemia, and as much as 110% after major burns. If the metabolic rate (energy requirement) is not matched by energy intake, weight loss results—slowly in hypometabolism and quickly in hypermetabolism. Losses of up to 10% of body mass are unlikely to be detrimental; however, greater losses in acutely ill hypermetabolic patients may be associated with rapid deterioration in body functions. Protein Catabolism The rate of endogenous protein breakdown (catabolism) to supply energy needs normally falls during uncomplicated energy deprivation. After ~10 days of total starvation, an unstressed individual loses about 12–18 g of protein per day (equivalent to ~60 g of muscle tissue or ~2–3 g of nitrogen). In contrast, in injury and sepsis, protein breakdown accelerates in proportion to the degree of stress, reaching 30–60 g/d after elective surgery, 60–90 g/d with infection, 100–130 g/d with severe sepsis or skeletal trauma, and >175 g/d with major burns or head injuries. 
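The protein-to-nitrogen equivalence above follows from the standard assumption that protein is ~16% nitrogen by weight (grams of protein ÷ 6.25 = grams of nitrogen); a quick check:

```python
# Standard protein-to-nitrogen conversion: protein is ~16% nitrogen,
# so grams of nitrogen = grams of protein / 6.25.

PROTEIN_G_PER_G_NITROGEN = 6.25

def grams_nitrogen(grams_protein: float) -> float:
    """Convert grams of protein lost to the equivalent grams of nitrogen."""
    return grams_protein / PROTEIN_G_PER_G_NITROGEN

# 12-18 g/d of protein lost in unstressed starvation is ~2-3 g N/d,
# matching the figures quoted in the text:
print(grams_nitrogen(12), grams_nitrogen(18))  # 1.92 2.88
```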
These losses are reflected by proportional increases in the excretion of urea nitrogen, the major by-product of protein breakdown. Gluconeogenesis The major aim of protein catabolism during a state of starvation is to provide the glucogenic amino acids (especially alanine and glutamine) that serve as substrates for endogenous glucose production (gluconeogenesis) in the liver. In the hypometabolic/starved state, protein breakdown for gluconeogenesis is minimized, especially as ketones derived from fatty acids become the substrate preferred by certain tissues. In the hypermetabolic/stress state, gluconeogenesis increases dramatically and in proportion to the degree of the insult to increase the supply of glucose (the major fuel of reparation). Glucose is the only fuel that can be utilized by hypoxemic tissues (anaerobic glycolysis), white blood cells, and newly generated fibroblasts. Infusions of glucose partially offset a negative energy balance but do not significantly suppress the high rates of gluconeogenesis in catabolic patients. Hence, adequate supplies of protein are needed to replace the amino acids used for this metabolic response. In summary, a hypometabolic patient is adapted to starvation and conserves body mass through reduction of the metabolic rate and use of fat as the primary fuel (rather than glucose and its precursor amino acids). A hypermetabolic patient also uses fat as a fuel but rapidly breaks down body protein to produce glucose, with consequent loss of muscle and organ tissue and danger to vital body functions. The same illnesses and reductions in nutrient intake that lead to PEM often produce deficiencies of vitamins and minerals as well (Chap. 96e). Deficiencies of nutrients that are stored in small amounts (such as the water-soluble vitamins) occur because of loss through external secretions, such as zinc in diarrhea fluid or burn exudate, and are probably more common than is generally recognized. 
Deficiencies of vitamin C, folic acid, and zinc are relatively common in sick patients. Signs of scurvy, such as corkscrew hairs on the lower extremities, are found frequently in chronically ill and/or alcoholic patients. The diagnosis can be confirmed by determination of plasma vitamin C levels. Folic acid intakes and blood levels are often less than optimal, even among healthy persons; with illness, alcoholism, poverty, or poor dentition, these deficiencies are common. Low blood zinc levels are prevalent in patients with malabsorption syndromes such as inflammatory bowel disease. Patients with zinc deficiency often exhibit poor wound healing, pressure ulcer formation, and impaired immunity. Thiamine deficiency is a common complication of alcoholism but may be prevented by therapeutic doses of thiamine in patients treated for alcohol abuse. Patients with low plasma vitamin C levels usually respond to the doses in multivitamin preparations, but patients with deficiencies should be supplemented with 250–500 mg/d. Folic acid is absent from some oral multivitamin preparations; patients with deficiencies should be supplemented with ~1 mg/d. Patients with zinc deficiencies resulting from large external losses sometimes require oral supplementation with 220 mg of zinc sulfate one to three times daily. For these reasons, laboratory assessments of the micronutrient status of patients at high risk are desirable. Hypophosphatemia develops in hospitalized patients with remarkable frequency and generally results from rapid intracellular shifts of phosphate in underweight or alcoholic patients receiving intravenous glucose (Chap. 63). The adverse clinical sequelae are numerous; some, such as acute cardiopulmonary failure, are collectively called refeeding syndrome and can be life-threatening. Many developing countries are still faced with high prevalences of the classic forms of PEM: marasmus and kwashiorkor.
Food insecurity, which characterizes many poor countries, prevents consistent dietary sufficiency and/or quality and leads to endemic or cyclic malnutrition. Factors threatening food security include marked seasonal variations in agricultural productivity (rainy season–dry season cycles), periodic droughts, political unrest or injustice, and disease epidemics (especially of HIV/AIDS). The coexistence of malnutrition and disease epidemics exacerbates the latter and increases complications and mortality rates, creating vicious cycles of malnutrition and disease. As economic prosperity improves, developing countries have been observed to undergo an epidemiologic transition, a component of which has been termed the nutrition transition. As improved economic resources make greater dietary diversity possible, middle-income populations (e.g., in southern Asia, China, and Latin America) typically begin to adopt the lifestyle habits of industrialized nations, with increased consumption of energy and fat and decreased levels of physical activity. These changes lead to rising levels of obesity, metabolic syndrome, diabetes, cardiovascular disease, and cancer, sometimes coexisting in populations with persistent undernutrition. Micronutrient deficiencies also remain prevalent in many countries of the world, impairing functional status and productivity and increasing mortality rates. Vitamin A deficiency impairs vision and increases morbidity and mortality rates from infections such as measles. Mild to moderate iron deficiency may affect up to 50% of the world's population, resulting from poor dietary diversity coupled with periodic blood loss and pregnancies. Iodine deficiency remains prevalent, causing goiter, hypothyroidism, and cretinism. Zinc deficiency is endemic in many populations, producing growth retardation, hypogonadism, and dermatoses and impairing wound healing.
Fortunately, public health supplementation programs have substantially improved vitamin A and zinc status in developing countries during the past two decades, reducing mortality rates from measles, diarrheal diseases, and other manifestations of these deficiencies. However, with the advancing nutrition transition and a shift toward nutritionally related chronic noncommunicable conditions, it is estimated that nutrition remains one of the three greatest contributors of risk for morbidity and mortality worldwide. Because interactions between illness and nutrition are complex, many physical and laboratory findings reflect both underlying disease and nutritional status. Therefore, the nutritional evaluation of a patient requires an integration of history, physical examination, anthropometrics, and laboratory studies. This approach helps both to detect nutritional problems and to prevent the conclusion that isolated findings indicate nutritional problems when they do not. For example, hypoalbuminemia caused by an inflammatory illness does not necessarily indicate malnutrition.

Table 97-3 Nutritional Deficiency: The High-Risk Patient

- Underweight (body mass index <18.5) and/or recent loss of ≥10% of usual body mass
- Poor intake: anorexia, food avoidance (e.g., psychiatric condition), or NPO (nil per os, nothing by mouth) status for more than ~5 days
- Protracted nutrient losses: malabsorption, enteric fistulas, draining abscesses or wounds, renal dialysis
- Hypermetabolic states: sepsis, protracted fever, extensive trauma or burns
- Alcohol abuse or use of drugs with antinutrient or catabolic properties: glucocorticoids, antimetabolites (e.g., methotrexate), immunosuppressants, antitumor agents
- Impoverishment, isolation, advanced age

Nutritional History Elicitation of a nutritional history is directed toward the identification of underlying mechanisms that put patients at risk for nutritional depletion or excess.
These mechanisms include inadequate intake, impaired absorption, decreased utilization, increased losses, and increased requirements for nutrients. Individuals with the characteristics listed in Table 97-3 are at particular risk for nutritional deficiencies. Physical Examination Physical findings that suggest vitamin, mineral, and protein-energy deficiencies and excesses are outlined in Table 97-4. Most of the physical findings are not specific for individual nutrient deficiencies and must be integrated with historic, anthropometric, and laboratory findings. For example, follicular hyperkeratosis on the back of the arms is a fairly common, normal finding. However, if it is widespread in a person who consumes few fruits and vegetables and smokes regularly (increasing ascorbic acid requirements), vitamin C deficiency is likely. Similarly, easily pluckable hair may be a consequence of chemotherapy but suggests acute malnutrition/kwashiorkor in a hospitalized patient who has poorly healing surgical wounds and hypoalbuminemia. Anthropometric Measurements Anthropometric measurements provide information on body muscle mass and fat reserves. The most practical and commonly used measurements are body weight, height, triceps skinfold (TSF), and midarm muscle circumference (MAMC). Body weight is one of the most useful nutritional parameters to follow in patients who are acutely or chronically ill. Unintentional weight loss during illness often reflects loss of lean body mass (muscle and organ tissue), especially if it is rapid and is not caused by diuresis. Such weight loss can be an ominous sign since it indicates use of vital body protein stores for metabolic fuel. The reference standard for normal body weight, body mass index (BMI: weight in kilograms divided by height, in meters, squared), is discussed in Chap. 416. BMI values <18.5 are considered underweight; <17, significantly underweight; and <16, severely wasted.
Values of 18.5–24.9 are normal; 25–29.9, overweight; and ≥30, obese. Measurement of skinfold thickness is useful for estimating body fat stores, because ~50% of body fat is normally located in the subcutaneous region. This measurement can also permit discrimination of fat mass from muscle mass. The triceps is a convenient site that is generally representative of the body's overall fat level. A thickness <3 mm suggests virtually complete exhaustion of fat stores. The MAMC can be used to estimate skeletal muscle mass, calculated as follows:

MAMC (cm) = upper arm circumference (cm) − [0.314 × TSF (mm)]

Laboratory Studies A number of laboratory tests used routinely in clinical medicine can yield valuable information about a patient's nutritional status if a slightly different approach to their interpretation is used. For example, abnormally low serum albumin levels, low total iron-binding capacity, and anergy may have a distinct explanation, but collectively they may represent kwashiorkor. In the clinical setting of a hypermetabolic, acutely ill patient who is edematous and has easily pluckable hair and inadequate protein intake, the diagnosis of acute malnutrition/kwashiorkor is clear-cut. Commonly used laboratory tests for assessing nutritional status are outlined in Table 97-5. The table also provides tips to avoid the assignment of nutritional significance to tests that may be abnormal for nonnutritional reasons. Assessment of Circulating (Visceral) Proteins The serum proteins most commonly used to assess nutritional status include albumin, total iron-binding capacity (or transferrin), thyroxine-binding prealbumin (or transthyretin), and retinol-binding protein. Because they have different synthesis rates and half-lives (the half-life of serum albumin is ~21 days, whereas those of prealbumin and retinol-binding protein are ~2 days and ~12 h, respectively), some of these proteins reflect changes in nutritional status more quickly than do others.
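The BMI cutoffs and the MAMC formula above can be sketched as follows. This is a minimal illustration; the function and variable names are invented for this example and are not from the text.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Categories per the cutoffs given in the text."""
    if value < 16:
        return "severely wasted"
    if value < 17:
        return "significantly underweight"
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

def mamc_cm(arm_circumference_cm: float, triceps_skinfold_mm: float) -> float:
    """Midarm muscle circumference: MAMC (cm) = arm circ. (cm) - 0.314 x TSF (mm)."""
    return arm_circumference_cm - 0.314 * triceps_skinfold_mm

print(bmi_category(bmi(50.0, 1.75)))   # BMI ~16.3 -> significantly underweight
print(round(mamc_cm(28.0, 15.0), 1))   # 28.0 - 0.314*15 -> 23.3 cm
```

The example values (a 50-kg, 1.75-m patient; a 28-cm arm circumference with a 15-mm triceps skinfold) are hypothetical.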
However, rapid fluctuations can also make shorter-half-life proteins less reliable. Levels of circulating proteins are influenced by their rates of synthesis and catabolism, “third spacing” (loss into interstitial spaces), and, in some cases, external loss. Although an adequate intake of calories and protein is necessary for optimal circulating protein levels, serum protein levels generally do not reflect protein intake. For example, a drop in the serum level of albumin or transferrin often accompanies significant physiologic stress (e.g., from infection or injury) and is not necessarily an indication of malnutrition or poor intake. A low serum albumin level in a burned patient with both hypermetabolism and increased dermal losses of protein may not indicate malnutrition. However, adequate nutritional support of the patient's calorie and protein needs is critical for returning circulating proteins to normal levels as stress resolves. Thus low values by themselves do not define malnutrition, but they often point to increased risk of malnutrition because of the hypermetabolic stress state. As long as significant physiologic stress persists, serum protein levels remain low, even with aggressive nutritional support. However, if the levels do not rise after the underlying illness improves, the patient's protein and calorie needs should be reassessed to ensure that intake is sufficient. Assessment of Vitamin and Mineral Status The use of laboratory tests to confirm suspected micronutrient deficiencies is desirable because the physical findings for those deficiencies are often equivocal or nonspecific. Low blood micronutrient levels can predate more serious clinical manifestations and also may indicate drug-nutrient interactions.
A patient's basal energy expenditure (BEE, measured in kilocalories per day) can be estimated from height, weight, age, and sex with the Harris-Benedict equations:

Men: BEE = 66.47 + 13.75W + 5.00H − 6.76A
Women: BEE = 655.10 + 9.56W + 1.85H − 4.68A

In these equations, W is weight in kilograms, H is height in centimeters, and A is age in years. After these equations are solved, total energy requirements are estimated by multiplying BEE by a factor that accounts for the stress of illness. Multiplying by 1.1–1.4 yields a range 10–40% above basal that estimates the 24-h energy expenditure of the majority of patients. The lower value (1.1) is used for patients without evidence of significant physiologic stress; the higher value (1.4) is appropriate for patients with marked stress such as sepsis or trauma. The result is used as a 24-h energy goal for feeding. When it is important to have a more accurate assessment, energy expenditure can be measured at the bedside by indirect calorimetry. This technique is useful in patients who are thought to be hypermetabolic from sepsis or trauma and whose body weight cannot be ascertained accurately. Indirect calorimetry can also be useful in patients who have difficulty weaning from a ventilator, whose energy needs therefore should not be exceeded lest excessive CO2 production result. Patients at the extremes of weight (e.g., obese persons) and/or age are good candidates as well, because the Harris-Benedict equations were developed from measurements in adults with roughly normal body weights. Because urea is a major by-product of protein catabolism, the amount of urea nitrogen excreted each day can be used to estimate the rate of protein catabolism and determine whether protein intake is adequate to offset it.
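The Harris-Benedict arithmetic above can be sketched directly. This is an illustrative sketch only: the function names are invented, and the stress factors are the 1.1–1.4 range given in the text.

```python
def bee_kcal_per_day(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Basal energy expenditure (kcal/d) from the Harris-Benedict equations."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.00 * height_cm - 6.76 * age_yr
    return 655.10 + 9.56 * weight_kg + 1.85 * height_cm - 4.68 * age_yr

def energy_goal(bee: float, stress_factor: float = 1.1) -> float:
    """24-h energy goal: BEE times a stress factor, 1.1 (minimal physiologic
    stress) to 1.4 (marked stress such as sepsis or trauma), per the text."""
    if not 1.1 <= stress_factor <= 1.4:
        raise ValueError("stress factor outside the 1.1-1.4 range used in the text")
    return bee * stress_factor

# Hypothetical example: 70-kg, 175-cm, 40-year-old man with moderate stress.
bee = bee_kcal_per_day("male", 70.0, 175.0, 40.0)
print(round(bee))                      # 1634 kcal/d basal
print(round(energy_goal(bee, 1.25)))   # 2042 kcal/d feeding goal
```

A mid-range factor of 1.25 is used here purely for illustration; the text assigns 1.1 to unstressed patients and 1.4 to markedly stressed ones.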
Total protein loss and protein balance can be calculated from urinary urea nitrogen (UUN) as follows:

Protein catabolic rate (g/d) = [24-h UUN (g) + 4] × 6.25 (g protein/g nitrogen)

The value of 4 g added to the UUN represents a liberal estimate of the unmeasured nitrogen lost in the urine (e.g., creatinine and uric acid), sweat, hair, skin, and feces. When protein intake is low (e.g., less than ~20 g/d), the equation indicates both the patient's protein requirement and the severity of the catabolic state (Table 97-5). More substantial protein intakes can raise the UUN because some of the ingested (or intravenously infused) protein is catabolized and converted to UUN. Thus, at lower protein intakes, the equation is useful for estimating requirements, and at higher protein intakes it is useful for assessing protein balance.

Table 97-5 Commonly Used Laboratory Tests for Assessing Nutritional Status

Serum albumin (3.5–5.5 g/dL)
  Nutritional use: 2.8–3.5 g/dL, protein depletion or systemic inflammation; <2.8 g/dL, possible acute malnutrition or severe inflammation.
  Causes of normal value despite malnutrition: infusion of albumin, fresh-frozen plasma, or whole blood.
  Other causes of abnormal value: Common: infection and other stress, especially with poor protein intake; burns; trauma; congestive heart failure; fluid overload; severe liver disease. Uncommon: nephrotic syndrome; zinc deficiency; bacterial stasis/overgrowth of small intestine.

Serum prealbumin, also called transthyretin (20–40 mg/dL; lower in prepubertal children)
  Nutritional use: 10–15 mg/dL, mild protein depletion or inflammation; 5–10 mg/dL, moderate protein depletion or inflammation; <5 mg/dL, severe protein depletion or inflammation; an increasing value reflects positive protein balance.
  Causes of normal or abnormal values: similar to serum albumin.

Serum total iron-binding capacity
  Nutritional use: <200 μg/dL, protein depletion or inflammatory state.
  Causes of normal or abnormal values: similar to serum albumin.

Prothrombin time (2.0–15.5 s)
  Nutritional use: prolongation, vitamin K deficiency.

Serum creatinine (0.6–1.6 mg/dL)
  Nutritional use: <0.6 mg/dL, muscle wasting due to prolonged energy deficit.
  Causes of normal value despite muscle wasting: renal failure; severe dehydration.

24-h urinary creatinine (500–1200 mg/d, standardized for height and sex)
  Nutritional use: reflects muscle mass; a low value indicates muscle wasting due to prolonged energy deficit.

24-h urinary urea nitrogen (UUN; <5 g/d, depending on level of protein intake)
  Nutritional use: determines level of catabolism (as long as protein intake is ≥10 g below the calculated protein loss or <20 g total, and as long as carbohydrate intake has been at least 100 g): 5–10 g/d, mild catabolism or normal fed state; 10–15 g/d, moderate catabolism; >15 g/d, severe catabolism. Net protein loss (protein catabolic rate) = [24-h UUN (g) + 4] × 6.25. Adjustments are required in burn patients and others with large nonurinary nitrogen losses and in patients with fluctuating levels of blood urea nitrogen (e.g., in renal failure).

Blood urea nitrogen (BUN)
  Nutritional use: <8 mg/dL, possibly inadequate protein intake; 12–23 mg/dL, possibly adequate protein intake; >23 mg/dL, possibly excessive protein intake. If serum creatinine is normal, use the BUN; if serum creatinine is elevated, use the BUN/creatinine ratio (the normal range is essentially the same as for BUN).
  Causes of low value despite adequate protein intake: severe liver disease; anabolic state; syndrome of inappropriate antidiuretic hormone secretion.
  Causes of elevated value despite poor protein intake: renal failure (use the BUN/creatinine ratio); congestive heart failure; gastrointestinal hemorrhage.

98e Enteral and Parenteral Nutrition Therapy
Bruce R. Bistrian, L. John Hoffer, David F. Driscoll

When correctly implemented, specialized nutritional support (SNS) plays a major and often life-saving role in medicine.
SNS is used for two main purposes: (1) to provide an appropriate nutritional substrate in order to maintain or replenish the nutritional status of patients unable to voluntarily ingest or absorb sufficient amounts of food, and (2) to maintain the nutritional and metabolic status of adequately nourished patients who are experiencing systemic hypercatabolic effects of severe inflammation, injury, or infection in the course of persistent critical illness. Patients with permanent major loss of intestinal length or function often require lifelong SNS. Many patients who require treatment in chronic-care facilities receive enteral SNS, most often because their voluntary food intake is deemed insufficient or because impaired chewing and swallowing create a high risk of aspiration pneumonia. Enteral SNS is the provision of liquid formula meals through a tube placed into the gut. Parenteral SNS is the direct infusion of complete mixtures of crystalline amino acids, dextrose, triglyceride emulsions, and micronutrients into the bloodstream through a central venous catheter or (rarely in adults) via a peripheral vein. The enteral route is almost always preferred because of its relative simplicity and safety, its low cost, and the benefits of maintaining digestive, absorptive, and immunologic barrier functions of the gastrointestinal tract. Pliable, small-bore feeding tubes make placement relatively easy and acceptable to patients. Constant-rate infusion pumps increase the reliability of nutrient delivery. The chief disadvantage of enteral SNS is that many days may be required to meet the patient's nutrient requirements. For short-term use, the feeding tube can be placed via the nose into the stomach, duodenum, or jejunum. For long-term use, these sites may be accessed through the abdominal wall by endoscopic or surgical procedures. The chief disadvantage of tube feeding in acute illness is intolerance due to gastric retention, risk of vomiting, or diarrhea.
The presence of severe coagulopathy is a relative contraindication to the insertion of a feeding tube. In adults, parenteral nutrition (PN) almost always requires aseptic insertion of a central venous catheter with a dedicated port. Many circumstances can delay or slow the progression of enteral SNS, whereas parenteral SNS can provide a complete substrate mix easily and promptly. This practical advantage is mitigated by the need to infuse relatively large fluid volumes and the real risk of inadvertent toxic overfeeding.

APPROACH TO THE PATIENT

Approximately one-fifth to one-quarter of patients in acute-care hospitals suffer from at least moderate protein-energy malnutrition (PEM), the defining features of which are malnutrition-induced weight loss and skeletal muscle atrophy. Usually, but not always, other features further compromise clinical responses; these features include a subnormal adipose tissue mass, with the accompanying adverse consequences of weakness, skin thinning, and breakdown; reduced ventilatory drive; ineffective cough; immunodeficiency; and impaired thermoregulation. Commonly, PEM is already present at the time of hospital admission and remains unimproved or worsens during the ensuing hospital stay. Common reasons for PEM worsening during hospitalization are refusal of food (because of anorexia, nausea, pain, or delirium), communication barriers, an unmet need for hand-feeding of patients with physical or sensory impairment, disordered or ineffective chewing or swallowing, and prolonged periods of physician-ordered fasting—all potentially taking place in a context of caregiver unawareness and inattention.

Table 98e-1 Body Mass Index (BMI), Muscle Mass, and Protein-Energy Malnutrition (PEM)

Most patients suffering from in-hospital PEM do not, or ought not to, require SNS. A large proportion of these patients can be expected to improve with appropriate management of their primary disease.
Others have a terminal disease whose downward course will not be altered by SNS. In yet other cases, the PEM is sufficiently mild that the benefits of SNS are exceeded by its risks. For patients who fall into this last category, the correct approach is to intensify and/or modify the patient's oral nutrition as directed by the unit dietitian. PEM is often classified as minimal, moderate, or severe on the basis of weight for height (body mass index, BMI) and the percentage of body weight recently lost. As shown in Table 98e-1, the BMI (when corrected for abnormal extracellular fluid accumulation) is a crude but useful indicator of PEM severity. Note, however, that obesity does not preclude moderate or severe PEM, especially in older or bedridden patients; indeed, obesity can mask the presence of PEM if the patient's muscle mass is not specifically examined. The decision to implement SNS must be based on the determinations (1) that intensified or modified oral nutrition has failed or is impossible, impractical, or undesirable; and (2) that SNS will increase the patient's rate and likelihood of recovery, reduce the risk of infection, improve healing, or otherwise shorten the hospital stay. In chronic-care situations, the decision to institute SNS is based on the likelihood that the intervention will extend the duration of the patient's life or improve its quality. An algorithm for determining when to use SNS is depicted in Fig. 98e-1. The decision to enhance oral nutrition or—that attempt failing—to resort to SNS is based on the anticipated consequences of nonintervention. The mnemonic “in-in-in” (for inanition-inflammation-inactivity) can serve as a reminder of the three main factors that come into play when deciding whether or not it is acceptable to withhold SNS from a patient with PEM. Inanition Key issues include whether normal food intake is likely to be impossible for a prolonged period and whether the patient can tolerate prolonged starvation.
A previously well-nourished person can tolerate ~7 days of starvation without harm, even in the presence of a moderate systemic response to inflammation (SRI), whereas the degree of tolerance to prolonged starvation is much less in patients whose skeletal muscle mass is already reduced, whether from PEM, from the muscle atrophy of old age (sarcopenia), or from muscle atrophy due to neuromuscular disease. Excess body fat does not exclude the possibility of coexisting muscle atrophy from any of these causes. In general, unintentional weight loss of >10% during the previous 6 months or a weight-to-height ratio that is <90% of standard, when associated with physiologic impairment, crudely predicts that the patient has moderate PEM. Weight loss >20% of usual or <80% of standard makes severe PEM more likely. Inflammation The anorexia that invariably accompanies the SRI reduces the likelihood that a patient's nutritional goals will be achieved by intensifying or modifying the diet, by providing counseling, or by hand-feeding. Furthermore, the protein-catabolic effects of the SRI accelerate skeletal muscle wasting and substantially block normal protein-sparing adaptation to protein and energy starvation.

Figure 98e-1 Decision-making for the implementation of specialized nutritional support (SNS). CVC, central venous catheter; PICC, peripherally inserted central catheter. (Adapted from the chapter on this topic in Harrison's Principles of Internal Medicine, 16e, by Lyn Howard, MD.) The algorithm proceeds through a series of questions: Is the disease process likely to cause nutritional impairment? Does the patient have PCM, or is the patient at risk for PCM? Would preventing or treating the malnutrition with SNS improve the prognosis and quality of life? If not, the risks and discomfort of SNS outweigh its potential benefits; the issue is explained to the patient or legal surrogate, and the patient is supported with general comfort measures, including oral food and liquid supplements if desired. If SNS is warranted, the fluid, energy, mineral, and vitamin requirements are determined, along with whether they can be provided enterally. If requirements can be met through oral foods and liquid supplements, the patient is kept under surveillance with frequent calorie counts and clinical assessment. If not, a feeding tube is requested: nasally inserted when needed for several weeks, percutaneously inserted when needed for months or years. If the patient requires partial parenteral support, a CVC, PICC, or peripheral catheter plus enteral nutrition is requested; if total parenteral nutrition is required, a CVC or PICC is requested: a subclavian catheter or PICC when needed for several weeks, or a tunneled external catheter or subcutaneous infusion port when needed for months or years.

Inactivity A nutritional red flag should be raised over every acutely ill patient who remains bedridden or inactive for a prolonged period. Such patients commonly manifest muscle atrophy (due to nutritional deficiencies and disuse) and anorexia with inadequate voluntary food intake. Once it has been determined that a patient has significant—and, in particular, progressive—PEM despite meaningful efforts to reverse it by modifying the diet or the way food is provided, the next step is to decide whether SNS will have a net positive effect on the patient's clinical outcome. The pathway to the end stage of most severe chronic diseases leads through PEM. In most patients with end-stage untreatable cancer or certain end-organ diseases, SNS will neither reverse PEM nor improve the quality of life. Provision of food and water is commonly regarded as an aspect of basic humane care; in contrast, enteral and parenteral SNS is a therapeutic intervention that can cause discomfort and pose risks. As with other life-support interventions, the discontinuation of enteral or parenteral SNS can be psychologically difficult for patients, their families, and their caregivers. Indeed, the difficulty can be greater with SNS than with other life-support interventions because the provision of food and water is often considered equivalent to comfort care.
In such difficult, near end-of-life situations, it is prudent to explicitly state the treatment goals at the outset of a course of SNS therapy. Such clarity can smooth the way for subsequent appropriate discontinuation in those patients whose prognosis has become hopeless. After the decision has been made that SNS is indeed appropriate, the next determinations are the route of delivery (enteral versus parenteral), timing, and calculation of the patient’s nutritional goals. Although enteral SNS is the default option, the choice of optimal route depends on the degree of gut function as well as on available technical resources. Both the choice of route and the timing of SNS require an evaluation of the patient’s current nutritional status, the presence and extent of the SRI, and the anticipated clinical course. Severe SRI is identified on the basis of the standard clinical signs of leukocytosis, tachycardia, tachypnea, and temperature elevation or depression. Serum albumin is a negative acute-phase protein and hence a marker of the SRI. More severe hypoalbuminemia is a crude indicator of greater SRI severity, but this condition is almost certainly worsened by concurrent dietary protein deficiency. Despite the importance of adequate protein provision to patients with the SRI, no amount of SNS will raise serum albumin levels into the normal range as long as the SRI persists. The SRI can be graded as mild, moderate, or severe. Examples of a severe SRI include (1) sepsis or other major inflammatory diseases (e.g., pancreatitis) that require care in the intensive care unit; (2) multiple trauma with an Injury Severity Score >20–25 or an Acute Physiology and Chronic Health Evaluation II (APACHE II) score >25; (3) closed head injury with a Glasgow Coma Scale <8; and (4) major third-degree burns of >40% of the body surface area. 
A moderate SRI occurs with less severe infections, injuries, or inflammatory conditions like pneumonia, uncomplicated major surgery, acute hepatic or renal injury, and exacerbations of ulcerative colitis or regional enteritis requiring hospitalization. Patients with a severe SRI require the initiation of SNS within the first several days of care, for they are highly unlikely to consume an adequate amount of food voluntarily over the next 7 days. On the other hand, a moderate SRI, as is common during the period following major uncomplicated surgery without oral intake, may be tolerated for 5–7 days as long as the patient is initially well nourished. Patients awaiting elective major surgery benefit from preoperative nutritional repletion for 5–10 days but only in the presence of significant PEM. When adequate preoperative nutrition or SNS is impractical, early postoperative SNS is usually indicated. Furthermore, patients with a combination of a moderate SRI and moderate PEM are likely to benefit from early postoperative SNS. The risks of enteral SNS are determined primarily by the patient’s state of alertness and swallowing competence, the anatomy and function of the gastrointestinal tract, and the experience of the supervising clinical team. The safest and least costly approach is to avoid SNS by close attention to oral food intake; personal encouragement; dietary modifications; hand-feeding, when possible; and, often, the addition of an oral liquid supplement. For this reason, all patients at nutritional risk should be assessed and followed by a nutritionist. There is increasing interest in the use, under selected circumstances and when not contraindicated, of pharmacologic doses of anabolic steroids to stimulate appetite and promote muscle anabolism. Nasogastric tube insertion is a bedside procedure, but many critically ill patients have impaired gastric emptying and a high risk of aspiration pneumonia. 
This risk can be reduced by placing the tip of the feeding tube in the jejunum beyond the ligament of Treitz, a procedure that usually requires fluoroscopic or endoscopic guidance. When a laparotomy is planned for a patient who has other surgical conditions likely to necessitate prolonged SNS, it is advantageous to place a jejunal feeding tube at the time of surgery. A major disadvantage of enteral SNS is that the amounts of protein and calories provided to critically ill patients commonly fail to reach target goals within the first 7–14 days after SNS is initiated. This problem is compounded by the lack of enteral products that allow the provision of the recommended protein target of 1.5–2.0 g/kg without simultaneously inducing potentially harmful caloric overfeeding. Enteral SNS is often required in patients with anorexia, impaired swallowing, or small-intestinal disease. The bowel and its associated digestive organs derive 70% of their required nutrients directly from nutritional substrates absorbed from the intestinal lumen. Enteral feeding also supports gut function by stimulating splanchnic blood flow, neuronal activity, IgA antibody release, and secretion of gastrointestinal hormones that stimulate gut trophic activity. These factors support the gut as an immunologic barrier against enteric pathogens. For these reasons, current evidence indicates that some luminal nutrition should be provided, even when PN is required to provide most of the nutritional support. The nonessential amino acids arginine and glutamine, short-chain fatty acids, long-chain omega 3 fatty acids, and nucleotides are available in some specialty enteral formulas and appear to have an important role in maintaining immune function. The addition of supplemental PN to enteral feeding (either by mouth or as SNS by enteral tube) may hasten the transition to full enteral feeding, which is usually successful when >50% of requirements can be met enterally. 
As long as protein and other essential nutrient requirements are met, substantial nutritional benefit can be achieved by providing ~50% of energy needs for periods of up to 10 days. As a rule of thumb, dietary protein provision should be increased by ~25–50% when energy intake is reduced by this amount, since negative energy balance reduces the efficiency of dietary protein retention. For longer periods and in patients who have a normal or increased body fat content, it may be preferable to provide only 75–80% of energy needs (together with increased protein), as the mild energy deficit improves gastrointestinal tolerance, makes glycemic control far easier, and avoids excess fluid administration. The main risks of PN are related to the placement of a central venous catheter, with its complications of thrombosis and infection, and the relatively large intravenous volumes infused. Less often appreciated are the risks associated with the ease of inadvertently infusing excessive carbohydrate and lipid directly into the bloodstream. These risks include hyperglycemia, inadequate lipid clearance from the circulation, hepatic steatosis and inflammation, and even respiratory failure in patients with borderline pulmonary function. On the other hand, renal dysfunction does not reduce a patient’s requirement for protein or amino acids. In cases in which renal function is a limiting factor, appropriate renal replacement therapy must be provided along with SNS. In the past, bowel rest through PN was the cornerstone of treatment for many severe gastrointestinal disorders. However, the value of providing even minimal amounts of enteral nutrition (EN) is now widely accepted. 
Protocols to facilitate more widespread use of EN include initiation within 24 h of ICU admission; aggressive use of the head-upright position; use of postpyloric and nasojejunal feeding tubes; use of prokinetic agents; more rapid increases in feeding rates; tolerance of higher gastric residuals; and adherence to nurse-directed algorithms for feeding progression. Parenteral SNS alone is generally necessary only for severe gut dysfunction due to prolonged ileus, intestinal obstruction, or severe hemorrhagic pancreatitis. In critically ill patients, parenteral SNS can be commenced within the first 24 h of care, with the anticipation of a better clinical outcome and a lower mortality risk than those following delayed or inadequate enteral SNS; however, this point remains controversial. Some evidence suggests that early SNS is associated with a reduced risk of death but also with an increased risk of serious infection. More recent data, obtained in studies of moderately critically ill patients, suggest that early hypocaloric parenteral SNS lessens morbidity and mitigates muscle atrophy without an increased risk of infection, but also without a detectable reduction in mortality risk. Unfortunately, the current clinical-trial evidence fails to address several important unknowns. It is important to note that the level of protein substrate provided in published clinical trials generally falls well below the current recommendation, even in trials of supplemental parenteral SNS. Much of the increase in morbidity associated with parenteral and enteral SNS can be ascribed to hyperglycemia, which can be prevented by appropriately intensive insulin therapy. The level of glycemia necessary to prevent complications, whether <110 mg/dL or <150 mg/dL, remains unclear. 
Adequately fed surgical patients may benefit from the lower glucose range, but studies of intensive insulin therapy alone, without full feeding, suggest improved morbidity and mortality outcomes with looser control of glucose at <180 mg/dL. In the early years of its use, PN was relatively expensive, but its components now are often less costly than specialty enteral formulas. Percutaneous placement of a central venous catheter into the subclavian vein or (less desirably) the internal jugular vein with advancement into the superior vena cava can be accomplished at the bedside by trained personnel using sterile techniques. Peripherally inserted central catheters (PICCs) can also be used, although they are usually more appropriate for non-ICU patients. Subclavian or internal jugular catheters carry the risks of pneumothorax or serious vascular damage but are generally well tolerated and, rather than requiring reinsertion, can be exchanged over a wire when catheter infection is suspected. Although most SNS is delivered in hospitals, some patients require it on a long-term basis. At-home SNS requires a safe home environment, a stable clinical condition, and the patient’s ability and willingness to learn appropriate self-care techniques. Other important considerations in determining the appropriateness of at-home parenteral or enteral SNS are that the patient’s prognosis indicates survival for longer than several months and that the therapy enhances the patient’s quality of life. The purpose of SNS is to correct and prevent malnutrition. Certain conditions require special modification of the SNS regimen. Protein intake may need to be limited in many stable patients with renal insufficiency or borderline liver function. In renal disease, except for brief periods, protein intakes should approach the required level for normal adults of at least 0.8 g/kg and should aim for 1.2 g/kg as long as severe azotemia does not occur. 
Patients with severe renal failure who require SNS need concurrent renal replacement therapy. In hepatic failure, protein intakes of 1.2–1.4 g/kg (up to 1.5 g/kg) should be provided as long as encephalopathy due to protein intolerance does not occur. In the presence of protein intolerance, formulas containing 33–50% branched-chain amino acids are available and can be provided at the 1.2- to 1.4-g/kg level. Cardiac patients and many other severely stressed patients often benefit from fluid and sodium restriction to 1000 mL of PN formula and 5–20 meq of sodium per day. In patients with severe chronic PEM characterized by severe weight loss, it is important to initiate PN gradually because of the profound antinatriuresis, antidiuresis, and intracellular accumulation of potassium, magnesium, and phosphorus that develop as a consequence of the resulting high insulin levels. This modification of parenteral SNS is usually accomplished by limiting daily fluid intake initially to ~1000 mL; limiting carbohydrate intake to 10–20% dextrose; limiting sodium intake; and providing ample potassium, magnesium, and phosphorus, with careful daily assessment of fluid and electrolyte status. Protein need not be restricted. Normal adults require ~30 mL of fluid/kg of body weight from all sources each day as well as the replacement of abnormal losses such as those caused by diuretic therapy, nasogastric tube drainage, wound output, high rates of perspiration (which can be several liters per day during periods of extreme heat), and diarrhea/ostomy losses. Electrolyte and mineral losses can be estimated or measured and need to be replaced (Table 98e-2). Fluid restriction may be necessary in patients with fluid overload. Total fluid input can usually be limited to 1200 mL/d as long as urine is the only significant source of fluid output. 
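The maintenance-fluid arithmetic above (~30 mL/kg from all sources, plus replacement of abnormal losses) can be sketched as follows; the function name is hypothetical:

```python
def daily_fluid_requirement_ml(weight_kg, abnormal_losses_ml=0):
    """Approximate daily fluid need: ~30 mL/kg of body weight plus
    measured abnormal losses (diuretic therapy, nasogastric drainage,
    wound output, perspiration, diarrhea/ostomy losses)."""
    return 30 * weight_kg + abnormal_losses_ml
```

For a 70-kg adult this gives ~2100 mL/d before any abnormal losses are added.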
In severe fluid overload, a 1-L central vein PN solution of 7% crystalline amino acids (70 g) and 21% dextrose (210 g) can temporarily provide an acceptable amount of glucose and protein substrate in the absence of significant catabolic stress. Patients who require PN or EN in the acute-care setting generally have associated hormonal adaptations to their underlying disease (e.g., increased secretion of antidiuretic hormone, aldosterone, insulin, glucagon, or cortisol), and these signals promote fluid retention and hyperglycemia. In critical illness, body weight is invariably increased due to fluid resuscitation and fluid retention. Lean-tissue accretion is minimal in the acute phase of critical illness, no matter how much protein or how many calories are provided. Because excess fluid removal can be difficult, limiting fluid intake to allow for balanced intake and output is more effective. Total energy expenditure comprises resting energy expenditure, activity energy expenditure, and the thermal effect of feeding (Chap. 97). Resting energy expenditure accounts for two-thirds of total energy expenditure, activity energy expenditure for one-fourth to one-third, and the thermal effect of feeding for ~10%. For normally nourished, healthy individuals, the total energy expenditure is ~30–35 kcal/kg. Critical illness increases resting energy expenditure, but this increase is significant only in initially well-nourished individuals with a robust SRI who experience, for example, severe multiple trauma, extensive burns, sepsis, sustained high fever, or closed head injury. In these situations, total energy expenditure can reach 40–45 kcal/kg. The chronically starved patient with adapted PEM has a reduced energy expenditure and is inactive, with a usual total energy expenditure of ~20–25 kcal/kg. Very few patients with adapted PEM require as much as 30 kcal/kg for energy balance. 
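As a rough illustration of the per-kilogram estimates above, the following sketch maps the three situations described in the text to total-energy ranges. The condition labels and function are hypothetical simplifications; actual energy needs are assessed clinically or measured directly:

```python
# kcal/kg/day ranges taken from the text
TEE_KCAL_PER_KG = {
    "healthy": (30, 35),      # normally nourished, healthy adult
    "severe_sri": (40, 45),   # robust SRI: severe trauma, burns, sepsis, closed head injury
    "adapted_pem": (20, 25),  # chronically starved, adapted PEM, inactive
}

def estimated_total_energy_kcal(weight_kg, condition):
    """Return the (low, high) total energy expenditure estimate in kcal/day."""
    low, high = TEE_KCAL_PER_KG[condition]
    return weight_kg * low, weight_kg * high
```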
Because providing ~50% of measured energy expenditure as SNS is at least as effective as 100% for the first 10 days of critical illness, actual measurement of energy expenditure generally is not necessary in the early period of SNS. However, in patients who remain critically ill beyond several weeks, in patients with severe PEM for whom estimates of energy expenditure are unreliable, and in patients who are difficult to wean from ventilators, it is reasonable to measure energy expenditure directly when the technique is available, targeting an energy intake of 100–120% of the measured energy expenditure. Insulin resistance due to the SRI is associated with increased gluconeogenesis and reduced peripheral glucose utilization, with resulting hyperglycemia. Hyperglycemia is aggravated by excessive exogenous carbohydrate administration from SNS. In critically ill patients receiving SNS, normalization of blood glucose levels by insulin infusion reduces morbidity and mortality risk. In mildly or moderately malnourished patients, it is reasonable to provide metabolic support in order to improve protein synthesis and maintain metabolic homeostasis. Hypocaloric nutrition, with provision of ~1000 kcal and 70 g protein per day for up to 10 days, requires less fluid and reduces the likelihood of poor glycemic control, although a higher protein intake would be optimal. During the second week of SNS, energy and protein provision can be advanced to 20–25 kcal/kg and 1.5 g/kg per day, respectively, as metabolic conditions permit. As mentioned above, patients with multiple trauma, closed head injury, and severe burns often have greatly elevated energy expenditures, but there is little evidence that providing >30 kcal/kg daily confers further benefit, and such high caloric intake may well be harmful as it substantially increases the risk of hyperglycemia. 
As a rule, amino acids and glucose are provided in an increasing dose until energy provision matches estimated resting energy expenditure. At this point, it becomes beneficial to add fat. A surfeit of glucose merely stimulates de novo lipogenesis—an energy-inefficient process. Polyunsaturated long-chain triglycerides (e.g., in soybean oil) are the chief ingredient in most parenteral fat emulsions and provide the majority of the fat in enteral feeding formulas. These vegetable oil–based emulsions provide essential fatty acids. The fat content of enteral feeding formulas ranges from 3% to 50% of energy. Parenteral fat is provided in separate containers as 20% and 30% emulsions that can be infused separately or mixed in the sterile pharmacy as an all-in-one or total nutrient admixture of amino acids, glucose, lipid, electrolytes, vitamins, and minerals. Although parenteral fat needs to make up only ~3% of the energy requirement in order to meet essential fatty acid requirements, when provided daily as an all-in-one mixture of carbohydrate, fat, and protein, the complete admixture has a fat content of 2–3 g/dL and provides 20–30% of the total energy requirement—an acceptable level that offers the advantage of ensuring emulsion stability. When given as a separate infusion, parenteral fat should not be provided at rates exceeding 0.11 g/kg of body mass per hour, or 100 g over 12 h—equivalent to 500 mL of 20% parenteral fat. Medium-chain triglycerides containing saturated fatty acids with chain lengths of 6, 8, 10, or 12 carbons (>95% of which are C8 and C10) are included in a number of enteral feeding formulas because they are absorbed preferentially. Fish oil contains polyunsaturated fatty acids of the omega 3 family, which improve immune function and reduce the inflammatory response. At this time, fish oil injectable emulsions are available in the United States as an investigational new drug. PN formulations provide carbohydrate as hydrous glucose (3.4 kcal/g). 
In enteral formulas, glucose is the carbohydrate source for so-called monomeric diets. These diets provide protein as amino acids and fat in minimal amounts (3%) to meet essential fatty acid requirements. Monomeric formulas are designed to optimize absorption in the seriously compromised gut. These formulas, like immune-enhancing diets, are expensive. In polymeric diets, the carbohydrate source is usually an osmotically less active polysaccharide, the protein is usually soy or casein protein, and fat is present at concentrations of 25–50%. Such formulas are usually well tolerated by patients with normal intestinal length, and some are acceptable for oral consumption. The daily protein recommendation for healthy adults is 0.8 g/kg, but body proteins are replenished faster with 1.5 g/kg in patients with PEM, and net protein catabolism is reduced in critically ill patients when 1.5–2.0 g/kg is provided. In patients who are not critically ill but who require SNS in the acute-care setting, at least 1 g of protein/kg is recommended, and larger amounts up to 1.5 g/kg are appropriate when volume, renal, and hepatic tolerances allow. The standard parenteral and enteral formulas contain protein of high biologic value and meet the requirements for the eight essential amino acids when nitrogen needs are met. Parenteral amino acid mixtures and elemental enteral mixtures consist of hydrated individual amino acids. Because of their hydrated status, elemental amino acid solutions deliver 17% less protein substrate than intact proteins. In protein-intolerant conditions such as renal and hepatic failure, modified amino acid formulas may be considered. In hepatic failure, formulas enriched in branched-chain amino acids appear to improve outcomes. Conditionally essential amino acids like arginine and glutamine may also have some benefit in supplemental amounts. Protein (nitrogen) balance provides a measure of the efficacy of parenteral or enteral SNS. 
This balance is calculated as protein intake/6.25 (because proteins are, on average, 16% nitrogen) minus the 24-h urine urea nitrogen plus 4 g of nitrogen (the latter reflecting other nitrogen losses). In critical illness, a mild negative nitrogen balance of 2–4 g/d is often achievable. A similarly mild positive nitrogen balance is observed in the nonstressed recuperating patient. Each gram of nitrogen lost or gained represents ~30 g of lean tissue. Parenteral electrolyte, vitamin, and trace mineral requirements are summarized in Tables 98e-3, 98e-4, and 98e-5, respectively. Electrolyte modifications are necessary with substantial gastrointestinal losses from nasogastric drainage or intestinal losses from fistulas, diarrhea, or ostomy outputs. Such losses also imply extra calcium, magnesium, and zinc losses. Zinc losses are high in secretory diarrhea. Secretory diarrhea contains ~12 mg of zinc/L, and patients with intestinal fistulas or chronic diarrhea require an average of ~12 mg of parenteral zinc/d (equivalent to 30 mg of oral elemental zinc) to maintain zinc balance. Excessive urinary potassium losses with amphotericin or magnesium losses with cisplatin or in renal failure necessitate adjustments in sodium, potassium, magnesium, phosphorus, and acid-base balance. Vitamin and trace element requirements are met by the daily provision of a complete parenteral vitamin supplement and trace elements via PN and by the provision of adequate amounts of enteral feeding formulas that contain these micronutrients. Iron is a highly reactive catalyst of oxidative reactions and thus is not included in PN mixtures. The parenteral iron requirement is normally only ~1 mg/d. 
[Table 98e-3. Usual daily parenteral electrolyte requirements: sodium, 1–2 meq/kg plus replacement of losses (can be as low as 5–40 meq/d); potassium, 40–100 meq/d plus replacement of unusual losses; chloride, as needed for acid-base balance, usually 2:1 to 1:1 with acetate; acetate, as needed for acid-base balance; calcium, 10–20 meq/d; magnesium, 8–16 meq/d; phosphorus, 20–40 mmol/d.]
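The nitrogen-balance arithmetic described above can be sketched as a small helper; the function names are hypothetical:

```python
def nitrogen_balance_g(protein_intake_g, urine_urea_nitrogen_g,
                       other_losses_g=4.0):
    """Nitrogen balance (g/d) = nitrogen in (protein/6.25, since protein
    is ~16% nitrogen) minus nitrogen out (24-h urine urea nitrogen plus
    ~4 g of other losses)."""
    return protein_intake_g / 6.25 - (urine_urea_nitrogen_g + other_losses_g)

def lean_tissue_change_g(nitrogen_balance):
    """Each gram of nitrogen lost or gained represents ~30 g of lean tissue."""
    return 30 * nitrogen_balance
```

For example, a patient receiving 90 g of protein with a 24-h urine urea nitrogen of 12 g is in a mild negative balance of about 1.6 g/d, corresponding to roughly 48 g of lean tissue lost per day.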
Iron deficiency occurs with considerable frequency in acutely ill hospitalized patients, especially those with PEM and gastrointestinal tract disease, and in patients subjected to frequent blood withdrawals. Iron deficiency is sometimes inadequately considered in hospitalized patients because there are more common causes: the inflammation-mediated anemia of chronic disease (with an associated increase in serum ferritin, an acute-phase protein) and redistribution of the intravascular fluid volume during prolonged bed rest. Iron deficiency should be considered in every patient receiving SNS. A falling mean red cell volume, even if still in the low-normal range, together with an intermediate serum ferritin concentration is suggestive of iron deficiency. Intravenous iron infusions follow standard guidelines, always with a termination order and never as a standing order because of the risk of inadvertent iron overdosing. Major iron replacement during critical illness is of some concern because of the possibility that a substantial rise in the serum iron concentration may increase susceptibility to some bacterial infections. 
[Table 98e-4 notes: The current vitamin D requirement—a minimum of 600 IU/day—cannot be met with available injectable vitamin formulations. Calcitriol is not equivalent to vitamin D and is not a suitable replacement for it, since it is not a substrate for 25-hydroxyvitamin D biosynthesis. A product is available without vitamin K; when the vitamin K–free product is used, vitamin K supplementation is recommended at 2–4 mg/week in patients not receiving oral anticoagulation therapy.]
[Table 98e-5 notes: Commercial products are available with the first four, the first five, and all seven of the listed trace metals in recommended amounts. The basal IV zinc requirement is approximately one-third of the oral requirement, because only approximately one-third of orally ingested zinc is absorbed.]
Parenteral feeding through a peripheral vein is limited by osmolarity and volume constraints. Solutions with an osmolarity >900 mOsm/L (e.g., those which contain >3% amino acids and 5% glucose [290 kcal/L]) are poorly tolerated peripherally. Parenteral lipid emulsions (20%) can be given to increase the calories delivered. The total volume required for a marginal amino acid provision rate of 60 g (equivalent to 50 g of protein) and a total of 1680 kcal is 2.5 L. Moreover, the risk of significant morbidity and mortality from incompatibilities of calcium and phosphate salts is greatest in these low-osmolarity, low-glucose regimens. For short-term infusions, calcium may be temporarily limited or even omitted from the mixture. Parenteral feeding via a peripheral vein is generally intended as a supplement to oral feeding; it is not suitable for the critically ill. Peripheral PN may be enhanced by small amounts of heparin (1000 U/L) and co-infusion with parenteral fat to reduce osmolarity, but volume constraints still limit the value of this therapy, especially in critical illness. PICCs may be used to infuse solutions of 20–25% dextrose and 4–7% amino acids, thus avoiding the traumatic complications of percutaneous central vein catheter placement. With PICC lines, however, flow can be position-related, and the lines cannot be exchanged over a wire for infection monitoring. It is important to withdraw blood samples carefully and appropriately from a dual-port PICC because intermixing of the blood sample with even tiny volumes of nutrient infusate will falsely indicate hyperglycemia and hyperkalemia. For all these reasons, centrally placed catheters are preferred in critical illness. The subclavian approach is best tolerated by the patient and is the easiest to dress. The jugular approach is less likely to cause a pneumothorax. Femoral vein catheterization is strongly discouraged because of the risk of catheter infection. 
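The bracketed 290 kcal/L figure for a 3% amino acid/5% glucose solution follows from standard substrate energy densities (4 kcal/g for amino acids, 3.4 kcal/g for hydrous glucose, as stated earlier). A hypothetical helper for the energy density of a two-substrate PN solution:

```python
def pn_kcal_per_litre(amino_acid_pct, dextrose_pct):
    """Energy density of a PN solution in kcal/L. A w/v percent is
    10 g/L; amino acids provide ~4 kcal/g and hydrous dextrose 3.4 kcal/g."""
    return amino_acid_pct * 10 * 4.0 + dextrose_pct * 10 * 3.4
```

A 3%/5% peripheral solution works out to 120 + 170 = 290 kcal/L, matching the figure in the text.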
For long-term feeding at home, tunneled catheters and implanted ports are used to reduce infection risk and are more acceptable to patients. Tunneled catheters require placement in the operating room. Catheters are made of Silastic®, polyurethane, or polyvinyl chloride. Silastic catheters are less thrombogenic and are best for tunneled catheters. Polyurethane is best for temporary catheters. To avoid infection, dressing changes with dry gauze should be performed at regular intervals by nurses skilled in catheter care. Chlorhexidine solution is more effective than alcohol or iodine compounds. Appropriate monitoring for patients receiving PN is summarized in Table 98e-6. Even though premixed solutions of crystalline amino acids and dextrose are in common use, the future of evidence-based PN lies in computer-controlled sterile compounders that rapidly and inexpensively generate personalized solutions that meet the specific protein and calorie goals for different patients in different clinical situations. 
[Table 98e-6. Monitoring of patients receiving parenteral nutrition (parameters are assessed daily unless otherwise specified): general sense of well-being; strength, as evidenced by getting out of bed, walking, and resistance exercise as appropriate; vital signs, including temperature, blood pressure, pulse, and respiratory rate; fluid balance, i.e., weight (recorded at least several times weekly) and fluid intake (parenteral and enteral) vs. fluid output (urine, stool, gastric drainage, wound, ostomy); parenteral nutrition delivery equipment (tubing, pump, filter, catheter, dressing); blood glucose, Na, K, Cl, HCO3, and BUN daily until stable and fully advanced, then twice weekly; serum creatinine, albumin, PO4, Ca, Mg, Hb/Hct, and WBC count at baseline, then twice weekly. Abbreviations: BUN, blood urea nitrogen; Hb, hemoglobin; Hct, hematocrit; INR, international normalized ratio; WBC, white blood cell. Source: Adapted from the chapter on this topic in Harrison’s Principles of Internal Medicine, 16e, by Lyn Howard, MD.]
For example, 1 L of a standard mixture of 5% amino acids/25% dextrose solution provides 50 g of amino acids (41.5 g of protein substrate) and 1000 kcal; the use of this solution to meet the 1.5- to 2.0-g/kg protein requirement of an acutely ill 70-kg patient requires the infusion of 2.5–3.4 L of fluid and a potentially excessively high energy dose of 2500–3300 kcal. When the body fat store is adequate, clinical evidence increasingly supports the greater safety and efficacy of high-protein, moderately hypocaloric SNS in such patients. A sterile compounder can accurately generate an appropriate recipe for such a patient. For example, 1 L of a solution including 600 mL of 15% amino acids, 300 mL of 50% dextrose, and 100 mL of electrolyte/micronutrient mix contains 75 g of protein substrate and 800 kcal; thus it is feasible to meet the patient’s protein requirement with only 1.4–1.9 L of solution and a more appropriate 1100–1520 kcal; any mild gap in energy provision is easily filled by use of intravenous lipid. COMPLICATIONS Mechanical The insertion of a central venous catheter should be performed by trained and experienced personnel using aseptic techniques to limit the major common complications of pneumothorax and inadvertent arterial puncture or injury. The catheter’s position should be radiographically confirmed to be in the superior vena cava distal to the junction with the jugular or subclavian vein and not directly against the vessel wall. Thrombosis related to the catheter may occur at the site of entry into the vein and extend to encase the catheter. Catheter infection predisposes to thrombosis, as does the SRI. The addition of 6000 U of heparin to the daily parenteral formula for hospitalized patients with temporary catheters reduces the risk of fibrin sheath formation and catheter infection. Temporary catheters that develop a thrombus should be removed and, according to clinical findings, treated with anticoagulants. 
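The volume arithmetic in the compounding example above can be sketched as a small helper; the function and parameter names are illustrative only:

```python
def pn_prescription(weight_kg, protein_g_per_kg,
                    protein_substrate_g_per_litre, kcal_per_litre):
    """Volume (L) needed to hit a protein target, and the energy that
    volume delivers, for a PN solution of known composition."""
    litres = weight_kg * protein_g_per_kg / protein_substrate_g_per_litre
    return litres, litres * kcal_per_litre

# Standard 5% amino acid/25% dextrose bag: 41.5 g protein substrate, ~1000 kcal/L
vol_std, kcal_std = pn_prescription(70, 1.5, 41.5, 1000)  # ~2.5 L, ~2530 kcal
# Compounded 15% amino acid recipe from the text: 75 g protein substrate, ~800 kcal/L
vol_cmp, kcal_cmp = pn_prescription(70, 1.5, 75, 800)     # ~1.4 L, ~1120 kcal
```

The compounded recipe meets the same protein goal with roughly 1 L less fluid and about 1400 fewer kilocalories, which is the rationale the text gives for sterile compounders.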
Thrombolytic therapy can be considered for patients with permanent catheters, depending on the ease of replacement and the presence of alternative, reasonably acceptable venous access sites. Low-dose warfarin therapy (1 mg/d) reduces the risk of thrombosis in permanent catheters used for at-home parenteral SNS, but full anticoagulation may be required for patients who have recurrent thrombosis related to permanent catheters. A recent U.S. Food and Drug Administration mandate to reformulate parenteral multivitamins to include vitamin K at a dose of 150 μg/d may affect the efficacy of low-dose warfarin therapy. A “no vitamin K” version is available for patients receiving this therapy. Catheters can become occluded due to mechanical factors; by fibrin at the tip; or by fat, minerals, or drugs intraluminally. These occlusions can be managed with low-dose alteplase for fibrin, with indwelling 70% alcohol for fat, with 0.1 N hydrochloric acid for mineral precipitates, and with either 0.1 N hydrochloric acid or 0.1 N sodium hydroxide for drugs, depending on the pH of the drug. Metabolic The most common problems caused by parenteral SNS are fluid overload and hyperglycemia (Table 98e-7). Hypertonic dextrose stimulates a much higher insulin level than meal feeding. Because insulin is a potent antinatriuretic and antidiuretic hormone, hyperinsulinemia leads to sodium and fluid retention. Consequently, in the absence of gastrointestinal losses or renal dysfunction, net fluid retention is likely when total fluid intake exceeds 2000 mL/d. Close monitoring of body mass as well as of fluid intake and output is necessary to prevent this complication. In the absence of significant renal impairment, the sodium content of the urine is likely to be <10 meq/L. 
Provision of sodium in limited amounts (40 meq/d) and the use of both glucose and fat in the PN mixture will reduce serum glucose levels and help reduce fluid retention. The elevated insulin level also increases the intracellular transport of potassium, magnesium, and phosphorus, which can precipitate a dangerous re-feeding syndrome if the total glucose content of the PN solution is advanced too quickly in severely malnourished patients. To assess glucose tolerance, it is generally best to start PN with <200 g of glucose/d. Regular insulin can be added to the PN formula to establish glycemic control, and the insulin doses can be increased proportionately as the glucose content is advanced. As a general rule, patients with insulin-dependent diabetes require about twice their usual at-home insulin dose when receiving PN at 20–25 kcal/kg, largely as a consequence of parenteral glucose administration and some loss of insulin to the formula’s container. As a rough estimate, the amount of insulin provided can be proportionately similar to the number of calories provided as total parenteral nutrition (TPN) relative to full feeding, and the insulin can be placed in the TPN formula. Subcutaneous regular insulin can be provided to improve glucose control as assessed by measurements of blood glucose every 6 h. About two-thirds of the total 24-h amount can be added to the next day’s order, with SC insulin supplements as needed. Advances in the TPN glucose concentration should be made when reasonable glucose control is established, and the insulin dose can be adjusted proportionately to the calories added as glucose and amino acids. These are general rules, and they are conservative. Given the adverse clinical impact of hyperglycemia, it may be necessary to use intensive insulin therapy as a separate infusion with a standard protocol to initially establish control. Once control is established, this insulin dose can be added to the PN formula. 
Acid-base imbalance is also common during parenteral SNS. Amino acid formulas are buffered, but critically ill patients are prone to metabolic acidosis, often due to renal tubular impairment. The use of sodium and potassium acetate salts in the PN formula may address this problem. Bicarbonate salts should not be used because they are incompatible with TPN formulations. Nasogastric drainage produces hypochloremic alkalosis that can be managed by attention to chloride balance. Occasionally, hydrochloric acid may be required for a more rapid response or when diuretic therapy limits the ability to provide substantial sodium chloride. Up to 100 meq/L and up to 150 meq of hydrochloric acid per day may be placed in a fat-free TPN formula. Infectious Infections of the central access catheter rarely occur in the first 72 h. Fever during this period is usually attributable to infection elsewhere or another cause. Fever that develops during parenteral SNS can be addressed by checking the catheter site and, if the site looks clean, exchanging the catheter over a wire, with cultures taken through the catheter and at the catheter tip. If these cultures are negative, as they usually are, the new catheter can continue to be used. If a culture is positive for a relatively nonpathogenic bacterium like Staphylococcus epidermidis, a second exchange over a wire with repeat cultures or replacement of the catheter can be considered in light of the clinical circumstances. If cultures are positive for more pathogenic bacteria or for fungi like Candida albicans, it is generally best to replace the catheter at a new site. Whether antibiotic treatment is required is a clinical decision, but C. albicans grown from the blood culture in a patient receiving PN should always be treated with an antifungal drug because the consequences of failure to treat can be dire. Catheter infections can be minimized by dedicating the feeding catheter to TPN, without blood sampling or medication administration. 
Central catheter infections are a serious complication, with an attributed mortality rate of 12–25%. Fewer than three infections per 1000 catheter-days should occur in central venous catheters dedicated to feeding. At-home TPN catheter infections may be treated through the catheter without its removal, particularly if the offending organism is S. epidermidis. Clearing of the biofilm and fibrin sheath by local treatment of the catheter with indwelling alteplase may increase the likelihood of eradication. Antibiotic lock therapy with high concentrations of antibiotic, with or without heparin, in addition to systemic therapy may improve efficacy. Sepsis with hypotension should precipitate catheter removal in either the temporary or the permanent TPN setting.

The types of enteral feeding tubes, methods of insertion, their clinical uses, and potential complications are outlined in Table 98e-8. The different types of enteral formulas are listed in Table 98e-9. Patients receiving EN are at risk for many of the same metabolic complications as those who receive PN and should be monitored in the same manner. EN can be a source of similar problems, but not to the same degree, because the insulin response to EN is about half of that to PN. Enteral feeding formulas have fixed electrolyte compositions that are generally modest in sodium and somewhat higher in potassium. Acid-base disturbances can be addressed to a more limited extent with EN.
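The infection benchmark quoted above is expressed per 1000 catheter-days; computing it from raw counts is straightforward. A minimal sketch (the example counts are hypothetical):

```python
def infections_per_1000_catheter_days(infections, catheter_days):
    """Standard surveillance rate: infection episodes per 1000 days of
    catheter use."""
    return infections / catheter_days * 1000

# Hypothetical example: 2 infections over 800 catheter-days of dedicated
# feeding-catheter use.
rate = infections_per_1000_catheter_days(2, 800)  # 2.5 per 1000
meets_benchmark = rate < 3  # text benchmark: fewer than 3 per 1000
```

At 2.5 infections per 1000 catheter-days, this hypothetical program would fall within the benchmark stated above.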
TABLE 98e-8 Types of Enteral Feeding Tubes (tube type | insertion and position verification | clinical use | potential complications)
Nasogastric tube | External measurement: nostril, ear, xiphisternum; tube stiffened by ice water or stylet; position verified by air injection and auscultation or by x-ray | Short-term clinical situation (weeks) or longer periods with intermittent insertion; bolus feeding is simpler, but continuous drip with pump is better tolerated | Aspiration; ulceration of nasal and esophageal tissues, leading to stricture
Nasoduodenal tube | External measurement: nostril, ear, anterior superior iliac spine; tube stiffened by stylet and passed through pylorus under fluoroscopy or with endoscopic loop | Short-term clinical situations where gastric emptying is impaired or proximal leak is suspected; requires continuous drip with pump | Spontaneous pulling back into stomach (position verified by aspirating content, pH >6); diarrhea common, fiber-containing formulas may help
Gastrostomy tube | Percutaneous placement endoscopically, radiologically, or surgically; after track is established, can be converted to a gastric "button" | Long-term clinical situations, swallowing disorders, or impaired small-bowel absorption requiring continuous drip | Aspiration; irritation around tube exit site; peritoneal leak; balloon migration and obstruction of pylorus
Jejunostomy tube | Percutaneous placement endoscopically or radiologically via pylorus, or endoscopically or surgically directly into the jejunum | Long-term clinical situations where gastric emptying is impaired; requires continuous drip with pump; direct endoscopic placement (PEJ) | Clogging or displacement of tube; jejunal fistula if large-bore tube is used; diarrhea from dumping; irritation of surgical anchoring
Abbreviation: PEJ, percutaneous endoscopic jejunostomy. Note: All small tubes are at risk for clogging, especially if used for crushed medications. In long-term enteral nutrition patients, gastrostomy and jejunostomy tubes can be exchanged for a low-profile "button" once the track is established. Source: Adapted from the chapter on this topic in Harrison's Principles of Internal Medicine, 16e, by Lyn Howard, MD.
TABLE 98e-9 Enteral Formulas
Standard formula characteristics:
1. Caloric density: 1 kcal/mL
2. Protein: ~14% cals (caseinates, soy, lactalbumin)
3. Carbohydrate: ~60% cals (hydrolyzed corn starch, maltodextrin, sucrose)
4. Fat: ~30% cals (corn, soy, safflower oils)
5. Recommended daily intake of all minerals and vitamins in >1500 kcal/d
6. Osmolality: ~300 mosmol/kg
Modified formulas (formula and costa | clinical indication):
1. Caloric density 1.5–2 kcal/mL (+) | Fluid-restricted patients
2. Protein
   b. Hydrolyzed protein to small peptides (+) | Impaired absorption
   c. ↑ Arginine, glutamine, nucleotides, ω3 fat (+++) | Immune-enhancing diets
   d. ↑ Branched-chain amino acids, ↓ aromatic amino acids (+++) | Liver failure patients intolerant of 0.8 g of protein/kg
   e. Low protein of high biologic value | Renal failure patients for brief periods if critically ill
3. Fat
   b. ↑ Fat (>40% cals) (++) | Pulmonary failure with CO2 retention on standard formula, limited utility
4. Fiber: provided as soy polysaccharide (+) | Improved laxation
aCost: +, inexpensive; ++, moderately expensive; +++, very expensive. Note: ARDS, acute respiratory distress syndrome; MCT, medium-chain triglyceride; MUFA, monounsaturated fatty acids; ω3 or ω6, polyunsaturated fat with first double bond at carbon 3 (fish oils) or carbon 6 (vegetable oils). Source: Adapted from the chapter on this topic in Harrison's Principles of Internal Medicine, 16e, by Lyn Howard, MD.

Acetate salts can be added to the formula to treat chronic metabolic acidosis. Calcium chloride can be added to treat mild chronic metabolic alkalosis. Medications and other additives to enteral feeding formulas can clog the tubes (e.g., calcium chloride may interact with casein-based formulas to form insoluble calcium caseinate products) and may reduce the efficacy of some drugs (e.g., phenytoin). Since small-bore tubes are easily displaced, tube position should be checked at intervals by aspirating and measuring the pH of the gut fluid (normal: <4 in the stomach, >6 in the jejunum).
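As a worked example of the standard formula composition above, the macronutrient load per liter can be derived from the caloric distribution. This is a sketch only: the table gives approximate percentages, and the 4/4/9 kcal/g energy factors are standard Atwater values that are not stated in the table.

```python
def grams_per_liter(kcal_per_ml, fraction_of_cals, kcal_per_gram):
    """Grams of a macronutrient per liter of formula, given its share of
    total calories and its energy density."""
    kcal_per_liter = kcal_per_ml * 1000
    return kcal_per_liter * fraction_of_cals / kcal_per_gram

# Standard 1-kcal/mL formula: ~14% protein, ~60% carbohydrate, ~30% fat.
protein_g = grams_per_liter(1.0, 0.14, 4)  # 35 g/L
carb_g = grams_per_liter(1.0, 0.60, 4)     # 150 g/L
fat_g = grams_per_liter(1.0, 0.30, 9)      # ~33 g/L
```

So a liter of standard formula supplies roughly 35 g protein, 150 g carbohydrate, and 33 g fat.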
COMPLICATIONS Aspiration The debilitated patient with poor gastric emptying and impairment of swallowing and cough is at risk for aspiration; this complication is particularly common among patients who are mechanically ventilated. Tracheal suctioning induces coughing and gastric regurgitation, and cuffs on endotracheal or tracheostomy tubes seldom protect against aspiration. Preventive measures include elevating the head of the bed to 30°, using nurse-directed algorithms for formula advancement, combining enteral with parenteral feeding, and using post–ligament of Treitz feeding. Tube feeding should not be discontinued for gastric residuals of <300 mL unless there are other signs of gastrointestinal intolerance, such as nausea, vomiting, or abdominal distention. Continuous feeding using pumps is better tolerated intragastrically than bolus feeding and is essential for feeding into the jejunum. For small-bowel feeding, residuals are not assessed, but abdominal pain and distention should be monitored. Diarrhea Enteral feeding often leads to diarrhea, especially if bowel function is compromised by disease or drugs (most often, broad-spectrum antibiotics). Sorbitol used to flavor some medications can also cause diarrhea. Diarrhea may be controlled by the use of a continuous drip, with a fiber-containing formula, or by the addition of an antidiarrheal agent to the formula. However, Clostridium difficile, which is a common cause of diarrhea in patients being tube-fed, should be ruled out as the etiology before antidiarrheal agents are used. H2 blockers may help reduce the net volume of fluid presented to the colon. Diarrhea associated with enteral feeding does not necessarily imply inadequate absorption of nutrients other than water and electrolytes. Amino acids and glucose are particularly well absorbed in the upper small bowel except in the most diseased or shortest bowel. 
Since luminal nutrients exert trophic effects on the gut mucosa, it is often appropriate to persist with tube feeding despite diarrhea, even when this course necessitates supplemental parenteral fluid support. Apart from conditions with drastically diminished small-intestinal absorptive function, there are no established indications for short peptide–based or elemental formulas. In the United States, the only parenteral lipid emulsion available is made with soybean oil, whose constituent fatty acids have been suggested to be immunosuppressive under certain circumstances. In Europe and Japan, a number of other lipid emulsions are available, including those containing fish oil only; mixtures of fish oil, medium-chain triglycerides, and long-chain triglycerides as olive oil and/or soybean oil; mixtures of medium-chain triglycerides and long-chain triglycerides as soybean oil; and long-chain triglyceride mixtures as olive oil and soybean oil, which may be more beneficial in terms of metabolism and hepatic and immune function. Furthermore, a glutamine-containing dipeptide for inclusion in TPN formulas is available in Europe and may be helpful in terms of immune function and resistance to infection, although a recent study using a larger-than-recommended dose was associated with net harm. The authors acknowledge the contributions of Lyn Howard, MD, the author in earlier editions of HPIM, to material in this chapter.

Chapter 99 Approach to the Patient with Cancer
Dan L. Longo

The application of current treatment techniques (surgery, radiation therapy, chemotherapy, and biologic therapy) results in the cure of nearly two of three patients diagnosed with cancer. Nevertheless, patients experience the diagnosis of cancer as one of the most traumatic and revolutionary events that has ever happened to them. Independent of prognosis, the diagnosis brings with it a change in a person's self-image and in his or her role in the home and workplace.
The prognosis of a person who has just been found to have pancreatic cancer is the same as the prognosis of the person with aortic stenosis who develops the first symptoms of congestive heart failure (median survival, ~8 months). However, the patient with heart disease may remain functional and maintain a self-image as a fully intact person with just a malfunctioning part, a diseased organ (“a bum ticker”). By contrast, the patient with pancreatic cancer has a completely altered self-image and is viewed differently by family and anyone who knows the diagnosis. He or she is being attacked and invaded by a disease that could be anywhere in the body. Every ache or pain takes on desperate significance. Cancer is an exception to the coordinated interaction among cells and organs. In general, the cells of a multicellular organism are programmed for collaboration. Many diseases occur because the specialized cells fail to perform their assigned task. Cancer takes this malfunction one step further. Not only is there a failure of the cancer cell to maintain its specialized function, but it also strikes out on its own; the cancer cell competes to survive using natural mutability and natural selection to seek advantage over normal cells in a recapitulation of evolution. One consequence of the traitorous behavior of cancer cells is that the patient feels betrayed by his or her body. The cancer patient feels that he or she, and not just a body part, is diseased. No nationwide cancer registry exists; therefore, the incidence of cancer is estimated on the basis of the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) database, which tabulates cancer incidence and death figures from 13 sites, accounting for about 10% of the U.S. population, and from population data from the U.S. Census Bureau. 
In 2014, 1.665 million new cases of invasive cancer (855,220 men, 810,320 women) were diagnosed, and 585,720 persons (310,010 men, 275,710 women) died from cancer. The percent distribution of new cancer cases and cancer deaths by site for men and women is shown in Table 99-1. Cancer incidence has been declining by about 2% each year since 1992. Cancer is the cause of one in four deaths in the United States. The most significant risk factor for cancer overall is age; two-thirds of all cases were in those older than age 65 years. Cancer incidence increases as the third, fourth, or fifth power of age in different sites. For the interval between birth and age 49 years, 1 in 29 men and 1 in 19 women will develop cancer; for the interval between ages 50 and 59 years, 1 in 15 men and 1 in 17 women will develop cancer; for the interval between ages 60 and 69 years, 1 in 6 men and 1 in 10 women will develop cancer; and for people age 70 and older, 1 in 3 men and 1 in 4 women will develop cancer. Overall, men have a 44% risk of developing cancer at some time during their lives; women have a 38% lifetime risk. Source: From R Siegel et al: Cancer statistics, 2014. CA Cancer J Clin 64:9, 2014. Cancer is the second leading cause of death behind heart disease. Deaths from heart disease have declined 45% in the United States since 1950 and continue to decline. Cancer has overtaken heart disease as the number one cause of death in persons younger than age 85 years. Incidence trends over time are shown in Fig. 99-1. After a 70-year period of increase, cancer deaths began to decline in 1990–1991 (Fig. 99-2). Between 1990 and 2010, cancer deaths decreased by 21% among men and 12.3% among women. The magnitude of the decline is illustrated in Fig. 99-3. The five leading causes of cancer deaths are shown for various populations in Table 99-2. The 5-year survival for white patients was 39% in 1960–1963 and 69% in 2003–2009. 
Cancers are more often deadly in blacks; the 5-year survival was 61% for the 2003–2009 interval; however, the racial differences are narrowing over time. Incidence and mortality vary among racial and ethnic groups (Table 99-3). The basis for these differences is unclear.

FIGURE 99-1 Incidence rates for particular types of cancer over the last 35 years in men (A) and women (B). (From R Siegel et al: CA Cancer J Clin 64:9, 2014.)

In 2008, 12.7 million new cancer cases and 7.6 million cancer deaths were estimated worldwide, according to estimates of GLOBOCAN 2008, developed by the International Agency for Research on Cancer (IARC). When broken down by region of the world, ~45% of cases were in Asia, 26% in Europe, 14.5% in North America, 7.1% in Central/South America, 6% in Africa, and 1% in Australia/New Zealand (Fig. 99-4). Lung cancer is the most common cancer and the most common cause of cancer death in the world. Its incidence is highly variable, affecting only 2 per 100,000 African women but as many as 61 per 100,000 North American men. Breast cancer is the second most common cancer worldwide; however, it ranks fifth as a cause of death behind lung, stomach, liver, and colorectal cancer. Among the eight most common forms of cancer, lung (2-fold), breast (3-fold), prostate (2.5-fold), and colorectal (3-fold) cancers are more common in more developed countries than in less developed countries. By contrast, liver (2-fold), cervical (2-fold), and esophageal (2- to 3-fold) cancers are more common in less developed countries. Stomach cancer incidence is similar in more and less developed countries but is much more common in Asia than in North America or Africa. The most common cancers in Africa are cervical, breast, and liver cancers. It has been estimated that nine modifiable risk factors are responsible for more than one-third of cancers worldwide.
These include smoking, alcohol consumption, obesity, physical inactivity, low fruit and vegetable consumption, unsafe sex, air pollution, indoor smoke from household fuels, and contaminated injections. Important information is obtained from every portion of the routine history and physical examination. The duration of symptoms may reveal the chronicity of disease. The past medical history may alert the physician to the presence of underlying diseases that may affect the choice of therapy or the side effects of treatment. The social history may reveal occupational exposure to carcinogens or habits, such as smoking or alcohol consumption, that may influence the course of disease and its treatment. The family history may suggest an underlying familial cancer predisposition and point out the need to begin surveillance or other preventive therapy for unaffected siblings of the patient. The review of systems may suggest early symptoms of metastatic disease or a paraneoplastic syndrome. The diagnosis of cancer relies most heavily on invasive tissue biopsy and should never be made without obtaining tissue; no noninvasive diagnostic test is sufficient to define a disease process as cancer. Although in rare clinical settings (e.g., thyroid nodules), fine-needle aspiration is an acceptable diagnostic procedure, the diagnosis generally depends on obtaining adequate tissue to permit careful evaluation of the histology of the tumor, its grade, and its invasiveness and to yield further molecular diagnostic information, such as the expression of cell-surface markers or intracellular proteins that typify a particular cancer, or the presence of a molecular marker, such as the t(8;14) translocation of Burkitt’s lymphoma. Increasing evidence links the expression of certain genes with the prognosis and response to therapy (Chaps. 101e and 102e). 
Occasionally a patient will present with a metastatic disease process that is defined as cancer on biopsy but has no apparent primary site of disease. Efforts should be made to define the primary site based on age, sex, sites of involvement, histology and tumor markers, and personal and family history. Particular attention should be focused on ruling out the most treatable causes (Chap. 120e). Once the diagnosis of cancer is made, the management of the patient is best undertaken as a multidisciplinary collaboration among the primary care physician, medical oncologists, surgical oncologists, radiation oncologists, oncology nurse specialists, pharmacists, social workers, rehabilitation medicine specialists, and a number of other consulting professionals working closely with each other and with the patient and family.

FIGURE 99-2 Eighty-year trend in cancer death rates by site in the United States, 1930–2010: all sites combined (A), individual sites in men (B), and individual sites in women (C). Rates are per 100,000, age-adjusted to the 2000 U.S. standard population. (From R Siegel et al: CA Cancer J Clin 64:9, 2014.)

FIGURE 99-3 The decline in death rates from cancer is shown for different age ranges by sex and race for the 20-year period between 1991 and 2010, expressed as a percentage of the 1991 rate. (From R Siegel et al: CA Cancer J Clin 64:9, 2014.)

The first priority in patient management after the diagnosis of cancer is established and shared with the patient is to determine the extent of disease. The curability of a tumor usually is inversely proportional to the tumor burden. Ideally, the tumor will be diagnosed before symptoms develop or as a consequence of screening efforts (Chap. 100). A very high proportion of such patients can be cured. However, most patients with cancer present with symptoms related to the cancer, caused either by mass effects of the tumor or by alterations associated with the production of cytokines or hormones by the tumor. For most cancers, the extent of disease is evaluated by a variety of noninvasive and invasive diagnostic tests and procedures. This process is called staging. There are two types. Clinical staging is based on physical examination, radiographs, isotopic scans, computed tomography (CT) scans, and other imaging procedures; pathologic staging takes into account information obtained during a surgical procedure, which might include intraoperative palpation, resection of regional lymph nodes and/or tissue adjacent to the tumor, and inspection and biopsy of organs commonly involved in disease spread. Pathologic staging includes histologic examination of all tissues removed during the surgical procedure. Surgical procedures performed may include a simple lymph node biopsy or more extensive procedures such as thoracotomy, mediastinoscopy, or laparotomy. Surgical staging may occur in a separate procedure or may be done at the time of definitive surgical resection of the primary tumor.

Knowledge of the predilection of particular tumors for spreading to adjacent or distant organs helps direct the staging evaluation. Information obtained from staging is used to define the extent of disease as localized, as exhibiting spread outside of the organ of origin to regional but not distant sites, or as metastatic to distant sites. The most widely used system of staging is the TNM (tumor, node, metastasis) system codified by the International Union Against Cancer and the American Joint Committee on Cancer. The TNM classification is an anatomically based system that categorizes the tumor on the basis of the size of the primary tumor lesion (T1–4, where a higher number indicates a tumor of larger size), the presence of nodal involvement (usually N0 and N1 for the absence and presence, respectively, of involved nodes, although some tumors have more elaborate systems of nodal grading), and the presence of metastatic disease (M0 and M1 for the absence and presence, respectively, of metastases). The various permutations of T, N, and M scores (sometimes including tumor histologic grade [G]) are then broken into stages, usually designated by the roman numerals I through IV. Tumor burden increases and curability decreases with increasing stage. Other anatomic staging systems are used for some tumors, e.g., the Dukes classification for colorectal cancers, the International Federation of Gynecologists and Obstetricians classification for gynecologic cancers, and the Ann Arbor classification for Hodgkin's disease. Certain tumors cannot be grouped on the basis of anatomic considerations. For example, hematopoietic tumors such as leukemia, myeloma, and lymphoma are often disseminated at presentation and do not spread like solid tumors. For these tumors, other prognostic factors have been identified (Chaps. 132-136).

In addition to tumor burden, a second major determinant of treatment outcome is the physiologic reserve of the patient. Patients who are bedridden before developing cancer are likely to fare worse, stage for stage, than fully active patients. Physiologic reserve is a determinant of how a patient is likely to cope with the physiologic stresses imposed by the cancer and its treatment. This factor is difficult to assess directly.
Instead, surrogate markers for physiologic reserve are used, such as the patient's age or Karnofsky performance status (Table 99-4) or Eastern Cooperative Oncology Group (ECOG) performance status (Table 99-5). Older patients and those with a Karnofsky performance status <70 or ECOG performance status ≥3 have a poor prognosis unless the poor performance is a reversible consequence of the tumor. Increasingly, biologic features of the tumor are being related to prognosis. The expression of particular oncogenes, drug-resistance genes, apoptosis-related genes, and genes involved in metastasis is being found to influence response to therapy and prognosis. The presence of selected cytogenetic abnormalities may influence survival. Tumors with higher growth fractions, as assessed by expression of proliferation-related markers such as proliferating cell nuclear antigen, behave more aggressively than tumors with lower growth fractions. Information obtained from studying the tumor itself will increasingly be used to influence treatment decisions. Host genes involved in drug metabolism can influence the safety and efficacy of particular treatments. Enormous heterogeneity has been noted by studying tumors; we have learned that morphology is not capable of discerning certain distinct subsets of patients whose tumors have different sets of abnormalities. Tumors that look the same by light microscopy can be very different. Similarly, tumors that look quite different from one another histologically can share genetic lesions that predict responses to treatments. Furthermore, tumor cells vary enormously within a single patient even though the cells share a common origin.

TABLE 99-2 The Five Leading Primary Tumor Sites for Patients Dying of Cancer Based on Age and Sex in 2010

TABLE 99-3 Cancer Incidence and Mortality in Racial and Ethnic Groups, United States, 2006–2010 (rates per 100,000, by site and sex, for five racial and ethnic groups). aBased on Indian Health Service delivery areas. Source: From R Siegel et al: Cancer statistics, 2014. CA Cancer J Clin 64:9, 2014.

MAKING A TREATMENT PLAN
From information on the extent of disease and the prognosis and in conjunction with the patient's wishes, it is determined whether the treatment approach should be curative or palliative in intent. Cooperation among the various professionals involved in cancer treatment is of the utmost importance in treatment planning. For some cancers, chemotherapy or chemotherapy plus radiation therapy delivered before the use of definitive surgical treatment (so-called neoadjuvant therapy) may improve the outcome, as seems to be the case for locally advanced breast cancer and head and neck cancers. In certain settings in which combined-modality therapy is intended, coordination among the medical oncologist, radiation oncologist, and surgeon is crucial to achieving optimal results. Sometimes the chemotherapy and radiation therapy need to be delivered sequentially, and other times concurrently. Surgical procedures may precede or follow other treatment approaches. It is best for the treatment plan either to follow a standard protocol precisely or else to be part of an ongoing clinical research protocol evaluating new treatments. Ad hoc modifications of standard protocols are likely to compromise treatment results. The choice of treatment approaches was formerly dominated by the local culture in both the university and the practice settings. However, it is now possible to gain access electronically to standard treatment protocols and to every approved clinical research study in North America through a personal computer interface with the Internet.1

1The National Cancer Institute maintains a database called PDQ (Physician Data Query) that is accessible on the Internet under the name CancerNet at www.cancer.gov/cancertopics/pdq/cancerdatabase. Information can be obtained through a facsimile machine using CancerFax by dialing 301-402-5874. Patient information is also provided by the National Cancer Institute in at least three formats: on the Internet via CancerNet at www.cancer.gov, through the CancerFax number listed above, or by calling 1-800-4-CANCER. The quality control for the information provided through these services is rigorous.

Because cancer therapies are toxic (Chap. 103e), patient management involves addressing complications of both the disease and its treatment as well as the complex psychosocial problems associated with cancer. In the short term during a course of curative therapy, the patient's functional status may decline. Treatment-induced toxicity is less acceptable if the goal of therapy is palliation. The most common side effects of treatment are nausea and vomiting (see below), febrile neutropenia (Chap. 104), and myelosuppression (Chap. 103e). Tools are now available to minimize the acute toxicity of cancer treatment. New symptoms developing in the course of cancer treatment should always be assumed to be reversible until proven otherwise.
The fatalistic attribution of anorexia, weight loss, and jaundice to recurrent or progressive tumor could result in a patient dying from a reversible intercurrent cholecystitis. Intestinal obstruction may be due to reversible adhesions rather than progressive tumor. Systemic infections, sometimes with unusual pathogens, may be a consequence of the immunosuppression associated with cancer therapy. Some drugs used to treat cancer or its complications (e.g., nausea) may produce central nervous system symptoms that can mimic paraneoplastic syndromes such as the syndrome of inappropriate antidiuretic hormone. A definitive diagnosis should be pursued and may even require a repeat biopsy.

FIGURE 99-4 Worldwide overall annual cancer incidence (n = 10,864,499), mortality (n = 6,724,931), and 5-year prevalence (n = 24,576,453) for the period of 1993–2001, by cancer registry. (Adapted from A Jemal et al: Cancer Epidemiol Biomarkers Prev 19:1893, 2010.)

The skilled physician also has much to offer the patient for whom curative therapy is no longer an option. Often a combination of guilt and frustration over the inability to cure the patient and the pressure of a busy schedule greatly limit the time a physician spends with a patient who is receiving only palliative care. Resist these forces. In addition to the medicines administered to alleviate symptoms (see below), it is important to remember the comfort that is provided by holding the patient's hand, continuing regular examinations, and taking time to talk.

A critical component of cancer management is assessing the response to treatment. In addition to a careful physical examination in which all sites of disease are physically measured and recorded in a flow chart by date, response assessment usually requires periodic repeating of imaging tests that were abnormal at the time of staging. If imaging tests have become normal, repeat biopsy of previously involved tissue is performed to document complete response by pathologic criteria. Biopsies are not usually required if there is macroscopic residual disease. A complete response is defined as disappearance of all evidence of disease, and a partial response as >50% reduction in the sum of the products of the perpendicular diameters of all measurable lesions. The determination of partial response may also be based on a 30% decrease in the sums of the longest diameters of lesions (Response Evaluation Criteria in Solid Tumors [RECIST]). Progressive disease is defined as the appearance of any new lesion or an increase of >25% in the sum of the products of the perpendicular diameters of all measurable lesions (or an increase of 20% in the sums of the longest diameters by RECIST). Tumor shrinkage or growth that does not meet any of these criteria is considered stable disease. Some sites of involvement (e.g., bone) or patterns of involvement (e.g., lymphangitic lung or diffuse pulmonary infiltrates) are considered unmeasurable. No response is complete without biopsy documentation of their resolution, but partial responses may exclude their assessment unless clear objective progression has occurred.
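The bidimensional response definitions above amount to a simple classification rule. The sketch below codes the sum-of-products thresholds exactly as stated in the text (the unidimensional 30%/20% RECIST thresholds would be coded analogously); lesion measurements are (longest diameter, perpendicular diameter) pairs in consistent units.

```python
def classify_response(baseline, current, new_lesion=False):
    """Classify treatment response from bidimensional lesion measurements.

    baseline, current: lists of (longest, perpendicular) diameters for the
    same lesions. Rules from the text:
      complete response:   disappearance of all evidence of disease
      partial response:    >50% fall in the sum of products of diameters
      progressive disease: any new lesion, or >25% rise in that sum
      stable disease:      anything in between
    """
    spd_base = sum(l * p for l, p in baseline)
    spd_now = sum(l * p for l, p in current)
    if new_lesion:
        return "progressive disease"
    if spd_now == 0:
        return "complete response"
    change = (spd_now - spd_base) / spd_base
    if change > 0.25:
        return "progressive disease"
    if change < -0.50:
        return "partial response"
    return "stable disease"
```

For example, two lesions shrinking from 4 × 3 and 2 × 2 to 2 × 1.5 and 1 × 1 reduce the sum of products from 16 to 4, a 75% fall, and are scored a partial response.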
Karnofsky performance status
Status: Functional Capability of the Patient
100: Normal; no complaints; no evidence of disease
90: Able to carry on normal activity; minor signs or symptoms of disease
80: Normal activity with effort; some signs or symptoms of disease
70: Cares for self; unable to carry on normal activity or do active work

The Eastern Cooperative Oncology Group (ECOG) performance status
Grade 0: Fully active, able to carry on all predisease performance without restriction
Grade 1: Restricted in physically strenuous activity but ambulatory and able to carry out work of a light or sedentary nature, e.g., light housework, office work
Grade 2: Ambulatory and capable of all self-care but unable to carry out any work activities; up and about more than 50% of waking hours
Grade 3: Capable of only limited self-care; confined to bed or chair more than 50% of waking hours
Grade 4: Completely disabled; cannot carry on any self-care; totally confined to bed or chair
Grade 5: Dead
Source: From MM Oken et al: Am J Clin Oncol 5:649, 1982.

TABLE 99-6 Tumor markers
Tumor Marker | Cancer | Non-Neoplastic Conditions
Human chorionic gonadotropin | Gestational trophoblastic disease, gonadal germ cell tumor | Pregnancy
Calcitonin | Medullary cancer of the thyroid |
α-Fetoprotein | Hepatocellular carcinoma, gonadal germ cell tumor | Cirrhosis, hepatitis
Carcinoembryonic antigen | Adenocarcinomas of the colon, pancreas, lung, breast, ovary | Pancreatitis, hepatitis, inflammatory bowel disease, smoking
Prostatic acid phosphatase | Prostate cancer | Prostatitis, prostatic hypertrophy
Neuron-specific enolase | Small-cell cancer of the lung, neuroblastoma |
Lactate dehydrogenase | Lymphoma, Ewing’s sarcoma | Hepatitis, hemolytic anemia, many others
Abbreviation: MGUS, monoclonal gammopathy of uncertain significance.

Tumor markers may be useful in patient management in certain tumors. Response to therapy may be difficult to gauge with certainty.
However, some tumors produce or elicit the production of markers that can be measured in the serum or urine, and in a particular patient, rising and falling levels of the marker are usually associated with increasing or decreasing tumor burden, respectively. Some clinically useful tumor markers are shown in Table 99-6. Tumor markers are not in themselves specific enough to permit a diagnosis of malignancy to be made, but once a malignancy has been diagnosed and shown to be associated with elevated levels of a tumor marker, the marker can be used to assess response to treatment.

The recognition and treatment of depression are important components of management. The incidence of depression in cancer patients is ~25% overall and may be greater in patients with greater debility. This diagnosis is likely in a patient with a depressed mood (dysphoria) and/or a loss of interest in pleasure (anhedonia) for at least 2 weeks. In addition, three or more of the following symptoms are usually present: appetite change, sleep problems, psychomotor retardation or agitation, fatigue, feelings of guilt or worthlessness, inability to concentrate, and suicidal ideation. Patients with these symptoms should receive therapy. Medical therapy with a serotonin reuptake inhibitor such as fluoxetine (10–20 mg/d), sertraline (50–150 mg/d), or paroxetine (10–20 mg/d) or a tricyclic antidepressant such as amitriptyline (50–100 mg/d) or desipramine (75–150 mg/d) should be tried, allowing 4–6 weeks for response. Effective therapy should be continued at least 6 months after resolution of symptoms. If therapy is unsuccessful, other classes of antidepressants may be used. In addition to medication, psychosocial interventions such as support groups, psychotherapy, and guided imagery may be of benefit.

Many patients opt for unproven or unsound approaches to treatment when it appears that conventional medicine is unlikely to be curative.
Those seeking such alternatives are often well educated and may be early in the course of their disease. Unsound approaches are usually hawked on the basis of unsubstantiated anecdotes and not only cannot help the patient but may be harmful. Physicians should strive to keep communications open and nonjudgmental, so that patients are more likely to discuss with the physician what they are actually doing. The appearance of unexpected toxicity may be an indication that a supplemental therapy is being taken.2

At the completion of treatment, sites originally involved with tumor are reassessed, usually by radiography or imaging techniques, and any persistent abnormality is biopsied. If disease persists, the multidisciplinary team discusses a new salvage treatment plan. If the patient has been rendered disease-free by the original treatment, the patient is followed regularly for disease recurrence. The optimal guidelines for follow-up care are not known. For many years, a routine practice has been to follow the patient monthly for 6–12 months, then every other month for a year, every 3 months for a year, every 4 months for a year, every 6 months for a year, and then annually. At each visit, a battery of laboratory, radiographic, and imaging tests was obtained on the assumption that it is best to detect recurrent disease before it becomes symptomatic. However, where follow-up procedures have been examined, this assumption has been found to be untrue. Studies of breast cancer, melanoma, lung cancer, colon cancer, and lymphoma have all failed to support the notion that asymptomatic relapses are more readily cured by salvage therapy than symptomatic relapses. In view of the enormous cost of a full battery of diagnostic tests and their manifest lack of impact on survival, new guidelines are emerging for less frequent follow-up visits, during which the history and physical examination are the major investigations performed.
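The traditional visit cadence described above can be written out as a small schedule generator. A sketch, assuming the 12-month version of the initial "monthly for 6–12 months" phase and expressing visits as month offsets from the end of treatment (`followup_months` is an illustrative name, not a published protocol):

```python
def followup_months(total_years=5):
    """Traditional follow-up schedule as month offsets from the end of
    treatment: monthly in year 1, every other month in year 2, every
    3 months in year 3, every 4 months in year 4, every 6 months in
    year 5, then annually out to total_years."""
    months = list(range(1, 13))            # monthly, year 1
    months += list(range(14, 25, 2))       # every other month, year 2
    months += list(range(27, 37, 3))       # every 3 months, year 3
    months += list(range(40, 49, 4))       # every 4 months, year 4
    months += list(range(54, 61, 6))       # every 6 months, year 5
    months += [12 * y for y in range(6, total_years + 1)]  # annually thereafter
    return months
```

With the defaults this yields 27 visits over 5 years, front-loaded in the first year, which illustrates why attaching a full battery of tests to every visit is so costly.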
As time passes, the likelihood of recurrence of the primary cancer diminishes. For many types of cancer, survival for 5 years without recurrence is tantamount to cure. However, important medical problems can occur in patients treated for cancer and must be examined (Chap. 125). Some problems emerge as a consequence of the disease and some as a consequence of the treatment. An understanding of these disease- and treatment-related problems may help in their detection and management. Despite these concerns, most patients who are cured of cancer return to normal lives.

In many ways, the success of cancer therapy depends on the success of the supportive care. Failure to control the symptoms of cancer and its treatment may lead patients to abandon curative therapy. Of equal importance, supportive care is a major determinant of quality of life. Even when life cannot be prolonged, the physician must strive to preserve its quality. Quality-of-life measurements have become common endpoints of clinical research studies. Furthermore, palliative care has been shown to be cost-effective when approached in an organized fashion. A credo for oncology could be to cure sometimes, to extend life often, and to comfort always.

Pain Pain occurs with variable frequency in the cancer patient: 25–50% of patients present with pain at diagnosis, 33% have pain associated with treatment, and 75% have pain with progressive disease. The pain may have several causes. In ~70% of cases, pain is caused by the tumor itself—by invasion of bone, nerves, blood vessels, or mucous membranes or obstruction of a hollow viscus or duct.

2Information about unsound methods may be obtained from the National Council Against Health Fraud, Box 1276, Loma Linda, CA 92354, or from the Center for Medical Consumers and Health Care Information, 237 Thompson Street, New York, NY 10012.
In ~20% of cases, pain is related to a surgical or invasive medical procedure, to radiation injury (mucositis, enteritis, or plexus or spinal cord injury), or to chemotherapy injury (mucositis, peripheral neuropathy, phlebitis, steroid-induced aseptic necrosis of the femoral head). In 10% of cases, pain is unrelated to cancer or its treatment. Assessment of pain requires the methodical investigation of the history of the pain, its location, character, temporal features, provocative and palliative factors, and intensity (Chap. 18); a review of the oncologic history and past medical history as well as personal and social history; and a thorough physical examination. The patient should be given a 10-division visual analogue scale on which to indicate the severity of the pain. The clinical condition is often dynamic, making it necessary to reassess the patient frequently. Pain therapy should not be withheld while the cause of pain is being sought. A variety of tools are available with which to address cancer pain. About 85% of patients will have pain relief from pharmacologic intervention. However, other modalities, including antitumor therapy (such as surgical relief of obstruction, radiation therapy, and strontium-89 or samarium-153 treatment for bone pain), neurostimulatory techniques, regional analgesia, or neuroablative procedures, are effective in an additional 12% or so. Thus, very few patients will have inadequate pain relief if appropriate measures are taken. A specific approach to pain relief is detailed in Chap. 10.

Nausea Emesis in the cancer patient is usually caused by chemotherapy (Chap. 103e). Its severity can be predicted from the drugs used to treat the cancer. Three forms of emesis are recognized on the basis of their timing with regard to the noxious insult. Acute emesis, the most common variety, occurs within 24 h of treatment. Delayed emesis occurs 1–7 days after treatment; it is rare, but, when present, usually follows cisplatin administration.
Anticipatory emesis occurs before administration of chemotherapy and represents a conditioned response to visual and olfactory stimuli previously associated with chemotherapy delivery. Acute emesis is the best understood form. Stimuli that activate signals in the chemoreceptor trigger zone in the medulla, the cerebral cortex, and peripherally in the intestinal tract lead to stimulation of the vomiting center in the medulla, the motor center responsible for coordinating the secretory and muscle contraction activity that leads to emesis. Diverse receptor types participate in the process, including dopamine, serotonin, histamine, opioid, and acetylcholine receptors. The serotonin receptor antagonists ondansetron and granisetron are the most effective drugs against highly emetogenic agents, but they are expensive. As with the analgesia ladder, emesis therapy should be tailored to the situation. For mildly and moderately emetogenic agents, prochlorperazine, 5–10 mg PO or 25 mg PR, is effective. Its efficacy may be enhanced by administering the drug before the chemotherapy is delivered. Dexamethasone, 10–20 mg IV, is also effective and may enhance the efficacy of prochlorperazine. For highly emetogenic agents such as cisplatin, mechlorethamine, dacarbazine, and streptozocin, combinations of agents work best and administration should begin 6–24 h before treatment. Ondansetron, 8 mg PO every 6 h the day before therapy and IV on the day of therapy, plus dexamethasone, 20 mg IV before treatment, is an effective regimen. Addition of oral aprepitant (a substance P/neurokinin 1 receptor antagonist) to this regimen (125 mg on day 1, 80 mg on days 2 and 3) further decreases the risk of both acute and delayed vomiting. Like pain, emesis is easier to prevent than to alleviate. 
Delayed emesis may be related to bowel inflammation from the therapy and can be controlled with oral dexamethasone and oral metoclopramide, a dopamine receptor antagonist that also blocks serotonin receptors at high dosages. The best strategy for preventing anticipatory emesis is to control emesis in the early cycles of therapy to prevent the conditioning from taking place. If this is unsuccessful, prophylactic antiemetics the day before treatment may help. Experimental studies are evaluating behavior modification.

Effusions Fluid may accumulate abnormally in the pleural cavity, pericardium, or peritoneum. Asymptomatic malignant effusions may not require treatment. Symptomatic effusions occurring in tumors responsive to systemic therapy usually do not require local treatment but respond to the treatment for the underlying tumor. Symptomatic effusions occurring in tumors unresponsive to systemic therapy may require local treatment in patients with a life expectancy of at least 6 months. Pleural effusions due to tumors may or may not contain malignant cells. Lung cancer, breast cancer, and lymphomas account for ~75% of malignant pleural effusions. Their exudative nature is usually gauged by an effusion/serum protein ratio of ≥0.5 or an effusion/serum lactate dehydrogenase ratio of ≥0.6. When the condition is symptomatic, thoracentesis is usually performed first. In most cases, symptomatic improvement occurs for <1 month. Chest tube drainage is required if symptoms recur within 2 weeks. Fluid is aspirated until the flow rate is <100 mL in 24 h. Then either 60 units of bleomycin or 1 g of doxycycline is infused into the chest tube in 50 mL of 5% dextrose in water; the tube is clamped; the patient is rotated on four sides, spending 15 min in each position; and, after 1–2 h, the tube is again attached to suction for another 24 h. The tube is then disconnected from suction and allowed to drain by gravity.
If <100 mL drains over the next 24 h, the chest tube is pulled, and a radiograph is taken 24 h later. If the chest tube continues to drain fluid at an unacceptably high rate, sclerosis can be repeated. Bleomycin may be somewhat more effective than doxycycline but is very expensive. Doxycycline is usually the drug of first choice. If neither doxycycline nor bleomycin is effective, talc can be used.

Symptomatic pericardial effusions are usually treated by creating a pericardial window or by stripping the pericardium. If the patient’s condition does not permit a surgical procedure, sclerosis can be attempted with doxycycline and/or bleomycin. Malignant ascites is usually treated with repeated paracentesis of small volumes of fluid. If the underlying malignancy is unresponsive to systemic therapy, peritoneovenous shunts may be inserted. Despite the fear of disseminating tumor cells into the circulation, widespread metastases are an unusual complication. The major complications are occlusion, leakage, and fluid overload. Patients with severe liver disease may develop disseminated intravascular coagulation.

Nutrition Cancer and its treatment may lead to a decrease in nutrient intake of sufficient magnitude to cause weight loss and alteration of intermediary metabolism. The prevalence of this problem is difficult to estimate because of variations in the definition of cancer cachexia, but most patients with advanced cancer experience weight loss and decreased appetite. A variety of both tumor-derived factors (e.g., bombesin, adrenocorticotropic hormone) and host-derived factors (e.g., tumor necrosis factor, interleukins 1 and 6, growth hormone) contribute to the altered metabolism, and a vicious cycle is established in which protein catabolism, glucose intolerance, and lipolysis cannot be reversed by the provision of calories. It remains controversial how to assess nutritional status and when and how to intervene.
Efforts to make the assessment objective have included the use of a prognostic nutritional index based on albumin levels, triceps skinfold thickness, transferrin levels, and delayed-type hypersensitivity skin testing. However, a simpler approach has been to define the threshold for nutritional intervention as >10% unexplained body weight loss, serum transferrin level <1500 mg/L (150 mg/dL), and serum albumin <34 g/L (3.4 g/dL). The decision is important, because it appears that cancer therapy is substantially more toxic and less effective in the face of malnutrition. Nevertheless, it remains unclear whether nutritional intervention can alter the natural history. Unless some pathology is affecting the absorptive function of the gastrointestinal tract, enteral nutrition provided orally or by tube feeding is preferred over parenteral supplementation. However, the risks associated with the tube may outweigh the benefits. Megestrol acetate, a progestational agent, has been advocated as a pharmacologic intervention to improve nutritional status. Research in this area may provide more tools in the future as cytokine-mediated mechanisms are further elucidated.

Psychosocial Support The psychosocial needs of patients vary with their situation. Patients undergoing treatment experience fear, anxiety, and depression. Self-image is often seriously compromised by deforming surgery and loss of hair. Women who receive cosmetic advice that enables them to look better also feel better. Loss of control over how one spends time can contribute to the sense of vulnerability. Juggling the demands of work and family with the demands of treatment may create enormous stresses. Sexual dysfunction is highly prevalent and needs to be discussed openly with the patient. An empathetic health care team is sensitive to the individual patient’s needs and permits negotiation where such flexibility will not adversely affect the course of treatment. Cancer survivors have other sets of difficulties.
Patients may have fears associated with the termination of a treatment they associate with their continued survival. Adjustments are required to physical losses and handicaps, real and perceived. Patients may be preoccupied with minor physical problems. They perceive a decline in their job mobility and view themselves as less desirable workers. They may be victims of job and/or insurance discrimination. Patients may experience difficulty reentering their normal past life. They may feel guilty for having survived and may carry a sense of vulnerability to colds and other illnesses. Perhaps the most pervasive and threatening concern is the ever-present fear of relapse (the Damocles syndrome). Patients in whom therapy has been unsuccessful have other problems related to the end of life.

Death and Dying The most common causes of death in patients with cancer are infection (leading to circulatory failure), respiratory failure, hepatic failure, and renal failure. Intestinal blockage may lead to inanition and starvation. Central nervous system disease may lead to seizures, coma, and central hypoventilation. About 70% of patients develop dyspnea preterminally. However, many months usually pass between the diagnosis of cancer and the occurrence of these complications, and during this period, the patient is severely affected by the possibility of death. The path of unsuccessful cancer treatment usually occurs in three phases. First, there is optimism at the hope of cure; when the tumor recurs, there is the acknowledgment of an incurable disease, and the goal of palliative therapy is embraced in the hope of being able to live with disease; finally, at the disclosure of imminent death, another adjustment in outlook takes place. The patient imagines the worst in preparation for the end of life and may go through stages of adjustment to the diagnosis. These stages include denial, isolation, anger, bargaining, depression, acceptance, and hope.
Of course, patients do not all progress through all the stages or proceed through them in the same order or at the same rate. Nevertheless, developing an understanding of how the patient has been affected by the diagnosis and is coping with it is an important goal of patient management. It is best to speak frankly with the patient and the family regarding the likely course of disease. These discussions can be difficult for the physician as well as for the patient and family. The critical features of the interaction are to reassure the patient and family that everything that can be done to provide comfort will be done. They will not be abandoned. Many patients prefer to be cared for in their homes or in a hospice setting rather than a hospital. The American College of Physicians has published a book called Home Care Guide for Cancer: How to Care for Family and Friends at Home that teaches an approach to successful problem-solving in home care. With appropriate planning, it should be possible to provide the patient with the necessary medical care as well as the psychological and spiritual support that will prevent the isolation and depersonalization that can attend in-hospital death.

The care of dying patients may take a toll on the physician. A “burnout” syndrome has been described that is characterized by fatigue, disengagement from patients and colleagues, and a loss of self-fulfillment. Efforts at stress reduction, maintenance of a balanced life, and setting realistic goals may combat this disorder.

End-of-Life Decisions Unfortunately, a smooth transition in treatment goals from curative to palliative may not be possible in all cases because of the occurrence of serious treatment-related complications or rapid disease progression. Vigorous and invasive medical support for a reversible disease or treatment complication is assumed to be justified.
However, if the reversibility of the condition is in doubt, the patient’s wishes determine the level of medical care. These wishes should be elicited before the terminal phase of illness and reviewed periodically. Information about advance directives can be obtained from the American Association of Retired Persons, 601 E Street, NW, Washington, DC 20049, 202-434-2277, or Choice in Dying, 250 West 57th Street, New York, NY 10107, 212-366-5540. Some states allow physicians to assist patients who choose to end their lives. This subject is challenging from an ethical and a medical point of view. Discussions of end-of-life decisions should be candid and involve clear informed consent, waiting periods, second opinions, and documentation. A full discussion of end-of-life management is in Chap. 10.

Chapter 100 Prevention and Early Detection of Cancer
Jennifer M. Croswell, Otis W. Brawley, Barnett S. Kramer

Improved understanding of carcinogenesis has allowed cancer prevention and early detection (also known as cancer control) to expand beyond the identification and avoidance of carcinogens. Specific interventions to prevent cancer in those at risk, and effective screening for early detection of cancer, are the goals. Carcinogenesis is not an event but a process, a continuum of discrete tissue and cellular changes over time resulting in aberrant physiologic processes. Prevention concerns the identification and manipulation of the biologic, environmental, social, and genetic factors in the causal pathway of cancer.

Public education on the avoidance of identified risk factors for cancer and encouraging healthy habits contributes to cancer prevention and control. The clinician is a powerful messenger in this process. The patient-provider encounter provides an opportunity to teach patients about the hazards of smoking, the features of a healthy lifestyle, use of proven cancer screening methods, and avoidance of excessive sun exposure.
Tobacco smoking is a strong, modifiable risk factor for cardiovascular disease, pulmonary disease, and cancer. Smokers have an approximately 1 in 3 lifetime risk of dying prematurely from a tobacco-related cancer, cardiovascular, or pulmonary disease. Tobacco use causes more deaths from cardiovascular disease than from cancer. Lung cancer and cancers of the larynx, oropharynx, esophagus, kidney, bladder, pancreas, and stomach are all tobacco-related. The number of cigarettes smoked per day and the level of inhalation of cigarette smoke are correlated with risk of lung cancer mortality. Light and low-tar cigarettes are not safer, because smokers tend to inhale them more frequently and deeply. Those who stop smoking have a 30–50% lower 10-year lung cancer mortality rate compared to those who continue smoking, despite the fact that some carcinogen-induced gene mutations persist for years after smoking cessation. Smoking cessation and avoidance would save more lives than any other public health activity.

The risk of tobacco smoke is not limited to the smoker. Environmental tobacco smoke, known as secondhand or passive smoke, causes lung cancer and other cardiopulmonary diseases in nonsmokers.

Tobacco use prevention is a pediatric issue. More than 80% of adult American smokers began smoking before the age of 18 years. Approximately 20% of Americans in grades 9 through 12 have smoked a cigarette in the past month. Counseling of adolescents and young adults is critical to prevent smoking. A clinician’s simple advice can be of benefit. Providers should query patients on tobacco use and offer smokers assistance in quitting.

Current approaches to smoking cessation recognize smoking as an addiction (Chap. 470). The smoker who is quitting goes through identifiable stages that include contemplation of quitting, an action phase in which the smoker quits, and a maintenance phase.
Smokers who quit completely are more likely to be successful than those who gradually reduce the number of cigarettes smoked or change to lower-tar or lower-nicotine cigarettes. More than 90% of the Americans who have successfully quit smoking did so on their own, without participation in an organized cessation program, but cessation programs are helpful for some smokers. The Community Intervention Trial for Smoking Cessation (COMMIT) was a 4-year program showing that light smokers (<25 cigarettes per day) were more likely to benefit from simple cessation messages and cessation programs than those who did not receive an intervention. Quit rates were 30.6% in the intervention group and 27.5% in the control group. The COMMIT interventions were unsuccessful in heavy smokers (≥25 cigarettes per day). Heavy smokers may need an intensive broad-based cessation program that includes counseling, behavioral strategies, and pharmacologic adjuncts, such as nicotine replacement (gum, patches, sprays, lozenges, and inhalers), bupropion, and/or varenicline.

The health risks of cigars are similar to those of cigarettes. Smoking one or two cigars daily doubles the risk for oral and esophageal cancers; smoking three or four cigars daily increases the risk of oral cancers more than eightfold and esophageal cancer fourfold. The risks of occasional use are unknown.

Smokeless tobacco also represents a substantial health risk. Chewing tobacco is a carcinogen linked to dental caries, gingivitis, oral leukoplakia, and oral cancer. The systemic effects of smokeless tobacco (including snuff) may increase risks for other cancers. Esophageal cancer is linked to carcinogens in tobacco dissolved in saliva and swallowed. The net effects of e-cigarettes on health are poorly studied. Whether they aid in smoking cessation or serve as a “gateway” for nonsmoking children to acquire a smoking habit is debated.

Physical activity is associated with a decreased risk of colon and breast cancer.
A variety of mechanisms have been proposed. However, such studies are prone to confounding factors such as recall bias, association of exercise with other health-related practices, and effects of preclinical cancers on exercise habits (reverse causality).

International epidemiologic studies suggest that diets high in fat are associated with increased risk for cancers of the breast, colon, prostate, and endometrium. These cancers have their highest incidence and mortalities in Western cultures, where fat composes an average of one-third of the total calories consumed. Despite correlations, dietary fat has not been proven to cause cancer. Case-control and cohort epidemiologic studies give conflicting results. In addition, diet is a highly complex exposure to many nutrients and chemicals. Low-fat diets are associated with many dietary changes beyond simple subtraction of fat. Other lifestyle changes are also associated with adherence to a low-fat diet.

In observational studies, dietary fiber is associated with a reduced risk of colonic polyps and invasive cancer of the colon. However, cancer-protective effects of increasing fiber and lowering dietary fat have not been proven in the context of a prospective clinical trial. The putative protective mechanisms are complex and speculative. Fiber binds oxidized bile acids and generates soluble fiber products, such as butyrate, that may have differentiating properties. Fiber does not increase bowel transit times. Two large prospective cohort studies of >100,000 health professionals showed no association between fruit and vegetable intake and risk of cancer. The Polyp Prevention Trial randomly assigned 2000 elderly persons, who had polyps removed, to a low-fat, high-fiber diet versus routine diet for 4 years. No differences were noted in polyp formation.

The U.S. National Institutes of Health Women’s Health Initiative, launched in 1994, was a long-term clinical trial enrolling >100,000 women age 45–69 years.
It placed women in 22 intervention groups. Participants received calcium/vitamin D supplementation; hormone replacement therapy; and counseling to increase exercise, eat a low-fat diet with increased consumption of fruits, vegetables, and fiber, and cease smoking. The study showed that although dietary fat intake was lower in the diet intervention group, invasive breast cancers were not reduced over an 8-year follow-up period compared to the control group. No reduction was seen in the incidence of colorectal cancer in the dietary intervention arm. The difference in dietary fat averaged ∼10% between the two groups. Evidence does not currently establish the anticarcinogenic value of vitamin, mineral, or nutritional supplements in amounts greater than those provided by a balanced diet.

Risk of cancer appears to increase as body mass index increases beyond 25 kg/m2. Obesity is associated with increased risk for cancers of the colon, breast (female postmenopausal), endometrium, kidney (renal cell), and esophagus, although causality has not been established. In observational studies, relative risks of colon cancer are increased in obesity by 1.5–2 for men and 1.2–1.5 for women. Obese postmenopausal women have a 30–50% increased relative risk of breast cancer. An unproven hypothesis for the association is that adipose tissue serves as a depot for aromatase that facilitates estrogen production.

Nonmelanoma skin cancers (basal cell and squamous cell) are induced by cumulative exposure to ultraviolet (UV) radiation. Intermittent acute sun exposure and sun damage have been linked to melanoma, but the evidence is inconsistent. Sunburns, especially in childhood and adolescence, may be associated with an increased risk of melanoma in adulthood. Reduction of sun exposure through use of protective clothing and changing patterns of outdoor activities can reduce skin cancer risk.
Sunscreens decrease the risk of actinic keratoses, the precursor to squamous cell skin cancer, but melanoma risk may not be reduced. Sunscreens prevent burning, but they may encourage more prolonged exposure to the sun and may not filter out wavelengths of energy that cause melanoma. Educational interventions to help individuals assess their risk of developing skin cancer have some impact. In particular, appearance-focused behavioral interventions in young women can decrease indoor tanning use and other UV exposures. Self-examination for skin pigment characteristics associated with skin cancer, such as freckling, may be useful in identifying people at high risk. Those who recognize themselves as being at risk tend to be more compliant with sun-avoidance recommendations. Risk factors for melanoma include a propensity to sunburn, a large number of benign melanocytic nevi, and atypical nevi.

Chemoprevention involves the use of specific natural or synthetic chemical agents to reverse, suppress, or prevent carcinogenesis before the development of invasive malignancy. Cancer develops through an accumulation of tissue abnormalities associated with genetic and epigenetic changes and altered growth regulatory pathways; these are potential points of intervention to prevent cancer. The initial changes are termed initiation. The alteration can be inherited or acquired through the action of physical, infectious, or chemical carcinogens. Like most human diseases, cancer arises from an interaction between genetics and environmental exposures (Table 100-1). Influences that cause the initiated cell and its surrounding tissue microenvironment to progress through the carcinogenic process and change phenotypically are termed promoters. Promoters include hormones such as androgens, linked to prostate cancer, and estrogen, linked to breast and endometrial cancer.
The distinction between an initiator and a promoter is not sharp; some components of cigarette smoke are “complete carcinogens,” acting as both initiators and promoters. Cancer can be prevented or controlled through interference with the factors that cause cancer initiation, promotion, or progression. Compounds of interest in chemoprevention often have antimutagenic, hormone modulation, anti-inflammatory, antiproliferative, or proapoptotic activity (or a combination). 

TABLE 100-1 Suspected Carcinogensa
Alkylating agents: Acute myeloid leukemia, bladder cancer
Androgens: Prostate cancer
Aromatic amines (dyes): Bladder cancer
Arsenic: Cancer of the lung, skin
Asbestos: Cancer of the lung, pleura, peritoneum
Benzene: Acute myelocytic leukemia
Chromium: Lung cancer
Diethylstilbestrol (prenatal): Vaginal cancer (clear cell)
Epstein-Barr virus: Burkitt’s lymphoma, nasal T cell lymphoma
Estrogens: Cancer of the endometrium, liver, breast
Ethyl alcohol: Cancer of the breast, liver, esophagus, head and neck
Helicobacter pylori: Gastric cancer, gastric MALT lymphoma
Hepatitis B or C virus: Liver cancer
Human immunodeficiency virus: Non-Hodgkin’s lymphoma, Kaposi’s sarcoma, squamous cell carcinomas (especially of the urogenital tract)
Human papilloma virus: Cancers of the cervix, anus, oropharynx
Human T cell lymphotropic virus type 1 (HTLV-1): Adult T cell leukemia/lymphoma
Immunosuppressive agents (azathioprine, cyclosporine, glucocorticoids): Non-Hodgkin’s lymphoma
Ionizing radiation (therapeutic or diagnostic): Breast, bladder, thyroid, soft tissue, bone, hematopoietic, and many more
Nitrogen mustard gas: Cancer of the lung, head and neck, nasal sinuses
Nickel dust: Cancer of the lung, nasal sinuses
Diesel exhaust: Lung cancer (miners)
Phenacetin: Cancer of the renal pelvis and bladder
Polycyclic hydrocarbons: Cancer of the lung, skin (especially squamous cell carcinoma of scrotal skin)
Radon gas: Lung cancer
Schistosomiasis: Bladder cancer (squamous cell)
Sunlight (ultraviolet): Skin cancer (squamous cell and melanoma)
Tobacco (including smokeless): Cancer of the upper aerodigestive tract, bladder
Vinyl chloride: Liver cancer (angiosarcoma)
aAgents that are thought to act as cancer initiators and/or promoters. 

Smoking causes diffuse epithelial injury in the oral cavity, neck, esophagus, and lung. Patients cured of squamous cell cancers of the lung, esophagus, oral cavity, and neck are at risk (as high as 5% per year) of developing second cancers of the upper aerodigestive tract. Cessation of cigarette smoking does not markedly decrease the cured cancer patient’s risk of second malignancy, even though it does lower the cancer risk in those who have never developed a malignancy. Smoking cessation may halt the early stages of the carcinogenic process (such as metaplasia), but it may have no effect on late stages of carcinogenesis. This “field carcinogenesis” hypothesis for upper aerodigestive tract cancer has made “cured” patients an important population for chemoprevention of second malignancies. Oral human papilloma virus (HPV) infection, particularly HPV-16, increases the risk for cancers of the oropharynx. This association exists even in the absence of other risk factors such as smoking or alcohol use (although the magnitude of increased risk appears greater than additive when HPV infection and smoking are both present). Oral HPV infection is believed to be largely sexually acquired. Although no direct evidence currently exists to confirm the hypothesis, the introduction of the HPV vaccine may eventually reduce oropharyngeal cancer rates. Oral leukoplakia, a premalignant lesion commonly found in smokers, has been used as an intermediate marker of chemopreventive activity in smaller, shorter-duration, randomized, placebo-controlled trials. Therapy with high, relatively toxic doses of isotretinoin (13-cis-retinoic acid) causes regression of oral leukoplakia; response was associated with upregulation of retinoic acid receptor-β (RAR-β). 
However, the lesions recur when the therapy is withdrawn, suggesting the need for long-term administration. More tolerable doses of isotretinoin have not shown benefit in the prevention of head and neck cancer. Isotretinoin also failed to prevent second malignancies in patients cured of early-stage non-small cell lung cancer; mortality rates were actually increased in current smokers. Several large-scale trials have assessed agents in the chemoprevention of lung cancer in patients at high risk. In the α-Tocopherol/β-Carotene (ATBC) Lung Cancer Prevention Trial, participants were male smokers, age 50–69 years at entry. Participants had smoked an average of one pack of cigarettes per day for 35.9 years. Participants received α-tocopherol, β-carotene, and/or placebo in a randomized, two-by-two factorial design. After median follow-up of 6.1 years, lung cancer incidence and mortality were statistically significantly increased in those receiving β-carotene. α-Tocopherol had no effect on lung cancer mortality, and no evidence suggested interaction between the two drugs. Patients receiving α-tocopherol had a higher incidence of hemorrhagic stroke. The β-Carotene and Retinol Efficacy Trial (CARET) involved 17,000 American smokers and workers with asbestos exposure. Entrants were randomly assigned to one of four arms and received β-carotene, retinol, and/or placebo in a two-by-two factorial design. This trial also demonstrated harm from β-carotene: a lung cancer rate of 5 per 1000 subjects per year for those taking placebo and of 6 per 1000 subjects per year for those taking β-carotene. The ATBC and CARET results demonstrate the importance of testing chemoprevention hypotheses thoroughly before their widespread implementation because the results contradict a number of observational studies. 
The Physicians’ Health Study showed no change in the risk of lung cancer for those taking β-carotene; however, fewer of its participants were smokers than those in the ATBC and CARET studies. Many colon cancer prevention trials are based on the premise that most colorectal cancers develop from adenomatous polyps. These trials use adenoma recurrence or disappearance as a surrogate endpoint (not yet validated) for colon cancer prevention. Early clinical trial results suggest that nonsteroidal anti-inflammatory drugs (NSAIDs), such as piroxicam, sulindac, and aspirin, may prevent adenoma formation or cause regression of adenomatous polyps. The mechanism of action of NSAIDs is unknown, but they are presumed to work through the cyclooxygenase pathway. Although two randomized controlled trials (the Physicians’ Health Study and the Women’s Health Study) did not show an effect of aspirin on colon cancer or adenoma incidence in persons with no previous history of colonic lesions after 10 years of therapy, randomized trials in persons with a previous history of colonic adenomas did show an approximately 18% relative risk reduction for adenoma incidence after 1 year. Pooled findings from observational cohort studies do demonstrate a 22% and 28% relative reduction in colorectal cancer and adenoma incidence, respectively, with regular aspirin use, and a well-conducted meta-analysis of four randomized controlled trials (albeit primarily designed to examine aspirin’s effects on cardiovascular events) found that aspirin at doses of at least 75 mg resulted in a 24% relative reduction in colorectal cancer incidence after 20 years, with no clear increase in efficacy at higher doses. Cyclooxygenase-2 (COX-2) inhibitors have also been considered for colorectal cancer and polyp prevention. 
Trials with COX-2 inhibitors were initiated, but an increased risk of cardiovascular events in those taking the COX-2 inhibitors was noted, suggesting that these agents are not suitable for chemoprevention in the general population. Epidemiologic studies suggest that diets high in calcium lower colon cancer risk. Calcium binds bile and fatty acids, which cause proliferation of colonic epithelium. It is hypothesized that calcium reduces intraluminal exposure to these compounds. The randomized controlled Calcium Polyp Prevention Study found that calcium supplementation decreased the absolute risk of adenomatous polyp recurrence by 7% at 4 years; extended observational follow-up demonstrated a 12% absolute risk reduction 5 years after cessation of treatment. However, in the Women’s Health Initiative, combined use of calcium carbonate and vitamin D twice daily did not reduce the incidence of invasive colorectal cancer compared with placebo after 7 years. The Women’s Health Initiative demonstrated that postmenopausal women taking estrogen plus progestin have a 44% lower relative risk of colorectal cancer compared to women taking placebo. Of >16,600 women randomized and followed for a median of 5.6 years, 43 invasive colorectal cancers occurred in the hormone group and 72 in the placebo group. The positive effect on colon cancer is mitigated by the modest increase in cardiovascular and breast cancer risks associated with combined estrogen plus progestin therapy. A case-control study suggested that statins decrease the incidence of colorectal cancer; however, several subsequent case-control and cohort studies have not demonstrated an association between regular statin use and a reduced risk of colorectal cancer. No randomized controlled trials have addressed this hypothesis. A meta-analysis of statin use showed no protective effect of statins on overall cancer incidence or death. 
Tamoxifen is an antiestrogen with partial estrogen agonistic activity in some tissues, such as endometrium and bone. One of its actions is to upregulate transforming growth factor β, which decreases breast cell proliferation. In randomized placebo-controlled trials to assess tamoxifen as adjuvant therapy for breast cancer, tamoxifen reduced the number of new breast cancers in the opposite breast by more than a third. In a randomized placebo-controlled prevention trial involving >13,000 pre- and postmenopausal women at high risk, tamoxifen decreased the risk of developing breast cancer by 49% (from 43.4 to 22 per 1000 women) after a median follow-up of nearly 6 years. Tamoxifen also reduced bone fractures; a small increase in risk of endometrial cancer, stroke, pulmonary emboli, and deep vein thrombosis was noted. The International Breast Cancer Intervention Study (IBIS-I) and the Italian Randomized Tamoxifen Prevention Trial also demonstrated a reduction in breast cancer incidence with tamoxifen use. A trial comparing tamoxifen with another selective estrogen receptor modulator, raloxifene, in postmenopausal women showed that raloxifene is comparable to tamoxifen in cancer prevention. This trial only included postmenopausal women. Raloxifene was associated with more invasive breast cancers and a trend toward more noninvasive breast cancers, but fewer thromboembolic events than tamoxifen; the drugs are similar in risks of other cancers, fractures, ischemic heart disease, and stroke. Both tamoxifen and raloxifene (the latter for postmenopausal women only) have been approved by the U.S. Food and Drug Administration (FDA) for reduction of breast cancer in women at high risk for the disease (1.66% risk at 5 years based on the Gail risk model: http://www.cancer.gov/bcrisktool/). Because the aromatase inhibitors are even more effective than tamoxifen in adjuvant breast cancer therapy, it has been hypothesized that they would be more effective in breast cancer prevention. 
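As an aside, the tamoxifen prevention-trial figures quoted above (43.4 vs 22 cases per 1000 high-risk women over roughly 6 years of follow-up) can be restated in absolute terms. The sketch below applies only standard arithmetic; the two event rates are the ones quoted in the text, and everything else is derived:

```python
# Converting the quoted tamoxifen prevention-trial rates into relative risk
# reduction (RRR), absolute risk reduction (ARR), and number needed to treat
# (NNT). Only the two event rates come from the trial as quoted in the text;
# the NNT applies over the trial's follow-up period, not per year.

control_rate = 43.4 / 1000   # placebo arm: breast cancers per woman
treated_rate = 22.0 / 1000   # tamoxifen arm

rrr = 1 - treated_rate / control_rate   # relative risk reduction
arr = control_rate - treated_rate       # absolute risk reduction
nnt = 1 / arr                           # number needed to treat

print(f"RRR ≈ {rrr:.0%}, ARR ≈ {arr * 1000:.1f} per 1000, NNT ≈ {nnt:.0f}")
```

The ∼49% relative reduction matches the figure quoted in the text; the absolute framing (about 21 cancers averted per 1000 women treated) is often more informative when weighing the thromboembolic and endometrial risks noted above.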
A randomized, placebo-controlled trial of exemestane reported a 65% relative reduction (from 5.5 to 1.9 per 1000 women) in the incidence of invasive breast cancer in women at elevated risk after a median follow-up of about 3 years. Common adverse effects included arthralgias, hot flashes, fatigue, and insomnia. No trial has directly compared aromatase inhibitors with selective estrogen receptor modulators for breast cancer chemoprevention. Finasteride and dutasteride are 5-α-reductase inhibitors. They inhibit conversion of testosterone to dihydrotestosterone (DHT), a potent stimulator of prostate cell proliferation. The Prostate Cancer Prevention Trial (PCPT) randomly assigned men age 55 years or older at average risk of prostate cancer to finasteride or placebo. All men in the trial were being regularly screened with prostate-specific antigen (PSA) levels and digital rectal examination. After 7 years of therapy, the incidence of prostate cancer was 18.4% in the finasteride arm, compared with 24.4% in the placebo arm, a statistically significant difference. However, the finasteride group had more patients with tumors of Gleason score 7 and higher compared with the placebo arm (6.4 vs 5.1%). Reassuringly, long-term (10–15 years) follow-up did not reveal any statistically significant differences in overall mortality between all men in the finasteride and placebo arms or in men diagnosed with prostate cancer; differences in prostate cancer in favor of finasteride persisted. Dutasteride has also been evaluated as a preventive agent for prostate cancer. The Reduction by Dutasteride of Prostate Cancer Events (REDUCE) trial was a randomized double-blind trial in which approximately 8200 men with an elevated PSA (2.5–10 ng/mL for men age 50–60 years and 3–10 ng/mL for men age 60 years or older) and negative prostate biopsy on enrollment received daily 0.5 mg of dutasteride or placebo. 
The trial found a statistically significant 23% relative risk reduction in the incidence of biopsy-detected prostate cancer in the dutasteride arm at 4 years of treatment (659 vs 858 cases). Overall, across years 1 through 4, there was no difference between the arms in the number of tumors with a Gleason score of 7 to 10; however, during years 3 and 4, there was a statistically significant excess of tumors with Gleason score of 8 to 10 in the dutasteride arm (12 tumors vs 1 tumor). The clinical importance of the apparent increased incidence of higher-grade tumors in the 5-α-reductase inhibitor arms of these trials is controversial. It may simply represent an increased sensitivity of PSA and digital rectal exam for high-grade tumors in men receiving these agents. The FDA has analyzed both trials, and it determined that the use of a 5-α-reductase inhibitor for prostate cancer chemoprevention would result in one additional high-grade (Gleason score 8 to 10) prostate cancer for every three to four lower-grade (Gleason score <6) tumors averted. Although it acknowledged that detection bias may have accounted for the finding, it stated that it could not conclusively dismiss a causative role for 5-α-reductase inhibitors. These agents are therefore not FDA-approved for prostate cancer prevention. Because all men in both the PCPT and REDUCE trials were being screened and because screening approximately doubles the rate of prostate cancer, it is not known if finasteride or dutasteride decreases the risk of prostate cancer in men who are not being screened. Several favorable laboratory and observational studies led to the formal evaluation of selenium and α-tocopherol (vitamin E) as potential prostate cancer preventives. The Selenium and Vitamin E Cancer Prevention Trial (SELECT) assigned 35,533 men to receive 200 μg/d selenium, 400 IU/d α-tocopherol, selenium plus vitamin E, or placebo. 
After a median follow-up of 7 years, a trend toward an increased risk of developing prostate cancer was observed for those men taking vitamin E alone as compared to the placebo arm (hazard ratio 1.17; 95% confidence interval, 1.004–1.36). A number of infectious agents cause cancer. Hepatitis B and C are linked to liver cancer; some HPV strains are linked to cervical, anal, and head and neck cancer; and Helicobacter pylori is associated with gastric adenocarcinoma and gastric lymphoma. Vaccines to protect against these agents may reduce the risk of their associated cancers. The hepatitis B vaccine is effective in preventing hepatitis and hepatomas due to chronic hepatitis B infection. A quadrivalent HPV vaccine (covering HPV strains 6, 11, 16, and 18) and a bivalent vaccine (covering HPV strains 16 and 18) are available for use in the United States. HPV types 16 and 18 cause cervical and anal cancer; reduction in these HPV types could prevent >70% of cervical cancers worldwide. HPV types 6 and 11 cause genital papillomas. For individuals not previously infected with these HPV strains, the vaccines demonstrate high efficacy in preventing persistent strain-specific HPV infections; however, the trials and substudies that evaluated the vaccines’ ability to prevent cervical and anal cancer relied on surrogate outcome measures (cervical or anal intraepithelial neoplasia [CIN/AIN] I, II, and III), and the degree of durability of the immune response beyond 5 years is not currently known. The vaccines do not appear to impact preexisting infections, and efficacy appears to be markedly lower for populations that had previously been exposed to vaccine-specific HPV strains. The vaccine is recommended in the United States for females and males age 9–26 years. Some organs in some individuals are at such high risk of developing cancer that surgical removal of the organ at risk may be considered. 
Women with severe cervical dysplasia are treated with laser or loop electrosurgical excision or conization and occasionally even hysterectomy. Colectomy is used to prevent colon cancer in patients with familial polyposis or ulcerative colitis. Prophylactic bilateral mastectomy may be chosen for breast cancer prevention among women with genetic predisposition to breast cancer. In a prospective series of 139 women with BRCA1 and BRCA2 mutations, 76 chose to undergo prophylactic mastectomy and 63 chose close surveillance. At 3 years, no cases of breast cancer had been diagnosed in those opting for surgery, but eight patients in the surveillance group had developed breast cancer. A larger (n = 639) retrospective cohort study reported that three patients developed breast cancer after prophylactic mastectomy compared with an expected incidence of 30–53 cases: a 90–94% reduction in breast cancer risk. Postmastectomy breast cancer–related deaths were reduced by 81–94% for high-risk women compared with sister controls and by 100% for moderate-risk women when compared with expected rates. Prophylactic oophorectomy may also be employed for the prevention of ovarian and breast cancers among high-risk women. A prospective cohort study evaluating the outcomes of BRCA mutation carriers demonstrated a statistically significant association between prophylactic oophorectomy and a reduced incidence of ovarian or primary peritoneal cancer (36% relative risk reduction, or a 4.5% absolute difference). Studies of prophylactic oophorectomy for prevention of breast cancer in women with genetic mutations have shown relative risk reductions of approximately 50%; the risk reduction may be greatest for women having the procedure at younger (i.e., <50 years) ages. 
All of the evidence concerning the use of prophylactic mastectomy and oophorectomy for prevention of breast and ovarian cancer in high-risk women has been observational; such studies are prone to a variety of biases, including case selection bias, family relationships between patients and controls, and inadequate information about hormone use. Thus, they may overestimate the magnitude of benefit. Screening is a means of detecting disease early in asymptomatic individuals, with the goal of decreasing morbidity and mortality. While screening can potentially reduce disease-specific deaths and has been shown to do so in cervical, colon, lung, and breast cancer, it is also subject to a number of biases that can suggest a benefit when actually there is none. Biases can even mask net harm. Early detection does not in itself confer benefit. Cause-specific mortality, rather than survival after diagnosis, is the preferred endpoint (see below). Because screening is done on asymptomatic, healthy persons, it should offer substantial likelihood of benefit that outweighs harm. Screening tests should be carefully evaluated before their use is widely encouraged in screening programs as a matter of public policy. A large and increasing number of genetic mutations and nucleotide polymorphisms have been associated with an increased risk of cancer. Testing for these genetic mutations could in theory define a high-risk population. However, most of the identified mutations have very low penetrance and individually provide minimal predictive accuracy. The ability to predict the development of a particular cancer may someday present therapeutic options as well as ethical dilemmas. It may eventually allow for early intervention to prevent a cancer or limit its severity. People at high risk may be ideal candidates for chemoprevention and screening; however, efficacy of these interventions in the high-risk population should be investigated. 
Currently, persons at high risk for a particular cancer can engage in intensive screening. While this course is clinically reasonable, it is not known if it reduces mortality in these populations. 

TABLE 100-2
Sensitivity: The proportion of persons with the condition who test positive: a/(a + c)
Specificity: The proportion of persons without the condition who test negative: d/(b + d)
Positive predictive value (PPV): The proportion of persons with a positive test who have the condition: a/(a + b)
Negative predictive value: The proportion of persons with a negative test who do not have the condition: d/(c + d)
Prevalence, sensitivity, and specificity determine PPV.
aFor diseases of low prevalence, such as cancer, poor specificity has a dramatic adverse effect on PPV such that only a small fraction of positive tests are true positives. 

The Accuracy of Screening A screening test’s accuracy or ability to discriminate disease is described by four indices: sensitivity, specificity, positive predictive value, and negative predictive value (Table 100-2). Sensitivity, also called the true-positive rate, is the proportion of persons with the disease who test positive in the screen (i.e., the ability of the test to detect disease when it is present). Specificity, or 1 minus the false-positive rate, is the proportion of persons who do not have the disease who test negative in the screening test (i.e., the ability of a test to correctly identify that the disease is not present). The positive predictive value is the proportion of persons who test positive who actually have the disease. Similarly, negative predictive value is the proportion testing negative who do not have the disease. The sensitivity and specificity of a test are independent of the underlying prevalence (or risk) of the disease in the population screened, but the predictive values depend strongly on the prevalence of the disease. 
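The prevalence dependence of the predictive values can be made concrete with a small sketch. The function below is illustrative (not from the text): it fills in the a/b/c/d cells of the standard 2×2 table for a hypothetical screened population with assumed sensitivity and specificity, then computes PPV and NPV:

```python
# Illustrative sketch: how prevalence drives positive predictive value (PPV)
# for a screening test with fixed sensitivity and specificity, using the
# a/b/c/d cells of the standard 2x2 table. The 90%/96% test characteristics
# and the population size are assumed values for demonstration only.

def screening_indices(sensitivity, specificity, prevalence, n=1_000_000):
    """Return (a, b, c, d, ppv, npv) for a hypothetical screened population."""
    diseased = prevalence * n
    healthy = n - diseased
    a = sensitivity * diseased     # true positives
    c = diseased - a               # false negatives
    d = specificity * healthy      # true negatives
    b = healthy - d                # false positives
    ppv = a / (a + b)              # proportion of positives with disease
    npv = d / (c + d)              # proportion of negatives without disease
    return a, b, c, d, ppv, npv

# Same test, three prevalences: PPV collapses as the disease becomes rare,
# while sensitivity and specificity stay fixed.
for prev in (0.30, 0.01, 0.001):
    *_, ppv, npv = screening_indices(0.90, 0.96, prev)
    print(f"prevalence {prev:>6.1%}: PPV {ppv:.1%}, NPV {npv:.2%}")
```

With these assumed test characteristics, PPV falls from roughly 90% at 30% prevalence to about 2% at 0.1% prevalence: at cancer-like prevalences, the overwhelming majority of positive screens are false positives, which is exactly the point of the table's footnote.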
Screening is most beneficial, efficient, and economical when the target disease is common in the population being screened. Specificity is at least as important to the ultimate feasibility and success of a screening test as sensitivity. Potential Biases of Screening Tests Common biases of screening are lead time, length-biased sampling, and selection. These biases can make a screening test seem beneficial when actually it is not (or even causes net harm). Whether beneficial or not, screening can create the false impression of an epidemic by increasing the number of cancers diagnosed. It can also produce a shift in the proportion of patients diagnosed at an early stage and inflate survival statistics without reducing mortality (i.e., the number of deaths from a given cancer relative to the number of those at risk for the cancer). In such a case, the apparent duration of survival (measured from date of diagnosis) increases without lives being saved or life expectancy changed. Lead-time bias occurs whether or not a test influences the natural history of the disease; the patient is merely diagnosed at an earlier date. Survival appears increased even if life is not really prolonged. The screening test only prolongs the time the subject is aware of the disease and spends as a patient. Length-biased sampling occurs because screening tests generally can more easily detect slow-growing, less aggressive cancers than fast-growing cancers. Cancers diagnosed due to the onset of symptoms between scheduled screenings are on average more aggressive, and treatment outcomes are not as favorable. An extreme form of length-biased sampling is termed overdiagnosis, the detection of “pseudo disease.” The reservoir of some undetected slow-growing tumors is large. Many of these tumors fulfill the histologic criteria of cancer but will never become clinically significant or cause death. 
This problem is compounded by the fact that the most common cancers appear most frequently at ages when competing causes of death are more frequent. Selection bias must be considered in assessing the results of any screening effort. The population most likely to seek screening may differ from the general population to which the screening test might be applied. In general, volunteers for studies are more health conscious and likely to have a better prognosis or lower mortality rate, irrespective of the screening result. This is termed the healthy volunteer effect. Potential Drawbacks of Screening Risks associated with screening include harm caused by the screening intervention itself, harm due to the further investigation of persons with positive tests (both true and false positives), and harm from the treatment of persons with a true-positive result, whether or not life is extended by treatment (e.g., even if a screening test reduces relative cause-specific mortality by 20–30%, 70–80% of those diagnosed still go on to die of the target cancer). The diagnosis and treatment of cancers that would never have caused medical problems can lead to the harm of unnecessary treatment and give patients the anxiety of a cancer diagnosis. The psychosocial impact of cancer screening can also be substantial when applied to the entire population. Assessment of Screening Tests Good clinical trial design can offset some biases of screening and demonstrate the relative risks and benefits of a screening test. A randomized controlled screening trial with cause-specific mortality as the endpoint provides the strongest support for a screening intervention. Overall mortality should also be reported to detect an adverse effect of screening and treatment on other disease outcomes (e.g., cardiovascular disease). In a randomized trial, two like populations are randomly established. 
One is given the usual standard of care (which may be no screening at all) and the other receives the screening intervention being assessed. The two populations are compared over time. Efficacy for the population studied is established when the group receiving the screening test has a better cause-specific mortality rate than the control group. Studies showing a reduction in the incidence of advanced-stage disease, improved survival, or a stage shift are weaker (and possibly misleading) evidence of benefit. These latter criteria are early indicators but not sufficient to establish the value of a screening test. Although a randomized, controlled screening trial provides the strongest evidence to support a screening test, it is not perfect. Unless the trial is population-based, it does not remove the question of generalizability to the target population. Screening trials generally involve thousands of persons and last for years. Less definitive study designs are therefore often used to estimate the effectiveness of screening practices. However, every nonrandomized study design is subject to strong confounders. In descending order of strength, evidence may also be derived from the findings of internally controlled trials using intervention allocation methods other than randomization (e.g., allocation by birth date, date of clinic visit); the findings of analytic observational studies; or the results of multiple time series studies with or without the intervention. Screening for Specific Cancers Screening for cervical, colon, and breast cancer is beneficial for certain age groups. Depending on age and smoking history, lung cancer screening can also be beneficial in specific settings. Special surveillance of those at high risk for a specific cancer because of a family history or a genetic risk factor may be prudent, but few studies have assessed the influence on mortality. 
A number of organizations have considered whether or not to endorse routine use of certain screening tests. Because these groups have not used the same criteria to judge whether a screening test should be endorsed, they have arrived at different recommendations. The American Cancer Society (ACS) and the U.S. Preventive Services Task Force (USPSTF) publish screening guidelines (Table 100-3); the American Academy of Family Physicians (AAFP) generally follows and endorses the USPSTF recommendations; and the American College of Physicians (ACP) develops recommendations based on structured reviews of other organizations’ guidelines. Breast Cancer Breast self-examination, clinical breast examination by a caregiver, mammography, and magnetic resonance imaging (MRI) have all been variably advocated as useful screening tools. A number of trials have suggested that annual or biennial screening with mammography or mammography plus clinical breast examination in normal-risk women older than age 50 years decreases breast cancer mortality. Each trial has been criticized for design flaws. In most trials, breast cancer mortality rate is decreased by 15–30%. Experts disagree on whether average-risk women age 40–49 years should receive regular screening (Table 100-3). The U.K. Age Trial, the only randomized trial of breast cancer screening to specifically evaluate the impact of mammography in women age 40–49 years, found no statistically significant difference in breast cancer mortality for screened women versus controls after about 11 years of follow-up (relative risk 0.83; 95% confidence interval 0.66–1.04); however, <70% of women received screening in the intervention arm, potentially diluting the observed effect. A meta-analysis of eight large randomized trials showed a 15% relative reduction in mortality (relative risk 0.85; 95% confidence interval 0.75–0.96) from mammography screening for women age 39–49 years after 11–20 years of follow-up. 
This is equivalent to a number needed to invite to screening of 1904 over 10 years to prevent one breast cancer death. At the same time, nearly half of women age 40–49 years screened annually will have false-positive mammograms necessitating further evaluation, often including biopsy. Estimates of overdiagnosis range from 10 to 40% of diagnosed invasive cancers. In the United States, widespread screening over the last several decades has not been accompanied by a reduction in incidence of metastatic breast cancer despite a large increase in early-stage disease, suggesting a substantial amount of overdiagnosis at the population level. No study of breast self-examination has shown it to decrease mortality. A randomized controlled trial of approximately 266,000 women in China demonstrated no difference in breast cancer mortality between a group that received intensive breast self-exam instruction and reinforcement/reminders and controls at 10 years of follow-up. However, more benign breast lesions were discovered and more breast biopsies were performed in the self-examination arm. Genetic screening for BRCA1 and BRCA2 mutations and other markers of breast cancer risk has identified a group of women at high risk for breast cancer. Unfortunately, when to begin and the optimal frequency of screening have not been defined. Mammography is less sensitive at detecting breast cancers in women carrying BRCA1 and BRCA2 mutations, possibly because such cancers occur in younger women, in whom mammography is known to be less sensitive. MRI screening may be more sensitive than mammography in women at high risk due to genetic predisposition or in women with very dense breast tissue, but specificity may be lower. An increase in overdiagnosis may accompany the higher sensitivity. The impact of MRI on breast cancer mortality with or without concomitant use of mammography has not been evaluated in a randomized controlled trial. 
Cervical Cancer Screening with Papanicolaou (Pap) smears decreases cervical cancer mortality. The cervical cancer mortality rate has fallen substantially since the Pap smear came into widespread use. With the onset of sexual activity comes the risk of sexual transmission of human papillomavirus (HPV), the fundamental etiologic factor for cervical cancer. Screening guidelines recommend regular Pap testing for all women who have reached the age of 21 (before this age, even in individuals who have begun sexual activity, screening may cause more harm than benefit). The recommended interval for Pap screening is 3 years. Screening more frequently adds little benefit but leads to important harms, including unnecessary procedures and overtreatment of transient lesions. Beginning at age 30, guidelines also offer the alternative of combined Pap smear and HPV testing for women. The screening interval for women who test normal using this approach may be lengthened to 5 years. An upper age limit at which screening ceases to be effective is not known, but women age 65 years or older with no abnormal results in the previous 10 years may choose to stop screening.
Screening should be discontinued in women who have undergone a hysterectomy for noncancerous reasons. Although the efficacy of the Pap smear in reducing cervical cancer mortality has never been directly confirmed in a randomized, controlled setting, a clustered randomized trial in India evaluated the impact of one-time cervical visual inspection and immediate colposcopy, biopsy, and/or cryotherapy (where indicated) versus counseling on cervical cancer deaths in women age 30–59 years. After 7 years of follow-up, the age-standardized rate of death due to cervical cancer was 39.6 per 100,000 person-years in the intervention group versus 56.7 per 100,000 person-years in controls.

TABLE 100-3 Screening Recommendations for Asymptomatic Normal-Risk Subjects^a

Breast self-examination
  USPSTF: “D”
  ACS: Women ≥20 years: breast self-examination is an option

Clinical breast examination
  USPSTF: Women ≥40 years: “I” (as a standalone without mammography)
  ACS: Women 20–39 years: perform every 3 years; Women ≥40 years: perform annually

Mammography
  USPSTF: Women 40–49 years: the decision should be an individual one, taking patient context/values into account (“C”); Women 50–74 years: every 2 years (“B”); Women ≥75 years: “I”
  ACS: Women ≥40 years: screen annually for as long as the woman is in good health

MRI
  ACS: Women with >20% lifetime risk of breast cancer: screen with MRI plus mammography annually; Women with 15–20% lifetime risk: discuss option of MRI plus mammography annually; Women with <15% lifetime risk: do not screen annually with MRI

Pap test
  USPSTF: Women 21–65 years: screen every 3 years (“A”); Women <21 years: “D”; Women >65 years with adequate, normal prior Pap screenings: “D”; Women after total hysterectomy for noncancerous causes: “D”
  ACS: Women 21–29 years: screen every 3 years; Women 30–65 years: acceptable approach to screen with cytology every 3 years (see HPV test below); Women <21 years: no screening; Women >65 years: no screening following adequate negative prior screening; Women after total hysterectomy for noncancerous causes: do not screen

HPV test
  USPSTF: Women 30–65 years: screen in combination with cytology every 5 years if the woman desires to lengthen the screening interval (see Pap test above) (“A”); Women <30 years: “D”; Women >65 years with adequate, normal prior Pap screenings: “D”; Women after total hysterectomy for noncancerous causes: “D”
  ACS: Women 30–65 years: preferred approach to screen with HPV and cytology co-testing every 5 years (see Pap test above); Women <30 years: do not use HPV testing; Women >65 years: no screening following adequate negative prior screening; Women after total hysterectomy for noncancerous causes: do not screen

Sigmoidoscopy
  USPSTF: Adults 50–75 years: every 5 years in combination with high-sensitivity FOBT every 3 years (“A”)^b; Adults 76–85 years: “C”; Adults ≥85 years: “D”
  ACS: Adults ≥50 years: screen every 5 years

Fecal occult blood testing (FOBT)
  USPSTF: Adults 50–75 years: annually, for high-sensitivity FOBT (“A”); Adults 76–85 years: “C”; Adults ≥85 years: “D”
  ACS: Adults ≥50 years: screen every year

Colonoscopy
  USPSTF: Adults 50–75 years: every 10 years (“A”); Adults 76–85 years: “C”; Adults ≥85 years: “D”
  ACS: Adults ≥50 years: screen every 10 years

Fecal DNA testing
  ACS: Adults ≥50 years: screen, but interval uncertain

Fecal immunochemical testing
  ACS: Adults ≥50 years: screen every year

CT colonography
  ACS: Adults ≥50 years: screen every 5 years

Low-dose CT (lung)
  USPSTF: Adults 55–80 years with a ≥30 pack-year smoking history, still smoking or have quit within the past 15 years: “B”; discontinue once a person has not smoked for 15 years or develops a health problem that substantially limits life expectancy or the ability to have curative lung surgery
  ACS: Men and women 55–74 years with ≥30 pack-year smoking history, still smoking or have quit within the past 15 years: discuss benefits, limitations, and potential harms of screening; only perform screening in facilities with the right type of CT scanner and with high expertise/specialists

CA-125/transvaginal ultrasound (ovarian)
  USPSTF: “D”
  ACS: There is no sufficiently accurate test proven effective in the early detection of ovarian cancer. For women at high risk of ovarian cancer and/or who have unexplained, persistent symptoms, the combination of CA-125 and transvaginal ultrasound with pelvic exam may be offered.

Prostate-specific antigen (PSA)
  USPSTF: Men, all ages: “D”
  ACS: Starting at age 50, men should talk to a doctor about the pros and cons of testing so they can decide if testing is the right choice for them. If African American or have a father or brother who had prostate cancer before age 65, men should have this talk starting at age 45. How often they are tested will depend on their PSA level.

Digital rectal examination (DRE)
  ACS: As for PSA; if men decide to be tested, they should have the PSA blood test with or without a rectal exam

Complete skin examination by clinician or patient
  ACS: Self-examination monthly; clinical exam as part of routine cancer-related checkup

^a Summary of the screening procedures recommended for the general population by the USPSTF and the ACS. These recommendations refer to asymptomatic persons who are not known to have risk factors, other than age or gender, for the targeted condition. ^b USPSTF lettered recommendations are defined as follows: “A”: the USPSTF recommends the service because there is high certainty that the net benefit is substantial; “B”: the USPSTF recommends the service because there is high certainty that the net benefit is moderate or moderate certainty that the net benefit is moderate to substantial; “C”: the USPSTF recommends selectively offering or providing this service to individual patients based on professional judgment and patient preferences; there is at least moderate certainty that the net benefit is small; “D”: the USPSTF recommends against the service because there is moderate or high certainty that the service has no net benefit or that the harms outweigh the benefits; “I”: the USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of the service. Abbreviations: ACS, American Cancer Society; USPSTF, U.S. Preventive Services Task Force.

Colorectal Cancer Fecal occult blood testing (FOBT), digital rectal examination (DRE), rigid and flexible sigmoidoscopy, colonoscopy, and computed tomography (CT) colonography have been considered for colorectal cancer screening. A meta-analysis of four randomized controlled trials demonstrated a 15% relative reduction in colorectal cancer mortality with FOBT. The sensitivity for FOBT is increased if specimens are rehydrated before testing, but at the cost of lower specificity. The false-positive rate for rehydrated FOBT is high; 1–5% of persons tested have a positive test. Only 2–10% of those with occult blood in the stool have cancer. The high false-positive rate of FOBT dramatically increases the number of colonoscopies performed. Fecal immunochemical tests appear to have higher sensitivity for colorectal cancer than nonrehydrated FOBT tests. Fecal DNA testing is an emerging testing modality; it may have increased sensitivity and comparable specificity to FOBT and could potentially reduce harms associated with follow-up of false-positive tests. The body of evidence on the operating characteristics and effectiveness of fecal DNA tests in reducing colorectal cancer mortality is limited.
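The mismatch between the positive-test rate and the cancer yield quoted above is a positive-predictive-value effect, which Bayes' rule makes explicit. A minimal sketch with assumed, illustrative operating characteristics (the prevalence, sensitivity, and specificity below are hypothetical values chosen for the example, not figures from the trials):

```python
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value via Bayes' rule:
    true positives / (true positives + false positives)."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Assumed illustrative values (not from the text): colorectal cancer
# prevalence 0.3% in the screened population, FOBT sensitivity 70%,
# specificity 96% (rehydration lowers specificity further).
print(round(ppv(0.003, 0.70, 0.96) * 100, 1))  # ~5% of positives are cancers
```

With these assumed numbers only about 5% of positive tests reflect cancer, consistent with the 2–10% range quoted above; lowering specificity, as rehydration does, drives the yield down further and multiplies follow-up colonoscopies.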
Two meta-analyses of five randomized controlled trials of sigmoidoscopy (i.e., the NORCCAP, SCORE, PLCO, Telemark, and U.K. trials) found an 18% relative reduction in colorectal cancer incidence and a 28% relative reduction in colorectal cancer mortality. Participant ages ranged from 50 to 74 years, with follow-up ranging from 6 to 13 years. Diagnosis of adenomatous polyps by sigmoidoscopy should lead to evaluation of the entire colon with colonoscopy. The most efficient interval for screening sigmoidoscopy is unknown, but an interval of 5 years is often recommended. Case-control studies suggest that intervals of up to 15 years may confer benefit; the U.K. trial demonstrated benefit with one-time screening. One-time colonoscopy detects ∼25% more advanced lesions (polyps >10 mm, villous adenomas, adenomatous polyps with high-grade dysplasia, invasive cancer) than one-time FOBT with sigmoidoscopy; comparative programmatic performance of the two modalities over time is not known. Perforation rates are about 3/1000 for colonoscopy and 1/1000 for sigmoidoscopy. Debate continues on whether colonoscopy is too expensive and invasive, and whether sufficient provider capacity exists, for it to be recommended as the preferred screening tool in standard-risk populations. Some observational studies suggest that the efficacy of colonoscopy in decreasing colorectal cancer mortality is primarily limited to the left side of the colon. CT colonography, if done at expert centers, appears to have a sensitivity for polyps ≥6 mm comparable to that of colonoscopy. However, the rate of extracolonic findings of abnormalities of uncertain significance that must nevertheless be worked up is high (∼15–30%); the long-term cumulative radiation risk of repeated colonography screenings is also a concern. Lung Cancer Chest x-ray and sputum cytology have been evaluated in several randomized lung cancer screening trials.
The most recent and largest (n = 154,901) of these, a substudy of the Prostate, Lung, Colorectal, and Ovarian (PLCO) cancer screening trial, found that, compared with usual care, annual chest x-ray did not reduce the risk of dying from lung cancer (relative risk 0.99; 95% confidence interval 0.87–1.22) after 13 years. Low-dose CT has also been evaluated in several randomized trials. The largest and longest of these, the National Lung Screening Trial (NLST), was a randomized controlled trial of screening for lung cancer in approximately 53,000 persons age 55–74 years with a 30+ pack-year smoking history. It demonstrated a statistically significant relative reduction of about 15–20% in lung cancer mortality in the CT arm compared to the chest x-ray arm (or about 3 fewer deaths per 1000 people screened with CT). However, the harms include the potential radiation risks associated with multiple scans, the discovery of incidental findings of unclear significance, and a high rate of false-positive test results. Both incidental findings and false-positive tests can lead to invasive diagnostic procedures associated with anxiety, expense, and complications (e.g., pneumo- or hemothorax after lung biopsy). The NLST was performed at experienced screening centers, and the balance of benefits and harms may differ in the community setting at less experienced centers. Ovarian Cancer Adnexal palpation, transvaginal ultrasound (TVUS), and serum CA-125 assay have been considered for ovarian cancer screening. A large randomized controlled trial has shown that an annual screening program of TVUS and CA-125 in average-risk women does not reduce deaths from ovarian cancer (relative risk 1.21; 95% confidence interval 0.99–1.48). Adnexal palpation was dropped early in the study because it did not detect any ovarian cancers that were not detected by either TVUS or CA-125.
The risks and costs associated with the high number of false-positive results are impediments to routine use of these modalities for screening. In the PLCO trial, 10% of participants had a false-positive result from TVUS or CA-125, and one-third of these women underwent a major surgical procedure; the ratio of surgeries to screen-detected ovarian cancers was approximately 20:1. Prostate Cancer The most common prostate cancer screening modalities are DRE and serum prostate-specific antigen (PSA) assay. An emphasis on PSA screening has caused prostate cancer to become the most common nonskin cancer diagnosed in American males. This disease is prone to lead-time bias, length bias, and overdiagnosis, and substantial debate continues among experts as to whether screening should be offered unless the patient specifically asks to be screened. Virtually all organizations stress the importance of informing men about the uncertainty regarding screening efficacy and the harms associated with screening. Prostate cancer screening clearly detects many asymptomatic cancers, but the ability to distinguish tumors that are lethal but still curable from those that pose little or no threat to health is limited, and randomized trials indicate that the effect of PSA screening on prostate cancer mortality across a population is, at best, small. Men older than age 50 years have a high prevalence of indolent, clinically insignificant prostate cancers (about 30–50% of men, increasing further as men age). Two major randomized controlled trials of the impact of PSA screening on prostate cancer mortality have been published. The PLCO Cancer Screening Trial was a multicenter U.S. trial that randomized almost 77,000 men age 55–74 years to receive either annual PSA testing for 6 years or usual care. At 13 years of follow-up, no statistically significant difference in the number of prostate cancer deaths was noted between the arms (rate ratio 1.09; 95% confidence interval 0.87–1.36).
Approximately 50% of men in the control arm received at least one PSA test during the trial, which may have diluted a small effect. The European Randomized Study of Screening for Prostate Cancer (ERSPC) was a multinational study that randomized approximately 182,000 men between age 50 and 74 years (with a predefined “core” screening group of men age 55–69 years) to receive PSA testing or no screening. Recruitment and randomization procedures, as well as the actual frequency of PSA testing, varied by country. After a median follow-up of 11 years, a 20% relative reduction in the risk of prostate cancer death in the screened arm was noted in the “core” screening group. The trial found that 1055 men would need to be invited to screening, and 37 cases of prostate cancer detected, to avert 1 death from prostate cancer. Of the seven countries included in the mortality analysis, two demonstrated statistically significant reductions in prostate cancer deaths, whereas five did not. There was also an imbalance in treatment between the two study arms, with a higher proportion of men with clinically localized cancer receiving radical prostatectomy in the screening arm and receiving it at experienced referral centers. Treatments for low-stage prostate cancer, such as surgery and radiation therapy, can cause significant morbidity, including impotence and urinary incontinence. In a trial conducted in the United States after the initiation of widespread PSA testing, random assignment to radical prostatectomy compared with “watchful waiting” did not result in a statistically significant decrease in prostate cancer deaths (absolute risk reduction 2.7%; 95% confidence interval −1.3 to 6.2%). Skin Cancer Visual examination of all skin surfaces by the patient or by a health care provider is used in screening for basal and squamous cell cancers and melanoma. No prospective randomized study has been performed to look for a mortality decrease.
Unfortunately, screening is associated with a substantial rate of overdiagnosis.

Chapter 101e Cancer Genetics
Pat J. Morin, Jeffrey M. Trent, Francis S. Collins, Bert Vogelstein

CANCER IS A GENETIC DISEASE Cancer arises through a series of somatic alterations in DNA that result in unrestrained cellular proliferation. Most of these alterations involve actual sequence changes in DNA (i.e., mutations). They may originate as a consequence of random replication errors, exposure to carcinogens (e.g., radiation), or faulty DNA repair processes. While most cancers arise sporadically, familial clustering of cancers occurs in certain families that carry a germline mutation in a cancer gene.

FIGURE 101e-2 Progressive somatic mutational steps in the development of colon carcinoma. The accumulation of alterations in a number of different genes results in the progression from normal epithelium through adenoma to full-blown carcinoma. Genetic instability (microsatellite or chromosomal) accelerates the progression by increasing the likelihood of mutation at each step. Patients with familial polyposis are already one step into this pathway, because they inherit a germline alteration of the APC gene. (The diagram shows normal epithelium progressing through early, intermediate, and late adenoma to carcinoma and metastasis, driven successively by APC inactivation or β-catenin activation, K-RAS or BRAF activation, SMAD4 or TGF-β II inactivation, p53 inactivation, and other alterations, with microsatellite instability (MIN) or chromosomal instability (CIN) acting throughout.) TGF, transforming growth factor.

The idea that cancer progression is driven by sequential somatic mutations in specific genes has only gained general acceptance in the past 25 years. Before the advent of the microscope, cancer was believed to be composed of aggregates of mucus or other noncellular matter.
By the middle of the nineteenth century, it became clear that tumors were masses of cells and that these cells arose from the normal cells of the tissue from which the cancer originated. However, the molecular basis for the uncontrolled proliferation of cancer cells was to remain a mystery for another century. During that time, a number of theories for the origin of cancer were postulated. The great biochemist Otto Warburg proposed the combustion theory of cancer, which stipulated that cancer was due to abnormal oxygen metabolism. In addition, some believed that all cancers were caused by viruses, and that cancer was in fact a contagious disease. In the end, observations of cancer occurring in chimney sweeps, studies of x-rays, and the overwhelming data demonstrating cigarette smoke as a causative agent in lung cancer, together with Ames’s work on chemical mutagenesis, provided convincing evidence that cancer originated through changes in DNA. Although the viral theory of cancer did not prove to be generally accurate (with the exception of human papillomaviruses, which can cause cervical and other cancers in humans), the study of retroviruses led to the discovery of the first human oncogenes in the late 1970s. Soon after, the study of families with genetic predisposition to cancer was instrumental in the discovery of tumor-suppressor genes. The field that studies the types of mutations, as well as the consequences of these mutations in tumor cells, is now known as cancer genetics. Nearly all cancers originate from a single cell; this clonal origin is a critical discriminating feature between neoplasia and hyperplasia. FIGURE 101e-1 Multistep clonal development of malignancy. In this diagram a series of five cumulative mutations (T1, T2, T3, T4, T5), each with a modest growth advantage acting alone, eventually results in a malignant tumor. Note that not all such alterations result in progression; for example, the T3 clone is a dead end.
The actual number of cumulative mutations necessary to transform from the normal to the malignant state is unknown in most tumors. (After P Nowell: Science 194:23, 1976, with permission.) Multiple cumulative mutational events are invariably required for the progression of a tumor from normal to fully malignant phenotype. The process can be seen as Darwinian microevolution in which, at each successive step, the mutated cells gain a growth advantage resulting in an increased representation relative to their neighbors (Fig. 101e-1). Based on observations of cancer frequency increases during aging, as well as molecular genetics work, it is believed that 5 to 10 accumulated mutations are necessary for a cell to progress from the normal to the fully malignant phenotype. We are beginning to understand the precise nature of the genetic alterations responsible for some malignancies and to get a sense of the order in which they occur. The best-studied example is colon cancer, in which analyses of DNA from tissues extending from normal colon epithelium through adenoma to carcinoma have identified some of the genes mutated in the process (Fig. 101e-2). Other malignancies are believed to progress in a similar stepwise fashion, although the order and identity of the genes affected may be different. TWO TYPES OF CANCER GENES: ONCOGENES AND TUMOR-SUPPRESSOR GENES There are two major types of cancer genes. The first type comprises genes that positively influence tumor formation and are known as oncogenes. The second type of cancer genes negatively impact tumor growth and have been named tumor-suppressor genes. Both oncogenes and tumor-suppressor genes exert their effects on tumor growth through their ability to control cell division (cell birth) or cell death (apoptosis), although the mechanisms can be extremely complex.
Oncogenes are tightly regulated in normal cells; in cancer cells they acquire mutations, and the mutations typically relieve this control and lead to increased activity of the gene products. This mutational event typically occurs in a single allele of the oncogene and acts in a dominant fashion. In contrast, the normal function of tumor-suppressor genes is usually to restrain cell growth, and this function is lost in cancer. Because of the diploid nature of mammalian cells, both alleles must be inactivated for a cell to completely lose the function of a tumor-suppressor gene, leading to a recessive mechanism at the cellular level. From these ideas and studies on the inherited form of retinoblastoma, Knudson and others formulated the two-hit hypothesis, which in its modern version states that both copies of a tumor-suppressor gene must be inactivated in cancer. There is a subset of tumor-suppressor genes, the caretaker genes, that do not affect cell growth directly, but rather control the ability of the cell to maintain the integrity of its genome. Cells with a deficiency in these genes have an increased rate of mutations throughout their genomes, including in oncogenes and tumor-suppressor genes. This “mutator” phenotype was first hypothesized by Loeb to explain how the multiple mutational events required for tumorigenesis can occur in the lifetime of an individual. A mutator phenotype has now been observed in some forms of cancer, such as those associated with deficiencies in DNA mismatch repair. The great majority of cancers, however, do not harbor repair deficiencies, and their rate of mutation is similar to that observed in normal cells. Many of these cancers, however, appear to harbor a different kind of genetic instability, affecting the loss or gain of whole chromosomes or large parts thereof (as explained in more detail below).
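Knudson's two-hit logic can be made concrete with order-of-magnitude arithmetic. Every number below is an assumed, illustrative value chosen only to show the scaling, not data from this chapter:

```python
# Illustrative two-hit arithmetic (all numbers are hypothetical assumptions).
# Suppose each allele of a tumor-suppressor gene is inactivated in a given
# cell lineage with probability p over a period of development, and that
# there are a fixed number of susceptible target-cell lineages.
p = 1e-5      # assumed per-allele inactivation probability (hypothetical)
cells = 1e7   # assumed number of susceptible cell lineages (hypothetical)

sporadic = cells * p * p   # both alleles must be hit independently: scales as p^2
inherited = cells * p      # one allele already mutant in the germline: scales as p

print(f"{sporadic:.4g}")   # expected affected lineages, sporadic case
print(f"{inherited:.4g}")  # vastly higher when the first hit is inherited
print(f"{inherited / sporadic:.4g}")  # ratio is 1/p
```

Because the sporadic case requires two independent events, its expected frequency scales with p², whereas an inherited first hit leaves only a single somatic event, scaling with p; this difference is why inherited retinoblastoma presents earlier and is often bilateral.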
Work by Peyton Rous in the early 1900s revealed that a chicken sarcoma could be transmitted from animal to animal in cell-free extracts, suggesting that cancer could be induced by an agent acting positively to promote tumor formation. The agent responsible for the transmission of the cancer was a retrovirus (Rous sarcoma virus, RSV) and the oncogene responsible was identified 75 years later as v-src. Other oncogenes were also discovered through their presence in the genomes of retroviruses that are capable of causing cancers in chickens, mice, and rats. The cellular homologues of these viral genes are called protooncogenes and are often targets of mutation or aberrant regulation in human cancer. Whereas many oncogenes were discovered because of their presence in retroviruses, other oncogenes, particularly those involved in translocations characteristic of particular leukemias and lymphomas, were isolated through genomic approaches. Investigators cloned the sequences surrounding the chromosomal translocations observed cytogenetically and then deduced the nature of the genes that were the targets of these translocations (see below). Some of these were oncogenes known from retroviruses (like ABL, involved in chronic myeloid leukemia [CML]), whereas others were new (like BCL2, involved in B cell lymphoma). In the normal cellular environment, protooncogenes have crucial roles in cell proliferation and differentiation. Table 101e-1 is a partial list of oncogenes known to be involved in human cancer. The normal growth and differentiation of cells is controlled by growth factors that bind to receptors on the surface of the cell. The signals generated by the membrane receptors are transmitted inside the cells through signaling cascades involving kinases, G proteins, and other regulatory proteins. 
Ultimately, these signals affect the activity of transcription factors in the nucleus, which regulate the expression of genes crucial in cell proliferation, cell differentiation, and cell death. Oncogene products have been found to function at critical steps in these pathways (Chap. 102e), and inappropriate activation of these pathways can lead to tumorigenesis. Point mutation is a common mechanism of oncogene activation. For example, mutations in one of the RAS genes (HRAS, KRAS, or NRAS) are present in up to 85% of pancreatic cancers and 45% of colon cancers but are less common in other cancer types, although they can occur at significant frequencies in leukemia, lung, and thyroid cancers. Remarkably—and in contrast to the diversity of mutations found in tumor-suppressor genes (see below)—most of the activated RAS genes contain point mutations in codons 12, 13, or 61 (these mutations reduce RAS GTPase activity, leading to constitutive activation of the mutant RAS protein). The restricted pattern of mutations observed in oncogenes compared to that of tumor-suppressor genes reflects the fact that gain-of-function mutations are less likely to occur than mutations that simply lead to loss of activity. Indeed, inactivation of a gene can in theory be accomplished through the introduction of a stop codon anywhere in the coding sequence, whereas activations require precise substitutions at residues that can somehow lead to an increase in the activity of the encoded protein. Importantly, the specificity of oncogene mutations provides diagnostic opportunities, as tests that identify mutations at defined positions are easier to design than tests aimed at detecting random changes in a gene. The second mechanism for activation of oncogenes is DNA sequence amplification, leading to overexpression of the gene product. 
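The diagnostic point above, that hotspot mutations need only a few defined positions to be checked, can be sketched as follows. The helper and the tumor sample are illustrative inventions; the reference codons shown are the usual human KRAS wild-type codons 12, 13, and 61, included here as an assumption:

```python
# Toy sketch of hotspot-mutation detection (hypothetical sample data).
# Assumed human KRAS wild-type codons: 12 = GGT (Gly), 13 = GGC (Gly),
# 61 = CAA (Gln).
REF_KRAS_CODONS = {12: "GGT", 13: "GGC", 61: "CAA"}

def hotspot_mutations(observed_codons: dict) -> list:
    """Return (codon_number, observed_codon) pairs differing from reference.
    Only the few defined hotspot positions need to be inspected."""
    return [(n, obs) for n, obs in observed_codons.items()
            if obs != REF_KRAS_CODONS[n]]

# Hypothetical tumor sample carrying a glycine-to-aspartate change at
# codon 12 (the classic G12D substitution).
sample = {12: "GAT", 13: "GGC", 61: "CAA"}
print(hotspot_mutations(sample))  # [(12, 'GAT')]
```

Contrast this with a tumor-suppressor gene, where an inactivating change could sit anywhere in the coding sequence, so the whole gene must be scanned rather than a handful of codons.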
This increase in DNA copy number may cause cytologically recognizable chromosome alterations referred to as homogeneous staining regions (HSRs) if integrated within chromosomes, or double minutes (dmins) if extrachromosomal. The recognition of DNA amplification is accomplished through various cytogenetic techniques such as comparative genomic hybridization (CGH) or fluorescence in situ hybridization (FISH), which allow the visualization of chromosomal aberrations using fluorescent dyes. In addition, noncytogenetic, microarray-based approaches are now available for identifying changes in copy number at high resolution. Newer short-tag–based sequencing approaches have been used to evaluate amplifications. When paired with next-generation sequencing, this approach offers the highest degree of resolution and quantification available. With both microarray and sequencing technologies, the entire genome can be surveyed for gains and losses of DNA sequences, thus pinpointing chromosomal regions likely to contain genes important in the development or progression of cancer. Numerous genes have been reported to be amplified in cancer. Several of these genes, including NMYC and LMYC, were identified through their presence within the amplified DNA sequences of a tumor and had homology to known oncogenes. Because the region amplified often includes hundreds of thousands of base pairs, multiple oncogenes may be amplified in a single amplicon in some cancers (particularly in sarcomas). Indeed, MDM2, GLI, CDK4, and SAS at chromosomal location 12q13-15 have been shown to be simultaneously amplified in several types of sarcomas and other tumors. Amplification of a cellular gene is often a predictor of poor prognosis; for example, ERBB2/HER2 and NMYC are often amplified in aggressive breast cancers and neuroblastoma, respectively. Chromosomal alterations provide important clues to the genetic changes in cancer. 
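Surveying the genome for copy-number gains from sequencing data reduces, at its simplest, to comparing tumor and normal read depth per genomic bin. A minimal sketch with hypothetical counts (the bin counts, threshold, and the assumption that counts are already depth-normalized are all simplifications; real pipelines are far more involved):

```python
import math

# Minimal sketch (hypothetical counts) of calling amplification from
# sequencing read depth. Assumes tumor and normal counts are already
# normalized for total sequencing depth.
def log2_ratios(tumor_counts, normal_counts):
    """Per-bin log2(tumor / normal) coverage ratio."""
    return [math.log2(t / n) for t, n in zip(tumor_counts, normal_counts)]

# Hypothetical 5-bin region; the middle bin is ~8-fold amplified in the tumor.
tumor  = [100, 105, 820, 98, 102]
normal = [100, 100, 100, 100, 100]
ratios = log2_ratios(tumor, normal)
amplified = [i for i, r in enumerate(ratios) if r > 2]  # >4-fold gain
print(amplified)  # [2]
```

The same per-bin comparison run genome-wide, with segmentation and noise modeling on top, is what lets microarray and sequencing approaches pinpoint amplicons such as the 12q13-15 region mentioned above.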
The chromosomal alterations in human solid tumors such as carcinomas are heterogeneous and complex and occur as a result of the frequent chromosomal instability (CIN) observed in these tumors (see below). In contrast, the chromosome alterations in myeloid and lymphoid tumors are often simple translocations, i.e., reciprocal transfers of chromosome arms from one chromosome to another. Consequently, many detailed and informative chromosome analyses have been performed on hematopoietic cancers. The breakpoints of recurring chromosome abnormalities usually occur at the site of cellular oncogenes. Table 101e-2 lists representative examples of recurring chromosome alterations in malignancy and the associated gene(s) rearranged or deregulated by the chromosomal rearrangement. Translocations are particularly common in lymphoid tumors, probably because these cell types have the capability to rearrange their DNA to generate antigen receptors. Indeed, antigen receptor genes are commonly involved in the translocations, implying that imperfect regulation of receptor gene rearrangement may be involved in the pathogenesis. An interesting example is Burkitt’s lymphoma, a B cell tumor characterized by a reciprocal translocation between chromosomes 8 and 14. Molecular analysis of Burkitt’s lymphomas demonstrated that the breakpoints occurred within or near the MYC locus on chromosome 8 and within the immunoglobulin heavy chain locus on chromosome 14, resulting in the transcriptional activation of MYC. Enhancer activation by translocation, although not universal, appears to play an important role in malignant progression. In addition to transcription factors and signal transduction molecules, translocation may result in the overexpression of cell cycle regulatory proteins such as cyclins and of proteins that regulate cell death.
The first reproducible chromosome abnormality detected in human malignancy was the Philadelphia chromosome of CML. This cytogenetic abnormality is generated by a reciprocal translocation that places the ABL oncogene on chromosome 9, encoding a tyrosine kinase, in proximity to the BCR (breakpoint cluster region) gene on chromosome 22. Figure 101e-3 illustrates the generation of the translocation and its protein product. The consequence of expression of the BCR-ABL gene product is the activation of signal transduction pathways leading to cell growth independent of normal external signals. Imatinib (marketed as Gleevec), a drug that specifically blocks the activity of Abl tyrosine kinase, has shown remarkable efficacy with little toxicity in patients with CML. It is hoped that knowledge of genetic alterations in other cancers will likewise lead to mechanism-based design and development of a new generation of chemotherapeutic agents.

FIGURE 101e-3 Specific translocation seen in chronic myeloid leukemia (CML). The Philadelphia chromosome (Ph) is derived from a reciprocal translocation between chromosomes 9 and 22, with the breakpoint joining the sequences of the ABL oncogene with the BCR gene. The fusion of these DNA sequences allows the generation of an entirely novel fusion protein with modified function.

Solid tumors are generally highly aneuploid, containing an abnormal number of chromosomes; these chromosomes also exhibit structural alterations such as translocations, deletions, and amplifications. These abnormalities are collectively referred to as chromosomal instability (CIN). Normal cells possess several cell cycle checkpoints, essentially quality-control requirements that have to be met before subsequent events are allowed to take place. The mitotic checkpoint, which ensures proper chromosome attachment to the mitotic spindle before allowing the sister chromatids to separate, is altered in certain cancers. The molecular basis of CIN remains unclear, although a number of mitotic checkpoint genes are found mutated or abnormally expressed in various tumors. The exact effects of these changes on the mitotic checkpoint are unknown, and both weakening and overactivation of the checkpoint have been proposed. The identification of the cause of CIN in tumors will likely be a formidable task, considering that several hundred genes are thought to control the mitotic checkpoint and other cellular processes ensuring proper chromosome segregation. Regardless of the mechanisms underlying CIN, the measurement of the number of chromosomal alterations present in tumors is now possible with both cytogenetic and molecular techniques, and several studies have shown that this information can be useful for prognostic purposes. In addition, because the mitotic checkpoint is essential for cellular viability, it may become a target for novel therapeutic approaches.

The first indication of the existence of tumor-suppressor genes came from experiments showing that fusion of mouse cancer cells with normal mouse fibroblasts led to a nonmalignant phenotype in the fused cells. The normal role of tumor-suppressor genes is to restrain cell growth, and the function of these genes is inactivated in cancer. The two major types of somatic lesions observed in tumor-suppressor genes during tumor development are point mutations and large deletions. Point mutations in the coding region of tumor-suppressor genes frequently lead to truncated protein products or otherwise nonfunctional proteins. Similarly, deletions lead to the loss of a functional product and sometimes encompass the entire gene or even the entire chromosome arm, leading to loss of heterozygosity (LOH) in the tumor DNA compared to the corresponding normal tissue DNA (Fig. 101e-4).
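The marker-based comparison underlying an LOH call can be sketched computationally: genotype microsatellite markers in matched normal and tumor DNA, and flag informative (heterozygous) loci where one allele has disappeared in the tumor. The following is a minimal illustration only; the marker names and genotypes are invented for the example.

```python
# Sketch of loss-of-heterozygosity (LOH) calling from microsatellite
# genotypes in matched normal/tumor samples. Genotypes are sets of
# observed alleles per marker; all data here are invented.

def call_loh(normal: dict, tumor: dict) -> dict:
    """Flag markers heterozygous in normal tissue that have lost one
    allele in the tumor (LOH); homozygous loci are noninformative."""
    calls = {}
    for marker, normal_alleles in normal.items():
        tumor_alleles = tumor.get(marker, set())
        informative = len(normal_alleles) == 2   # heterozygous in normal DNA
        lost = informative and len(tumor_alleles) == 1
        calls[marker] = "LOH" if lost else ("retained" if informative else "noninformative")
    return calls

normal_dna = {"A": {"A1", "A3"}, "B": {"B1", "B3"}}
tumor_dna  = {"A": {"A3"},       "B": {"B3"}}       # one allele lost at each marker

print(call_loh(normal_dna, tumor_dna))  # both markers show LOH
```

In practice many flanking markers are typed per locus, since only loci that are heterozygous in the patient's normal DNA are informative for LOH.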
LOH in tumor DNA is considered a hallmark for the presence of a tumor-suppressor gene at a particular chromosomal location, and LOH studies have been useful in the positional cloning of many tumor-suppressor genes. Gene silencing, an epigenetic change that leads to the loss of gene expression and occurs in conjunction with hypermethylation of the promoter and histone deacetylation, is another mechanism of tumor-suppressor gene inactivation. (An epigenetic modification refers to a change in the genome, heritable by cell progeny, that does not involve a change in the DNA sequence. The inactivation of the second X chromosome in female cells is an example of an epigenetic silencing that prevents gene expression from the inactivated chromosome.) During embryologic development, regions of chromosomes from one parent are silenced and gene expression proceeds from the chromosome of the other parent. For most genes, expression occurs from both alleles or randomly from one allele or the other. The preferential expression of a particular gene exclusively from the allele contributed by one parent is called parental imprinting and is thought to be regulated by covalent modifications of chromatin protein and DNA (often methylation) of the silenced allele. The role of epigenetic control mechanisms in the development of human cancer is unclear. However, a general decrease in the level of DNA methylation has been noted as a common change in cancer. In addition, numerous genes, including some tumor-suppressor genes, appear to become hypermethylated and silenced during tumorigenesis. VHL and p16INK4 are well-studied examples of such tumor-suppressor genes. Overall, epigenetic mechanisms may be responsible for reprogramming the expression of a large number of genes in cancer and, together with the mutation of specific genes, are likely to be crucial in the development of human malignancies. 
The use of drugs that can reverse epigenetic changes in cancer cells may represent a novel therapeutic option in certain cancers or premalignant conditions. For example, demethylating agents (azacitidine or decitabine) are now approved by the U.S. Food and Drug Administration (FDA) for the treatment of patients with high-risk myelodysplastic syndrome (MDS).

A small fraction of cancers occur in patients with a genetic predisposition. In these families, the affected individuals have a predisposing loss-of-function mutation in one allele of a tumor-suppressor gene. The tumors in these patients show a loss of the remaining normal allele as a result of somatic events (point mutations or deletions), in agreement with the two-hit hypothesis (Fig. 101e-4). Thus, most cells of an individual with an inherited loss-of-function mutation in a tumor-suppressor gene are functionally normal, and only the rare cells that develop a mutation in the remaining normal allele will exhibit uncontrolled regulation. Roughly 100 syndromes of familial cancer have been reported, although many are rare. The majority are inherited as autosomal dominant traits, although some of those associated with DNA repair abnormalities (xeroderma pigmentosum, Fanconi's anemia, ataxia telangiectasia) are autosomal recessive. Table 101e-3 shows a number of cancer predisposition syndromes and the responsible genes. The current paradigm states that the genes mutated in familial syndromes can also be targets for somatic mutations in sporadic (noninherited) tumors. The study of cancer syndromes has thus provided invaluable insights into the mechanisms of progression for many tumor types. This section examines the case of inherited colon cancer in detail, but similar lessons can be applied to many of the cancer syndromes listed in Table 101e-3. In particular, the study of inherited colon cancer will clearly illustrate the difference between two types of tumor-suppressor genes: the gatekeepers, which directly regulate the growth of tumors, and the caretakers, which, when mutated, lead to genetic instability and therefore act indirectly on tumor growth.

FIGURE 101e-4 Diagram of possible mechanisms for tumor formation in an individual with hereditary (familial) retinoblastoma. On the left is shown the pedigree of an affected individual who has inherited the abnormal (Rb) allele from her affected mother. The normal allele is shown as a (+). The four chromosomes of her two parents are drawn to indicate their origin. Flanking the retinoblastoma locus are microsatellite markers (A and B) also analyzed in this family. Markers A3 and B3 are on the chromosome carrying the retinoblastoma disease gene. Tumor formation results when the normal allele, which this patient inherited from her father, is inactivated. On the right are shown four possible ways in which this could occur (e.g., loss of the normal chromosome 13). In each case, the resulting chromosome 13 arrangement is shown, as well as the results of PCR typing using the microsatellite markers comparing normal tissue (N) with tumor tissue (T). Note that in the first three situations, the normal allele (B1) has been lost in the tumor tissue, which is referred to as loss of heterozygosity (LOH) at this locus.

Familial adenomatous polyposis (FAP) is a dominantly inherited colon cancer syndrome due to germline mutations in the adenomatous polyposis coli (APC) tumor-suppressor gene on chromosome 5. Patients with this syndrome develop hundreds to thousands of adenomas in the colon. Each of these adenomas has lost the normal remaining allele of APC but has not yet accumulated the required additional mutations to generate fully malignant cells (Fig. 101e-2). The loss of the second functional APC allele in tumors from FAP families often occurs through loss of heterozygosity.
However, out of these thousands of benign adenomas, several will invariably acquire further abnormalities and a subset will even develop into fully malignant cancers. APC is thus considered to be a gatekeeper for colon tumorigenesis: in the absence of mutation of this gatekeeper (or a gene acting within the same pathway), a colorectal tumor simply cannot form. Figure 101e-5 shows germline and somatic mutations found in the APC gene. The function of the APC protein is still not completely understood, but it likely provides differentiation and apoptotic cues to colonic cells as they migrate up the crypts. Defects in this process may lead to abnormal accumulation of cells that should normally undergo apoptosis.

In contrast to patients with FAP, patients with hereditary nonpolyposis colon cancer (HNPCC, or Lynch's syndrome) do not develop polyposis, but instead develop only one or a small number of adenomas that rapidly progress to cancer. Most HNPCC cases are due to mutations in one of four DNA mismatch repair genes (Table 101e-3), which are components of a repair system that is normally responsible for correcting errors in freshly replicated DNA. Germline mutations in MSH2 and MLH1 account for more than 90% of HNPCC cases, whereas mutations in MSH6 and PMS2 are much less frequent. When a somatic mutation inactivates the remaining wild-type allele of a mismatch repair gene, the cell develops a hypermutable phenotype characterized by profound genomic instability, especially for the short repeated sequences called microsatellites. This microsatellite instability (MSI) favors the development of cancer by increasing the rate of mutations in many genes, including oncogenes and tumor-suppressor genes (Fig. 101e-2). The mismatch repair genes can thus be considered caretakers.
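In the laboratory, MSI is detected by comparing the lengths of short repeat tracts in tumor versus matched normal DNA: novel allele lengths appearing in the tumor mark an unstable locus. The sketch below uses real marker names from the commonly used Bethesda panel, but the repeat lengths and the 30% call threshold are invented for illustration.

```python
# Toy microsatellite-instability (MSI) check: a locus is unstable if the
# tumor shows repeat lengths absent from the matched normal sample.
# A sample is called MSI-high when a sizable fraction of tested loci
# are unstable (the threshold here is illustrative, not a standard).

def unstable(normal_lengths: set, tumor_lengths: set) -> bool:
    """True if the tumor contains novel allele lengths at this locus."""
    return bool(tumor_lengths - normal_lengths)

def msi_status(loci: dict, threshold: float = 0.3) -> str:
    n_unstable = sum(unstable(n, t) for n, t in loci.values())
    return "MSI-high" if n_unstable / len(loci) >= threshold else "MSI-stable/low"

# (normal repeat lengths, tumor repeat lengths) per locus -- invented data
loci = {
    "BAT-25":  ({25}, {25, 21}),   # shortened allele appears in tumor
    "BAT-26":  ({26}, {26, 22}),
    "D2S123":  ({14, 16}, {14, 16}),
    "D5S346":  ({11, 13}, {11, 13}),
    "D17S250": ({9, 12}, {9, 12}),
}
print(msi_status(loci))  # 2 of 5 loci unstable -> "MSI-high"
```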
Interestingly, CIN can also be found in colon cancer, but MSI and CIN appear to be mutually exclusive, suggesting that they represent alternative mechanisms for the generation of a mutator phenotype in this cancer (Fig. 101e-2). Other cancer types rarely exhibit MSI, but most exhibit CIN. Although most autosomal dominant inherited cancer syndromes are due to mutations in tumor-suppressor genes (Table 101e-3), there are a few interesting exceptions. Multiple endocrine neoplasia type 2, a dominant disorder characterized by medullary carcinoma of the thyroid, parathyroid hyperplasia, and (in some pedigrees) pheochromocytoma, is due to gain-of-function mutations in the protooncogene RET on chromosome 10. Similarly, gain-of-function mutations in the tyrosine kinase domain of the MET oncogene lead to hereditary papillary renal carcinoma. Interestingly, loss-of-function mutations in the RET gene cause a completely different disease, Hirschsprung's disease (aganglionic megacolon [Chaps. 353 and 408]).

FIGURE 101e-5 Germline and somatic mutations in the tumor-suppressor gene APC. APC encodes a 2843-amino-acid protein with six major domains: an oligomerization region (O), armadillo repeats (ARM), 15-amino-acid repeats (15 aa), 20-amino-acid repeats (20 aa), a basic region, and a domain involved in binding EB1 and the Drosophila discs large homologue (E/D). Shown are the positions within the APC gene of a total of 650 somatic and 826 germline mutations (from the APC database at http://www.umd.be/APC). The vast majority of these mutations result in the truncation of the APC protein. Germline mutations are found to be relatively evenly distributed up to codon 1600 except for two mutation hotspots at amino acids 1061 and 1309, which together account for one-third of the mutations found in familial adenomatous polyposis (FAP) families. Somatic APC mutations in colon tumors cluster in an area of the gene known as the mutation cluster region (MCR). The location of the MCR suggests that the 20-amino-acid domain plays a crucial role in tumor suppression.

TABLE 101e-3 (concluding rows): Tuberous sclerosis — TSC1 (9q34) and TSC2 (16p13.3), AD; angiofibroma, renal angiomyolipoma. von Hippel–Lindau — VHL (3p25-26), AD; kidney, cerebellum, pheochromocytoma. Abbreviations: AD, autosomal dominant; AR, autosomal recessive.

Although the Mendelian forms of cancer have taught us much about the mechanisms of growth control, most forms of cancer do not follow simple patterns of inheritance. In many instances (e.g., lung cancer), a strong environmental contribution is at work. Even in such circumstances, however, some individuals may be more genetically susceptible to developing cancer, given the appropriate exposure, due to the presence of modifier alleles.

The discovery of cancer susceptibility genes raises the possibility of DNA testing to predict the risk of cancer in individuals of affected families. An algorithm for cancer risk assessment and decision making in high-risk families using genetic testing is shown in Fig. 101e-6. Once a mutation is discovered in a family, subsequent testing of asymptomatic family members can be crucial in patient management. A negative gene test in these individuals can prevent years of anxiety in the knowledge that their cancer risk is no higher than that of the general population. On the other hand, a positive test may lead to alteration of clinical management, such as increased frequency of cancer screening and, when feasible and appropriate, prophylactic surgery. Potential negative consequences of a positive test result include psychological distress (anxiety, depression) and discrimination, although the Genetic Information Nondiscrimination Act (GINA) makes it illegal for predictive genetic information to be used to discriminate in health insurance or employment. Testing should therefore not be conducted without counseling before and after disclosure of the test result. In addition, the decision to test should depend on whether effective interventions exist for the particular type of cancer to be tested. Despite these caveats, genetic cancer testing for some cancer syndromes already appears to have greater benefits than risks. Companies offer genetic testing for many of the cancer syndromes listed in Table 101e-3, including FAP (APC gene), hereditary breast and ovarian cancer syndrome (BRCA1 and BRCA2 genes), Lynch's syndrome (mismatch repair genes), Li-Fraumeni syndrome (TP53 gene), Cowden syndrome (PTEN gene), hereditary retinoblastoma (RB1 gene), and others.

Because of the inherent problems of genetic testing such as cost, specificity, and sensitivity, it is not yet appropriate to offer these tests to the general population. However, testing may be appropriate in some subpopulations with a known increased risk, even without a defined family history. For example, two mutations in the breast cancer susceptibility gene BRCA1, 185delAG and 5382insC, exhibit a sufficiently high frequency in the Ashkenazi Jewish population that genetic testing of an individual of this ethnic group may be warranted.

As noted above, it is important that genetic test results be communicated to families by trained genetic counselors, especially for high-risk, high-penetrance conditions such as the hereditary breast and ovarian cancer syndrome (BRCA1/BRCA2). To ensure that families clearly understand its advantages and disadvantages and the impact it may have on disease management and psyche, genetic testing should never be done before counseling. Significant expertise is needed to communicate the results of genetic testing to families. For example, one common mistake is to misinterpret the result of negative genetic tests. For many cancer predisposition genes, the sensitivity of genetic testing is less than 70% (i.e., of 100 kindreds tested, disease-causing mutations can be identified in 70 at most). Therefore, such testing should in general begin with an affected member of the kindred (the youngest family member still alive who has had the cancer of interest). If a mutation is not identified in this individual, then the test should be reported as noninformative (Fig. 101e-6) rather than negative (because it is possible that, for technical reasons, the mutation in this individual is not detectable by standard genetic assays). On the other hand, if a mutation can be identified in this individual, then testing of other family members can be performed, and the sensitivity of such subsequent tests will be 100% (because the mutation in the family is in this case known to be detectable by the method used).

MicroRNAs (miRNAs) are small noncoding RNAs 20–22 nucleotides in length that are involved in posttranscriptional gene regulation. Studies in chronic lymphocytic leukemia first suggested a link between miRNAs and cancer when miR-15 and miR-16 were found to be deleted or downregulated in the vast majority of tumors. Various miRNAs have since been found abnormally expressed in several human malignancies. Aberrant expression of miRNAs in cancer has been attributed to several mechanisms, such as chromosomal rearrangements, genomic copy number changes, epigenetic modifications, defects in the miRNA biogenesis pathway, and regulation by transcription factors. Somatic mutations of miRNAs have been identified in many cancers, but the exact functional consequences of these changes on cancer development remain to be determined. The SomamiR database (http://compbio.uthsc.edu/SomamiR) catalogs somatic and germline miRNA mutations that have been identified in cancer. Functionally, miRNAs have been suggested to contribute to tumorigenesis through their ability to regulate oncogenic signaling pathways. For example, miR-15 and miR-16 have been shown to target the BCL2 oncogene, leading to its downregulation in leukemic cells and apoptosis. As another example of miRNAs' involvement in oncogenic pathways, the p53 tumor suppressor can transcriptionally induce miR-34 following genotoxic stress, and this induction is important in mediating p53 function. The expression of miRNAs is extremely specific, and there is evidence that miRNA expression profiles reflect lineage and differentiation state and may be useful in cancer diagnosis and outcome prediction.

Certain human malignancies are associated with viruses. Examples include Burkitt's lymphoma (Epstein-Barr virus; Chap. 218), hepatocellular carcinoma (hepatitis viruses), cervical cancer (human papillomavirus [HPV]; Chap. 222), and T cell leukemia (retroviruses; Chap. 225e). The mechanisms of action of these viruses are varied but always involve activation of growth-promoting pathways or inhibition of tumor-suppressor products in the infected cells. For example, HPV proteins E6 and E7 bind and inactivate cellular tumor suppressors p53 and pRB, respectively. There are several HPV types, and some of these types have been associated with the development of several malignancies, including cervical, vulvar, vaginal, penile, anal, and oropharyngeal cancer. Viruses are not sufficient for cancer development, but constitute one alteration in the multistep process of cancer progression.

The tumorigenesis process, driven by alterations in tumor suppressors, oncogenes, and epigenetic regulation, is accompanied by changes in gene expression.
The advent of powerful techniques for high-throughput gene expression profiling, based on sequencing or microarrays, has allowed the comprehensive study of gene expression in neoplastic cells. It is indeed possible to identify the expression levels of thousands of genes expressed in normal and cancer tissues. Figure 101e-7 shows a typical microarray experiment examining gene expression in cancer. This global knowledge of gene expression allows the identification of differentially expressed genes and, in principle, the understanding of the complex molecular circuitry regulating normal and neoplastic behaviors. Such studies have led to molecular profiling of tumors, which has suggested general methods for distinguishing tumors of various biologic behaviors (molecular classification), elucidating pathways relevant to the development of tumors, and identifying molecular targets for the detection and therapy of cancer. The first practical applications of this technology have suggested that global gene expression profiling can provide prognostic information not evident from other clinical or laboratory tests. The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is a searchable online repository for expression profiling data. With the completion of the Human Genome Project and advances in sequencing technologies, systematic mutational analysis of the cancer genome has become possible. 
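The two-color microarray readout described for Fig. 101e-7 is conventionally summarized per spot as a log2 ratio of tumor (red) to normal (green) fluorescence: strongly positive ratios appear red, strongly negative ratios green, and near-zero ratios yellow. A minimal numeric sketch follows; the gene names, intensities, and 2-fold cutoff are invented for illustration.

```python
import math

# Sketch of two-color microarray interpretation: each spot's tumor (red)
# and normal (green) intensities are reduced to a log2 ratio and binned
# into the red/green/yellow calls described in the figure legend.

def classify_spot(tumor: float, normal: float, cutoff: float = 1.0) -> str:
    """Classify a spot by log2(tumor/normal); cutoff=1.0 means 2-fold."""
    ratio = math.log2(tumor / normal)
    if ratio >= cutoff:
        return "red (higher in tumor)"
    if ratio <= -cutoff:
        return "green (lower in tumor)"
    return "yellow (similar expression)"

# Invented fluorescence intensities for three spots
spots = {"MYC": (8000, 1000), "RB1": (500, 4000), "ACTB": (2100, 2000)}
for gene, (t, n) in spots.items():
    print(gene, classify_spot(t, n))
# MYC -> red, RB1 -> green, ACTB -> yellow
```

Clustering analysis, as mentioned in the figure legend, would then group samples or genes by the similarity of these per-spot ratios across many arrays.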
In fact, whole-genome sequencing of cancer cells is now possible, and this technology has the potential to revolutionize our approach to cancer prevention, diagnosis, and treatment.

FIGURE 101e-6 Algorithm for genetic testing in a family with cancer predisposition. Candidates for testing are patients (1) from a family with a known cancer syndrome, (2) from a family with a history of cancer, or (3) with early-onset cancer. After review of the family history to confirm/identify possible cancer syndromes and candidate genes, pretest counseling, and informed consent, the cancer patient is tested. Failure to identify a mutation ends the testing as noninformative; identification of a disease-causing mutation allows screening of asymptomatic family members. The key step is the identification of a mutation in a cancer patient, which allows testing of asymptomatic family members. Asymptomatic family members who test positive may require increased screening or surgery, whereas others are at no greater risk for cancer than the general population.

The International Cancer Genome Consortium (http://icgc.org/) was developed by leading cancer agencies worldwide, genome and cancer scientists, and statisticians with the goal of launching and coordinating cancer genomics research projects worldwide and disseminating the data. Hundreds of cancer genomes from at least 25 cancer types have been sequenced through various collaborative efforts. In addition, exome sequencing (sequencing all the coding regions of the genome) has also been performed on a large number of tumors. These sequencing data have been used to elucidate the mutational profile of cancer, including the identification of driver mutations that are functionally involved in tumor development.
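Distinguishing the small set of frequently mutated genes from the many infrequently mutated ones is, at its simplest, a matter of tallying per-gene mutation frequencies across a tumor cohort. The sketch below uses an invented four-tumor cohort and an arbitrary 50% frequency split purely for illustration.

```python
from collections import Counter

# Tally how often each gene is mutated across a set of tumors and split
# genes into frequently vs. infrequently mutated. All data are invented;
# GENE_A..GENE_D stand in for the many rarely mutated genes.

tumors = [
    {"TP53", "KRAS", "APC", "GENE_A"},
    {"TP53", "KRAS", "GENE_B"},
    {"TP53", "APC", "GENE_C"},
    {"TP53", "KRAS", "APC", "GENE_D"},
]

counts = Counter(g for t in tumors for g in t)   # gene -> number of mutated tumors
n = len(tumors)
frequent = sorted(g for g, c in counts.items() if c / n >= 0.5)
infrequent = sorted(g for g, c in counts.items() if c / n < 0.5)

print("frequently mutated:", frequent)      # ['APC', 'KRAS', 'TP53']
print("infrequently mutated:", infrequent)  # ['GENE_A', 'GENE_B', 'GENE_C', 'GENE_D']
```

Note that frequency alone does not establish that a mutation is a driver; the statistical analyses mentioned in the text additionally model background mutation rates and gene length.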
There are generally 40 to 100 genetic alterations that affect protein sequence in a typical cancer, although statistical analyses suggest that only 8–15 are functionally involved in tumorigenesis. The picture that emerges from these studies is that most genes found mutated in tumors are actually mutated at relatively low frequencies (<5%), whereas a small number of genes (such as p53 and KRAS) are mutated in a large proportion of tumors (Fig. 101e-8). In the past, the focus of research has been on the frequently mutated genes, but it appears that the large number of genes that are infrequently mutated in cancer are major contributors to the cancer phenotype. Understanding the signaling pathways altered by mutations in these genes, as well as the functional relevance of these different mutations, represents the next challenge in the field. Moreover, a detailed knowledge of the genes altered in a particular tumor may allow for a new era of personalized treatment in cancer medicine (see below).

A major effort in the United States, The Cancer Genome Atlas (http://cancergenome.nih.gov), is a coordinated project of the National Cancer Institute and the National Human Genome Research Institute to systematically characterize the entire spectrum of genomic changes involved in human cancers. Similarly, COSMIC (Catalogue of Somatic Mutations in Cancer) is an initiative from the Wellcome Trust Sanger Institute to store and display somatic mutation information and related details regarding human cancers (http://cancer.sanger.ac.uk/).

FIGURE 101e-7 A microarray experiment. RNA is prepared from cells, reverse transcribed to cDNA, and labeled with fluorescent dyes (typically green for normal cells and red for cancer cells). The fluorescent probes are mixed and hybridized to a cDNA array. Each spot on the array is an oligonucleotide (or cDNA fragment) that represents a different gene. The image is then captured with a fluorescence camera; red spots indicate higher expression in tumor cells compared with reference, while green spots represent lower expression in tumor cells. Yellow signals indicate equal expression levels in normal and tumor specimens. After clustering analysis of multiple arrays, the results are typically represented graphically using visualization software, which shows, for each sample, a color-coded representation of gene expression for every gene on the array.

PERSONALIZED CANCER TREATMENT BASED ON MOLECULAR PROFILES: PRECISION THERAPY

Gene expression profiling and genomewide sequencing approaches have allowed for an unprecedented understanding of cancer at the molecular level. It has been suggested that individualized knowledge of pathways or genes deregulated in a given tumor (personalized genomics) may provide a guide for therapeutic options for that tumor, thus leading to personalized therapy (also called precision medicine). Because tumor behavior is highly heterogeneous, even within a tumor type, personalized information-based medicine will likely supplement or perhaps one day supplant the current histology-based therapy, especially in the case of tumors resistant to conventional therapeutic approaches. Molecular nosology has revealed similarities in tumors of diverse histotype. The success of this approach will depend on the identification of sufficient actionable changes (mutations or pathways that can be targeted with a specific drug). Examples of currently actionable changes include mutations in BRAF (targeted by the drug vemurafenib) and RET (targeted by sunitinib and sorafenib), and ALK rearrangements (targeted by crizotinib). Interestingly, studies have reported that 20% of triple-negative breast cancers and 60% of lung cancers have potentially actionable genetic changes.
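At its simplest, matching a tumor's mutation profile against known actionable changes, as in the gene–drug pairings named above, is a lookup. The sketch below is deliberately simplified: real decision support weighs variant-level evidence (e.g., the specific BRAF mutation), not gene names alone, and the tumor profile shown is invented.

```python
# Toy "actionable alteration" lookup pairing altered genes with the
# targeted agents named in the text. Illustrative only: real matching
# is variant-specific and far more nuanced than a gene-name lookup.

ACTIONABLE = {
    "BRAF": ["vemurafenib"],
    "RET":  ["sunitinib", "sorafenib"],
    "ALK":  ["crizotinib"],
}

def suggest_agents(tumor_alterations: list) -> dict:
    """Return targeted agents for the actionable alterations present."""
    return {g: ACTIONABLE[g] for g in tumor_alterations if g in ACTIONABLE}

profile = ["TP53", "BRAF", "APC"]   # invented tumor mutation profile
print(suggest_agents(profile))      # {'BRAF': ['vemurafenib']}
```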
Gene expression also offers the potential to predict drug sensitivities as well as provide prognostic information. Commercial diagnostic tests, such as MammaPrint and Oncotype DX for breast cancer, are available to help patients and their physicians make treatment decisions. Personalized medicine is an exciting new avenue for cancer treatment based on matching the unique features of a tumor to an effective therapy, and this concept is in the process of changing our approach to cancer therapy in fundamental ways. On a cautionary note, gene expression can vary enormously within a single person's cancer and at different anatomic sites in the patient. We have not yet determined whether such clonal variation within an individual tumor will interfere with the goal of tailoring therapy to a particular patient's tumor.

FIGURE 101e-8 A two-dimensional map of genes mutated in colorectal cancer. The two-dimensional landscape represents the positions of the RefSeq genes along the chromosomes, and the height of the peaks represents the mutation frequency. On the top map, the taller peaks represent the genes that are commonly mutated in colon cancer, while the large number of smaller hills indicates the genes that are mutated at lower frequency. On the lower map, the mutations of two individual tumors are indicated. Note that there is little overlap between the mutated genes of the two colorectal tumors shown. These differences may represent the basis for the heterogeneity in terms of behavior and responsiveness to therapy observed in human cancer. (From LD Wood et al: Science 318:1108, 2007, with permission.)

A revolution in cancer genetics has occurred in the past 25 years. Identification of cancer genes has led to a deep understanding of the tumorigenesis process and has had important repercussions on all fields of cancer biology.
In particular, the advancement of powerful techniques for genomewide expression profiling and mutation analyses has provided a detailed picture of the molecular defects present in individual tumors. Individualized treatment based on the specific genetic alterations within a given tumor has already become possible. Although these advances have not yet translated into overall changes in cancer prevention, prognosis, or treatment, it is expected that breakthroughs in these areas will continue to emerge and be applicable to an ever-increasing number of cancers. phenotypiC CharaCteristiCs of MaLignant CeLLs Deregulated cell proliferation: Loss of function of negative growth regulators (tumor-suppressor genes, i.e., Rb, p53), and increased action of positive 102e Jeffrey W. Clark, Dan L. Longo growth regulators (oncogenes, i.e., Ras, Myc). Leads to aberrant cell cycle con- Cancers are characterized by unregulated cell division, avoidance of cell death, tissue invasion, and the ability to metastasize. A neoplasm is benign when it grows in an unregulated fashion without tissue invasion. The presence of unregulated growth and tissue invasion is characteristic of malignant neoplasms. Cancers are named based on their origin: those derived from epithelial tissue are called carcinomas, those derived from mesenchymal tissues are sarcomas, and those derived from hematopoietic tissue are leukemias, lymphomas, and plasma cell dyscrasias (including multiple myeloma). Cancers nearly always arise as a consequence of genetic alterations, the vast majority of which begin in a single cell and therefore are monoclonal in origin. However, because a wide variety of genetic and epigenetic changes can occur in different cells within malignant tumors over time, most cancers are characterized by marked heterogeneity in the populations of cells. 
This heterogeneity significantly complicates the treatment of most cancers because it is likely that there are subsets of cells that will be resistant to therapy and will therefore survive and proliferate even if the majority of cells are killed. A few cancers appear to, at least initially, be primarily driven by an alteration in a dominant gene that produces uncontrolled cell proliferation. Examples include chronic myeloid leukemia (abl), about half of melanomas (braf ), Burkitt’s lymphoma (c-myc), and subsets of lung adenocarcinomas (egfr, alk, ros1, and ret). The genes that can promote cell growth when altered are often called oncogenes. They were first identified as critical elements of viruses that cause animal tumors; it was subsequently found that the viral genes had normal counterparts with important functions in the cell and had been captured and mutated by viruses as they passed from host to host. However, the vast majority of human cancers are characterized by a multiple-step process involving many genetic abnormalities, each of which contributes to the loss of control of cell proliferation and differentiation and the acquisition of capabilities, such as tissue invasion, the ability to metastasize, and angiogenesis. These properties are not found in the normal adult cell from which the tumor is derived. Indeed, normal cells have a large number of safeguards against uncontrolled proliferation and invasion. Many cancers go through recognizable steps of progressively more abnormal phenotypes: hyperplasia, to adenoma, to dysplasia, to carcinoma in situ, to invasive cancer with the ability to metastasize (Table 102e-1). For most cancers, these changes occur over a prolonged period of time, usually many years. In most organs, only primitive undifferentiated cells are capable of proliferating and the cells lose the capacity to proliferate as they differentiate and acquire functional capability. 
The expansion of the primitive cells is linked to some functional need in the host through receptors that receive signals from the local environment or through hormonal and other influences delivered by the vascular supply. In the absence of such signals, the cells are at rest. The signals that keep the primitive cells at rest remain incompletely understood. These signals must be environmental, based on the observations that a regenerating liver stops growing when it has replaced the portion that was surgically removed after partial hepatectomy and that regenerating bone marrow stops growing when the peripheral blood counts return to normal. Cancer cells clearly have lost responsiveness to such controls and do not recognize when they have overgrown the niche normally occupied by the organ from which they are derived. A better understanding of the mechanisms of growth regulation is evolving. Normal cells have a number of control mechanisms that are targeted by specific genetic alterations in cancer. Critical proteins in these control processes that are frequently mutated or otherwise inactivated in cancers are called tumor-suppressor genes. Examples include p53 and Rb (discussed below).

TABLE 102e-1
…control and includes loss of normal checkpoint responses.
Failure to differentiate: Arrest at a stage before terminal differentiation. May retain stem cell properties. (Frequently observed in leukemias due to transcriptional repression of developmental programs by the gene products of chromosomal translocations.)
Loss of normal apoptosis pathways: Inactivation of p53, increases in Bcl-2 family members. This defect enhances the survival of cells with oncogenic mutations and genetic instability and allows clonal expansion and diversification within the tumor without activation of physiologic cell death pathways.
Genetic instability: Defects in DNA repair pathways leading to either single-nucleotide or oligonucleotide mutations (as in microsatellite instability, MIN) or, more commonly, chromosomal instability (CIN) leading to aneuploidy. Caused by loss of function of p53, BRCA1/2, mismatch repair genes, DNA repair enzymes, and the spindle checkpoint. Leads to accumulation of a variety of mutations in different cells within the tumor and heterogeneity.
Loss of replicative senescence: Normal cells stop dividing in vitro after 25–50 population doublings. Arrest is mediated by the Rb, p16INK4a, and p53 pathways. Further replication leads to telomere loss, with crisis. Surviving cells often harbor gross chromosomal abnormalities. Relevance to human in vivo cancer remains uncertain. Many human cancers express telomerase.
Nonresponsiveness to external growth-inhibiting signals: Cancer cells have lost responsiveness to signals normally present to stop proliferating when they have overgrown the niche normally occupied by the organ from which they are derived. We know very little about this mechanism of growth regulation.
Increased angiogenesis: Due to increased gene expression of proangiogenic factors (VEGF, FGF, IL-8) by tumor or stromal cells, or loss of negative regulators (endostatin, tumstatin, thrombospondin).
Invasion: Loss of cell-cell contacts (gap junctions, cadherins) and increased production of matrix metalloproteinases (MMPs). Often takes the form of epithelial-to-mesenchymal transition (EMT), with anchored epithelial cells becoming more like motile fibroblasts.
Metastasis: Spread of tumor cells to lymph nodes or distant tissue sites. Limited by the ability of tumor cells to survive in a foreign environment.
Evasion of the immune system: Downregulation of MHC class I and II molecules; induction of T cell tolerance; inhibition of normal dendritic cell and/or T cell function; antigenic loss variants and clonal heterogeneity; increase in regulatory T cells.
Shift in cell metabolism: Energy generation shifts to aerobic glycolysis.
Abbreviations: FGF, fibroblast growth factor; IL, interleukin; MHC, major histocompatibility complex; VEGF, vascular endothelial growth factor.

The progression of a cell through the cell division cycle is regulated at a number of checkpoints by a wide array of genes. In the first phase, G1, preparations are made to replicate the genetic material. The cell stops before entering the DNA synthesis (S) phase to take inventory. Are we ready to replicate our DNA? Is the DNA repair machinery in place to fix any mutations that are detected? Are the DNA-replicating enzymes available? Is there an adequate supply of nucleotides? Is there sufficient energy? The main brake on the process is the retinoblastoma protein, Rb. When the cell determines that it is prepared to move ahead, sequential activation of cyclin-dependent kinases (CDKs) results in the inactivation of the brake, Rb, by phosphorylation. Phosphorylated Rb releases the S phase–regulating transcription factor E2F/DP1, and genes required for S phase progression are expressed. If the cell determines that it is not ready to move ahead with DNA replication, a number of inhibitors can block the action of the CDKs, including p21Cip1/Waf1, p16Ink4a, and p27Kip1. Nearly every cancer has one or more genetic lesions in the G1 checkpoint that permit progression to S phase. At the end of S phase, when the cell has exactly duplicated its DNA content, a second inventory is taken at the S checkpoint. Have all of the chromosomes been fully duplicated? Were any segments of DNA copied more than once? Do we have the right number of chromosomes and the right amount of DNA? If so, the cell proceeds to G2, in which it prepares for division by synthesizing the mitotic spindle and other proteins needed to produce two daughter cells. When DNA damage is detected, the p53 pathway is normally activated. Called the guardian of the genome, p53 is a transcription factor that is normally present in the cell at very low levels.

FIGURE 102e-1 Induction of p53 by the DNA damage and oncogene checkpoints. In response to noxious stimuli, p53 and mdm2 are phosphorylated by the ataxia-telangiectasia mutated (ATM) and related (ATR) serine/threonine kinases, as well as the immediate downstream checkpoint kinases, Chk1 and Chk2. This causes dissociation of p53 from mdm2, leading to increased p53 protein levels and transcription of genes leading to cell cycle arrest (p21Cip1/Waf1) or apoptosis (e.g., the proapoptotic Bcl-2 family members Noxa and Puma). Inducers of p53 include hypoxia, DNA damage (caused by ultraviolet radiation, gamma irradiation, or chemotherapy), ribonucleotide depletion, and telomere shortening. A second mechanism of p53 induction is activated by oncogenes such as Myc, which promote aberrant G1/S transition. This pathway is regulated by a second product of the Ink4a locus, p14ARF (p19 in mice), which is encoded by an alternative reading frame of the same stretch of DNA that codes for p16Ink4a. Levels of ARF are upregulated by Myc and E2F, and ARF binds to mdm2 and rescues p53 from its inhibitory effect. This oncogene checkpoint leads to the death or senescence (an irreversible arrest in G1 of the cell cycle) of renegade cells that attempt to enter S phase without appropriate physiologic signals. Senescent cells have been identified in patients whose premalignant lesions harbor activated oncogenes, for instance, dysplastic nevi that encode an activated form of BRAF (see below), demonstrating that induction of senescence is a protective mechanism that operates in humans to prevent the outgrowth of neoplastic cells.
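The damage-response branch described above amounts to a small decision rule: p53 stays low while bound by mdm2, ATM signaling frees it, modest damage triggers arrest and repair, and severe damage triggers apoptosis. A toy sketch in Python; the damage score and the 0.7 threshold are illustrative assumptions, not measured quantities:

```python
# Schematic toy model of the p53 DNA-damage checkpoint. All names and
# thresholds are illustrative assumptions, not biochemical measurements.

def p53_checkpoint(dna_damage: float, mdm2_bound: bool, atm_active: bool) -> str:
    """Return the cell's fate under a simplified p53 decision rule.

    dna_damage: arbitrary 0..1 severity score (assumption).
    mdm2_bound: whether p53 is still sequestered by mdm2.
    atm_active: whether ATM has phosphorylated mdm2, freeing p53.
    """
    if dna_damage == 0:
        return "proceed"              # no damage: continue the cycle
    if mdm2_bound and not atm_active:
        return "unchecked"            # p53 kept low/degraded; damage propagates
    # ATM signaling dissociates p53 from mdm2; p53 accumulates and acts.
    if dna_damage < 0.7:
        return "arrest-and-repair"    # p21-mediated cell cycle arrest
    return "apoptosis"                # damage too great: cell death
```

The "unchecked" branch corresponds to a p53-pathway lesion: damage is present but the guardian never accumulates.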
Its level is generally regulated through its rapid turnover. Normally, p53 is bound to mdm2, a ubiquitin ligase that both inhibits p53's transcriptional activity and targets p53 for degradation in the proteasome. When damage is sensed, the ATM (ataxia-telangiectasia mutated) pathway is activated; ATM phosphorylates mdm2, which no longer binds to p53, and p53 then stops cell cycle progression, directs the synthesis of repair enzymes, or, if the damage is too great, initiates apoptosis to prevent the propagation of a damaged cell (Fig. 102e-1). A second method of activating p53 involves the induction of p14ARF by hyperproliferative signals from oncogenes. p14ARF competes with p53 for binding to mdm2, allowing p53 to escape the effects of mdm2 and accumulate in the cell. Then p53 stops cell cycle progression by activating CDK inhibitors such as p21 and/or initiating the apoptosis pathway. Not surprisingly, given its critical role in controlling cell cycle progression, mutations in the gene for p53 on chromosome 17p are found in more than 50% of human cancers. Most commonly, these mutations are acquired in the malignant tissue in one allele, and the second allele is deleted, leaving the cell unprotected from DNA-damaging agents or oncogenes. Some environmental exposures produce signature mutations in p53; for example, aflatoxin exposure produces an arginine-to-serine mutation at codon 249 and is associated with hepatocellular carcinoma. In rare instances, p53 mutations are in the germline (Li-Fraumeni syndrome) and produce a familial cancer syndrome. The absence of p53 leads to chromosome instability and the accumulation of DNA damage, including the acquisition of properties that give the abnormal cell a proliferative and survival advantage. As with Rb dysfunction, most cancers have mutations that disable the p53 pathway. Indeed, the importance of p53 and Rb in the development of cancer is underscored by the neoplastic transformation mechanism of human papillomavirus.
This virus has two main oncogenes, E6 and E7. E6 acts to increase the rapid turnover of p53, and E7 acts to inhibit Rb function; inhibition of these two targets is required for transformation of epithelial cells. Another cell cycle checkpoint, the spindle checkpoint, operates while the cell is undergoing division. The details of this checkpoint are still being discovered; however, it appears that if the spindle apparatus does not properly align the chromosomes for division, if the chromosome number is abnormal (i.e., greater or less than 4n), or if the centromeres are not properly paired with their duplicated partners, then the cell initiates a cell death pathway to prevent the production of aneuploid progeny (progeny with an altered number of chromosomes). Abnormalities in the spindle checkpoint facilitate the development of aneuploidy. In some tumors, aneuploidy is a predominant genetic feature. In others, the primary genetic lesion is a defect in the cells' ability to repair errors in DNA, due to mutations in genes coding for proteins critical for mismatched DNA repair. This defect is usually detected by finding alterations in repeated DNA sequences (called microsatellites), termed microsatellite instability, in malignant cells. In general, tumors have either defects in chromosome number or microsatellite instability, but not both. Defects that lead to cancer include abnormal cell cycle checkpoints, inadequate DNA repair, and failure to preserve genome integrity. Efforts are under way to correct therapeutically the defects in cell cycle regulation that characterize cancer, although this remains a challenging problem because it is much more difficult to restore normal biologic function than to inhibit abnormal function of proteins driving cell proliferation, such as oncogenes. The fundamental defects that create a malignant neoplasm act at the cellular level. However, that is not the entire story.
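Detecting microsatellite instability, as described above, reduces to comparing the length of a simple repeat between normal and tumor DNA. A minimal sketch; the CA-repeat locus and the sequences are hypothetical examples, not real loci:

```python
# Sketch of microsatellite instability (MSI) detection: a simple repeat
# (here a CA run) changes length in tumor DNA relative to normal DNA.
# Sequences and the choice of repeat unit are made-up illustrations.
import re

def longest_repeat_run(seq: str, unit: str = "CA") -> int:
    """Length (in repeat units) of the longest perfect run of `unit`."""
    runs = re.findall(f"(?:{unit})+", seq.upper())
    return max((len(r) // len(unit) for r in runs), default=0)

def is_unstable(normal_seq: str, tumor_seq: str, unit: str = "CA") -> bool:
    """Flag a locus as unstable when the repeat length differs."""
    return longest_repeat_run(normal_seq, unit) != longest_repeat_run(tumor_seq, unit)

# The tumor allele has contracted from 5 CA units to 3: an MSI event.
assert is_unstable("TTCACACACACATT", "TTCACACATT")
```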
Cancers behave as organs that have lost their specialized function and stopped responding to signals that normally limit their growth. Human cancers usually become clinically detectable when a primary mass is at least 1 cm in diameter; such a mass consists of about 10⁹ cells. More commonly, patients present with tumors of 10¹⁰ cells or greater. A lethal tumor burden is about 10¹² to 10¹³ cells. If all tumor cells were dividing at the time of diagnosis, patients would reach a lethal tumor burden in a very short time. However, human tumors grow by Gompertzian kinetics: not every daughter cell produced by a cell division is itself capable of dividing. The growth fraction of a tumor declines exponentially with time. The growth fraction of the first malignant cell is 100%, but by the time a patient presents for medical care, the growth fraction is 2–3% or less. This fraction is similar to the growth fraction of normal bone marrow and normal intestinal epithelium, the most highly proliferative normal tissues in the human body, a fact that may explain the dose-limiting toxicities of agents that target dividing cells. The implication of these data is that the tumor slows its own growth over time. How does it do this? The tumor cells have multiple genetic lesions that tend to promote proliferation, yet by the time the tumor is clinically detectable, its capacity for proliferation has declined. We need to better understand how a tumor slows its own growth. A number of factors are known to contribute to the failure of tumor cells to proliferate in vivo. Some cells are hypoxic and have an inadequate supply of nutrients and energy. Some have sustained too much genetic damage to complete the cell cycle but have lost the capacity to undergo apoptosis and therefore survive but do not proliferate.
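The Gompertzian slowdown can be made concrete with the standard Gompertz growth law. The limiting size K and retardation constant b below are illustrative assumptions, chosen only to show that the relative growth rate at a clinically detectable size (~10⁹ cells) is a fraction of that of the founding cell:

```python
# Sketch of Gompertzian tumor growth. K (limiting cell number) and b
# (retardation constant) are illustrative assumptions, not fitted values.
import math

def gompertz_size(t: float, n0: float = 1.0, k: float = 1e13, b: float = 0.1) -> float:
    """Cell number at time t: N(t) = K * exp(ln(N0/K) * exp(-b*t))."""
    return k * math.exp(math.log(n0 / k) * math.exp(-b * t))

def relative_growth_rate(n: float, k: float = 1e13, b: float = 0.1) -> float:
    """(dN/dt)/N = b * ln(K/N): decays toward zero as N approaches K."""
    return b * math.log(k / n)

early = relative_growth_rate(1.0)    # single founding cell
at_dx = relative_growth_rate(1e9)    # ~1-cm detectable mass
# The per-cell growth rate at diagnosis is roughly a third of the initial
# rate under these parameters: the tumor has slowed its own growth.
print(at_dx / early)
```

With real tumors the decline in growth fraction is far steeper (to 2–3%, as above); the point of the sketch is only the shape of the law, not the magnitude.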
However, an important subset is not actively dividing but retains the capacity to divide and can start dividing again under certain conditions, such as when the tumor mass is reduced by treatment. Just as the bone marrow increases its rate of proliferation in response to bone marrow–damaging agents, the tumor also seems to sense when tumor cell numbers have been reduced and can respond by increasing its growth rate. However, the critical difference is that the marrow stops growing when it has reached its production goals, whereas tumors do not. Additional tumor cell vulnerabilities are likely to be detected when we learn more about how normal cells respond to "stop" signals from their environment and why and how tumor cells fail to heed such signals. IS IN VITRO SENESCENCE RELEVANT TO CARCINOGENESIS? When normal cells are placed in culture in vitro, most are not capable of sustained growth. Fibroblasts are an exception to this rule. When cultured, fibroblasts may divide 30–50 times before undergoing what has been termed a "crisis," during which the majority of cells stop dividing (usually due to an increase in expression of p21, a CDK inhibitor), many die, and a small fraction emerge that have acquired genetic changes permitting their uncontrolled growth. The cessation of growth of normal cells in culture has been termed "senescence," and whether this phenomenon is relevant to any physiologic event in vivo is debated. Among the cellular changes during in vitro propagation is telomere shortening. DNA polymerase is unable to replicate the tips of chromosomes, resulting in the loss of DNA at the specialized ends of chromosomes (called telomeres) with each replication cycle. At birth, human telomeres are 15- to 20-kb long and are composed of tandem repeats of a six-nucleotide sequence (TTAGGG) that associates with specialized telomere-binding proteins to form a T-loop structure that protects the ends of chromosomes from being mistakenly recognized as damaged.
The loss of telomeric repeats with each cell division cycle causes gradual telomere shortening, leading to growth arrest (called senescence) when one or more critically short telomeres trigger a p53-regulated DNA-damage checkpoint response. Cells can bypass this growth arrest if pRb and p53 are nonfunctional, but cell death usually ensues when the unprotected ends of chromosomes lead to chromosome fusions or other catastrophic DNA rearrangements. The ability to bypass telomere-based growth limitations is thought to be a critical step in the evolution of most malignancies. This occurs by the reactivation of telomerase expression in cancer cells. Telomerase is an enzyme that adds TTAGGG repeats onto the 3′ ends of chromosomes. It contains a catalytic subunit with reverse transcriptase activity (hTERT) and an RNA component that provides the template for telomere extension. Most normal somatic cells do not express sufficient telomerase to prevent telomere attrition with each cell division. Exceptions include stem cells (such as those found in hematopoietic tissues, gut and skin epithelium, and germ cells) that require extensive cell division to maintain tissue homeostasis. More than 90% of human cancers express high levels of telomerase that prevent telomere shortening to critical levels and allow indefinite cell proliferation. In vitro experiments indicate that inhibition of telomerase activity leads to tumor cell apoptosis. Major efforts are under way to develop methods to inhibit telomerase activity in cancer cells. For example, the protein component of telomerase (hTERT) may act as one of the most widely expressed tumor-associated antigens and be targeted by vaccine approaches. Although most of the functions of telomerase relate to cell division, it also has several other effects including interfering with the differentiated functions of at least certain stem cells, although the impact on differentiated function of normal non-stem cells is less clear. 
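The replicative budget implied by telomere shortening is simple arithmetic. The per-division loss (~200 bp) and the critical length (~5 kb) used below are rough assumptions; only the 15- to 20-kb starting length comes from the text:

```python
# Back-of-envelope telomere arithmetic for the senescence clock described
# above. Loss per division and the critical length that triggers the
# p53-regulated checkpoint are rough assumptions, not measurements.

def divisions_until_senescence(start_bp: int,
                               critical_bp: int = 5_000,
                               loss_per_division_bp: int = 200) -> int:
    """Divisions before a telomere erodes from start_bp to the critical length."""
    return max(0, (start_bp - critical_bp) // loss_per_division_bp)

# A 15- to 20-kb starting telomere yields a finite budget of some tens of
# divisions, broadly in line with the 30-50 doublings observed in vitro.
for start in (15_000, 20_000):
    print(start, divisions_until_senescence(start))
```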
Nevertheless, a major growth industry in medical research has been discovering associations between short telomeres and human diseases ranging from diabetes and coronary artery disease to Alzheimer's disease. The picture is further complicated by the fact that rare genetic defects in the telomerase enzyme seem to cause pulmonary fibrosis, aplastic anemia, or dyskeratosis congenita (characterized by abnormalities in skin, nails, and oral mucosa, with increased risk for certain malignancies) but not defects in nutrient absorption in the gut, a site that might be presumed to be highly sensitive to defective cell proliferation. Much remains to be learned about how telomere shortening and telomere maintenance are related to human illness in general and cancer in particular. Signals that affect cell behavior come from adjacent cells, from the stroma in which the cells are located, from hormonal signals that originate remotely, and from the cells themselves (autocrine signaling). These signals generally exert their influence on the receiving cell through activation of signal transduction pathways whose end result is the induction of activated transcription factors that mediate a change in cell behavior or function or the acquisition of effector machinery to accomplish a new task. Although signal transduction pathways can lead to a wide variety of outcomes, many rely on cascades of signals that sequentially activate different proteins or glycoproteins and lipids or glycolipids, and the activation steps often involve the addition or removal of one or more phosphate groups on a downstream target. Other chemical changes can result from signal transduction pathways, but phosphorylation and dephosphorylation play a major role. The proteins that add phosphate groups to proteins are called kinases. There are two major classes of kinases: one class acts on tyrosine residues, and the other acts on serine/threonine residues.
The tyrosine kinases often play critical roles in signal transduction pathways; they may be receptor tyrosine kinases, or they may be linked to other cell-surface receptors through associated docking proteins (Fig. 102e-2). Normally, tyrosine kinase activity is short-lived and is reversed by protein tyrosine phosphatases (PTPs). However, in many human cancers, tyrosine kinases or components of their downstream pathways are activated by mutation, gene amplification, or chromosomal translocation. Because these pathways regulate proliferation, survival, migration, and angiogenesis, they have been identified as important targets for cancer therapeutics. Inhibition of kinase activity is effective in the treatment of a number of neoplasms. Lung cancers with mutations in the epidermal growth factor receptor are highly responsive to erlotinib and gefitinib (Table 102e-2). Lung cancers with activation of anaplastic lymphoma kinase (ALK) or ROS1 by translocation respond to crizotinib, an ALK and ROS1 inhibitor. A BRAF inhibitor is highly effective in melanomas and thyroid cancers in which BRAF is mutated. Targeting a protein (MEK) downstream of BRAF also has activity against BRAF-mutant melanomas. Janus kinase inhibitors are active in myeloproliferative syndromes in which JAK2 activation is a pathogenetic event. Imatinib (which targets a number of tyrosine kinases) is an effective agent in tumors that have translocations of the c-Abl and BCR genes (such as chronic myeloid leukemia), mutant c-Kit (gastrointestinal stromal cell tumors), or mutant platelet-derived growth factor receptor (PDGFR; chronic myelomonocytic leukemia); the second-generation inhibitors of BCR-Abl, dasatinib and nilotinib, are even more effective. The second-generation agent bosutinib has activity in some patients who have progressed on other inhibitors, whereas the third-generation agent ponatinib has activity against the T315I mutation, which is resistant to the other agents.
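The logic behind targeting a kinase downstream of an activated oncogene (e.g., a MEK inhibitor in BRAF-mutant tumors, as above) can be sketched as an on/off relay. The model is purely schematic, with no kinetics; the pathway names follow Fig. 102e-2:

```python
# Toy relay model of an RTK -> RAS -> RAF -> MEK -> ERK cascade, illustrating
# why a downstream inhibitor can silence signaling driven by a constitutively
# active upstream kinase. Each step is boolean; real signaling is graded.

def cascade_output(ligand: bool, mutant_braf: bool = False,
                   mek_inhibited: bool = False) -> bool:
    ras = ligand                      # receptor activates RAS only with ligand
    raf = ras or mutant_braf          # oncogenic BRAF fires without any input
    mek = raf and not mek_inhibited   # inhibitor blocks MEK regardless of RAF
    erk = mek
    return erk                        # True => proliferative signal delivered

assert cascade_output(ligand=False) is False                   # quiescent cell
assert cascade_output(ligand=False, mutant_braf=True) is True  # ligand-independent drive
assert cascade_output(ligand=False, mutant_braf=True, mek_inhibited=True) is False
```

The last assertion is the therapeutic point: blocking any node downstream of the lesion interrupts the output, which is why MEK inhibition has activity even though MEK itself is not mutated.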
FIGURE 102e-2 Therapeutic targeting of signal transduction pathways in cancer cells. Three major signal transduction pathways are activated by receptor tyrosine kinases (RTKs). 1. The protooncogene Ras is activated by the Grb2/mSOS guanine nucleotide exchange factor, which induces an association with Raf and activation of downstream kinases (MEK and ERK1/2). 2. Activated PI3K phosphorylates the membrane lipid PIP2 to generate PIP3, which acts as a membrane-docking site for a number of cellular proteins, including the serine/threonine kinases PDK1 and Akt. PDK1 has numerous cellular targets, including Akt and mTOR. Akt phosphorylates target proteins that promote resistance to apoptosis and enhance cell cycle progression, whereas mTOR and its target p70S6K upregulate protein synthesis to potentiate cell growth. 3. Activation of PLC-γ leads to the formation of diacylglycerol (DAG) and increased intracellular calcium, with activation of multiple isoforms of PKC and other enzymes regulated by the calcium/calmodulin system. Other important signaling pathways involve non-RTKs that are activated by cytokine or integrin receptors. Janus kinases (JAKs) phosphorylate STAT (signal transducer and activator of transcription) transcription factors, which translocate to the nucleus and activate target genes. Integrin receptors mediate cellular interactions with the extracellular matrix (ECM), inducing activation of FAK (focal adhesion kinase) and c-Src, which activate multiple downstream pathways, including modulation of the cell cytoskeleton. Many activated kinases and transcription factors migrate into the nucleus, where they regulate gene transcription, thus completing the path from extracellular signals, such as growth factors, to a change in cell phenotype, such as induction of differentiation or cell proliferation. The nuclear targets of these processes include transcription factors (e.g., Myc, AP-1, and serum response factor) and the cell cycle machinery (CDKs and cyclins). Inhibitors of many of these pathways have been developed for the treatment of human cancers. Examples of inhibitors that are currently being evaluated in clinical trials are shown in purple type.

Sorafenib and sunitinib, agents that inhibit a large number of kinases, have shown antitumor activity in a number of malignancies, including renal cell cancer (RCC) (both agents), hepatocellular carcinoma (sorafenib), thyroid cancer (sorafenib), gastrointestinal stromal tumor (GIST) (sunitinib), and pancreatic neuroendocrine tumors (sunitinib). Inhibitors of the mammalian target of rapamycin (mTOR) are active in RCC, pancreatic neuroendocrine tumors, and breast cancer. The list of active agents and treatment indications is growing rapidly. These new agents have ushered in a new era of personalized therapy. It is becoming routine for resected tumors to be assessed for specific molecular changes that predict response and for clinical decision-making to be guided by those results. However, none of these therapies has yet been curative by itself for any malignancy, although prolonged periods of disease control lasting many years frequently occur in chronic myeloid leukemia. The reasons for the failure to cure are not completely defined, although resistance to treatment ultimately develops in most patients. In some tumors, resistance to kinase inhibitors is related to an acquired mutation in the target kinase that prevents drug binding. Many of these kinase inhibitors act as competitive inhibitors of the ATP-binding pocket; ATP is the phosphate donor in these phosphorylation reactions. Mutation in the BCR-ABL kinase in the ATP-binding pocket (such as the threonine-to-isoleucine change at codon 315 [T315I]) can prevent imatinib binding. Other resistance mechanisms include altering other signal transduction pathways to bypass the inhibited pathway.
As resistance mechanisms become better defined, rational strategies to overcome resistance will emerge. In addition, many kinase inhibitors are less specific for an oncogenic target than was hoped, and toxicities related to off-target inhibition of kinases limit the use of an agent at the dose that would optimally inhibit the cancer-relevant kinase. Targeted agents can also be used to deliver highly toxic compounds. An important component of the technology for developing effective antibody-drug conjugates is the design of the linker between antibody and drug, which needs to be stable. Currently approved antibody-drug conjugates include brentuximab vedotin, which links the microtubule toxin monomethyl auristatin E (MMAE) to an antibody targeting the cell-surface antigen CD30, which is expressed on a number of malignant cells but especially in Hodgkin's disease and anaplastic large cell lymphoma. The linker in this case is cleavable, which allows diffusion of the drug out of the cell after delivery. The second approved conjugate is ado-trastuzumab emtansine, which links the microtubule formation inhibitor mertansine to the monoclonal antibody trastuzumab, targeted against human epidermal growth factor receptor 2 (HER2) on breast cancer cells. In this case, the linker is noncleavable, thus trapping the chemotherapeutic agent within the cell. There are theoretical pluses and minuses to cleavable and noncleavable linkers, and it is likely that both will be used in future antibody-drug conjugates. Another strategy to enhance the antitumor effects of targeted agents is to use them in rational combinations with each other and in empiric combinations with chemotherapy agents that kill cells in ways distinct from targeted agents.
Combinations of trastuzumab (a monoclonal antibody that targets the HER2 receptor, a member of the epidermal growth factor receptor [EGFR] family) with chemotherapy have significant activity against breast and stomach cancers that have high levels of expression of the HER2 protein. The activity of trastuzumab and chemotherapy can be enhanced further by combination with another targeted monoclonal antibody (pertuzumab), which prevents dimerization of the HER2 receptor with other HER family members, including HER3. Although targeted therapies have not yet resulted in cures when used alone, their use in the adjuvant setting and in combination with other effective treatments has substantially increased the fraction of patients cured. For example, the addition of rituximab, an anti-CD20 antibody, to combination chemotherapy in patients with diffuse large B cell lymphoma improves cure rates by 15–20%. The addition of trastuzumab, an antibody to HER2, to combination chemotherapy in the adjuvant treatment of HER2-positive breast cancer reduces relapse rates by 50%.

Abbreviations (Table 102e-2): AML, acute myeloid leukemia; CTCL, cutaneous T cell lymphoma; EGFR, epidermal growth factor receptor; FDA, Food and Drug Administration; Flt-3, fms-like tyrosine kinase-3; GIST, gastrointestinal stromal tumor; MTC, medullary thyroid cancer; mTOR, mammalian target of rapamycin; PDGFR, platelet-derived growth factor receptor; PLGF, placental growth factor; PML-RARα, promyelocytic leukemia-retinoic acid receptor-alpha; RCC, renal cell cancer; t(15;17), translocation between chromosomes 15 and 17; TC, thyroid cancer; TGF-α, transforming growth factor-alpha; VEGFR, vascular endothelial growth factor receptor.
A major effort is under way to develop targeted therapies for mutations in the ras family of genes, which are the most common mutations in oncogenes in cancers (especially kras) but have proved to be very difficult targets for a number of reasons related to how RAS proteins are activated and inactivated. Targeted therapies against proteins downstream of RAS (including mitogen-activated protein [MAP] kinase and ERK) are currently being studied, both individually and in combination. A large number of inhibitors of phospholipid signaling pathways such as the phosphatidylinositol-3-kinase (PI3K) and phospholipase C-gamma pathways, which are involved in a large number of cellular processes that are important in cancer development and progression, are being evaluated. The targeting of a variety of other pathways that are activated in malignant cells, such as the MET pathway, hedgehog pathway, and various angiogenesis pathways, is also being explored. One of the strategies for new drug development is to take advantage of so-called oncogene addiction. This situation (Fig. 102e-3) is created when a tumor cell develops an activating mutation in an oncogene that becomes a dominant pathway for survival and growth with reduced contributions from other pathways, even when there may be abnormalities in those pathways. This dependency on a single pathway creates a cell that is vulnerable to inhibitors of that oncogene pathway. For example, cells harboring mutations in BRAF are very sensitive to MEK inhibitors that inhibit downstream signaling in the BRAF pathway. Targeting proteins critical for transcription of proteins vital for malignant cell survival or proliferation provides another potential target for treating cancers. The transcription factor nuclear factor-κB (NF-κB) is a heterodimer composed of p65 and p50 subunits that associate with an inhibitor, IκB, in the cell cytoplasm. 
In response to growth factor or cytokine signaling, a multi-subunit kinase called IKK (IκB kinase) phosphorylates IκB and directs its degradation by the ubiquitin/proteasome system. NF-κB, freed of its inhibitor, translocates to the nucleus and activates target genes, many of which promote the survival of tumor cells. Novel drugs called proteasome inhibitors block the proteolysis of IκB, thereby preventing NF-κB activation. For unexplained reasons, this is selectively toxic to tumor cells. The antitumor effects of proteasome inhibitors are more complicated, however, and involve inhibition of the degradation of multiple cellular proteins. Proteasome inhibitors (e.g., bortezomib [Velcade]) have activity in patients with multiple myeloma, including partial and complete remissions. Inhibitors of IKK are also in development, with the hope of more selectively blocking the degradation of IκB, thus "locking" NF-κB in an inhibitory complex and rendering the cancer cell more susceptible to apoptosis-inducing agents. Many other transcription factors are activated by phosphorylation, which can be prevented by tyrosine kinase or serine/threonine kinase inhibitors, a number of which are currently in clinical trials. FIGURE 102e-3 Synthetic lethality. Genes are said to have a synthetic lethal relationship when mutation of either gene alone is tolerated by the cell but mutation of both genes leads to lethality, as originally noted by Bridges and later named by Dobzhansky. Thus, mutant gene a and mutant gene b have a synthetic lethal relationship, implying that the loss of one gene makes the cell dependent on the function of the other. In cancer cells, loss of function of a DNA repair gene like BRCA1, which repairs double-strand breaks, makes the cell dependent on base excision repair, mediated in part by PARP. If the PARP gene product is inhibited, the cell attempts to repair the break using the error-prone nonhomologous end-joining method, which results in tumor cell death.
High-throughput screens can now be performed using isogenic cell line pairs in which one cell line has a defined defect in a DNA repair pathway. Compounds can be identified that selectively kill the mutant cell line; the targets of these compounds have a synthetic lethal relationship to the repair pathway and are potentially important targets for future therapeutics. Estrogen receptors (ERs) and androgen receptors (ARs), members of the steroid hormone family of nuclear receptors, are targets of inhibition by drugs used to treat breast and prostate cancers, respectively. Tamoxifen, a partial agonist and antagonist of ER function, can mediate tumor regression in metastatic breast cancer and can prevent disease recurrence in the adjuvant setting. Tamoxifen binds to the ER and modulates its transcriptional activity, inhibiting activity in the breast but promoting activity in bone and uterine epithelium. Selective ER modulators (SERMs) have been developed with the hope of a more beneficial modulation of ER activity, i.e., antiestrogenic activity in the breast, uterus, and ovary, but estrogenic activity in bone, brain, and cardiovascular tissues. Aromatase inhibitors, which block the conversion of androgens to estrogens in breast and subcutaneous fat tissues, have demonstrated improved clinical efficacy compared with tamoxifen and are often used as first-line therapy in patients with ER-positive disease. A number of approaches have been developed for blocking androgen stimulation of prostate cancer, including decreasing androgen production (e.g., orchiectomy, luteinizing hormone–releasing hormone agonists or antagonists, estrogens, ketoconazole, and inhibitors of enzymes such as CYP17 involved in androgen production) and AR blockers (Chap. 108). The concepts of oncogene addiction and synthetic lethality have spurred new drug development targeting oncogene and tumor-suppressor pathways. As discussed earlier in this chapter and outlined in Fig.
102e-3, cancer cells can become dependent on signaling pathways containing activated oncogenes; this dependence can involve proliferation (i.e., mutated Kras, Braf, overexpressed Myc, or activated tyrosine kinases), DNA repair (loss of BRCA1 or BRCA2 gene function), survival (overexpression of Bcl-2 or NF-κB), cell metabolism (as occurs when mutant Kras enhances glucose uptake and aerobic glycolysis), and perhaps angiogenesis (production of VEGF in response to HIF-2α in RCC). In such cases, targeted inhibition of the pathway can lead to specific killing of the cancer cells. However, targeting defects in tumor-suppressor genes has been much more difficult, both because the target of mutation is often deleted and because it is much more difficult to restore normal function than to inhibit abnormal function of a protein. Synthetic lethality occurs when loss of function in either of two genes alone has limited effects on cell survival but loss of function in both genes leads to cell death. Identifying genes that have a synthetic lethal relationship to tumor-suppressor pathways that have been mutated in tumor cells may allow targeting of proteins required uniquely by those cells (Fig. 102e-3). Several examples of this have been identified. For instance, cells with mutations in the BRCA1 or BRCA2 tumor-suppressor genes (e.g., a subset of breast and ovarian cancers) are unable to repair DNA damage by homologous recombination. PARPs are a family of proteins important for repair of single-strand DNA breaks (SSBs). PARP inhibition results in selective killing of cancer cells with BRCA1 or BRCA2 loss. Preliminary trials have suggested some effectiveness of PARP inhibition, especially in combination with chemotherapy; clinical trials are ongoing. The concept of synthetic lethality provides a framework for genetic screens to identify other synthetic lethal combinations involving known tumor-suppressor genes and for development of novel therapeutic agents to target dependent pathways.
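The core logic of synthetic lethality is essentially a two-input survival rule, and it can be made concrete with a short sketch. This toy model is illustrative only; the function name and the reduction of repair biology to two Boolean inputs are assumptions, not from the text:

```python
def cell_viable(brca_functional: bool, parp_functional: bool) -> bool:
    """Toy rule: a cell survives if it can repair DNA by homologous
    recombination (BRCA intact) OR fall back on PARP-mediated
    single-strand-break repair."""
    return brca_functional or parp_functional

# Normal cell: both repair routes intact.
assert cell_viable(brca_functional=True, parp_functional=True)

# PARP inhibitor alone: a normal cell still survives via BRCA.
assert cell_viable(True, False)

# A BRCA-mutant tumor cell survives only because PARP compensates...
assert cell_viable(False, True)

# ...so adding a PARP inhibitor is selectively lethal to the tumor cell.
assert not cell_viable(False, False)
```

Because survival is an OR over the two repair routes, a PARP inhibitor has a therapeutic window: it removes the only remaining route in BRCA-mutant tumor cells, while normal cells with homologous recombination intact are spared.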
Chromatin structure regulates the hierarchical order of sequential gene transcription that governs differentiation and tissue homeostasis. Disruption of chromatin remodeling (the process of modifying chromatin structure to control exposure of specific genes to transcriptional proteins, thereby controlling the expression of those genes) leads to aberrant gene expression and can induce proliferation of undifferentiated cells. Epigenetics refers to changes in the pattern of gene expression that persist across at least one cell division but are not caused by changes in the DNA code. Epigenetic changes include alterations of chromatin structure mediated by methylation of cytosine residues in CpG dinucleotides, modification of histones by acetylation or methylation, or changes in higher-order chromosome structure (Fig. 102e-4). The transcriptional regulatory regions of active genes often contain a high frequency of CpG dinucleotides (referred to as CpG islands), which are normally unmethylated. Expression of these genes is controlled by transient association with repressor or activator proteins that regulate transcriptional activation. However, hypermethylation of promoter regions is a common mechanism by which tumor-suppressor loci are epigenetically silenced in cancer cells. Thus one allele may be inactivated by mutation or deletion (as occurs in loss of heterozygosity), while expression of the other allele is epigenetically silenced, usually by methylation. Acetylation of the amino terminus of the core histones H3 and H4 induces an open chromatin conformation that promotes transcription initiation. Histone acetylases are components of coactivator complexes recruited to promoter/enhancer regions by sequence-specific transcription factors during the activation of genes (Fig. 102e-4).
Histone deacetylases (HDACs; at least 17 are encoded in the human genome) are recruited to genes by transcriptional repressors and prevent the initiation of gene transcription. Methylated cytosine residues in promoter regions become associated with methyl cytosine–binding proteins that recruit protein complexes with HDAC activity. The balance between permissive and inhibitory chromatin structure is therefore largely determined by the activity of transcription factors in modulating the “histone code” and the methylation status of the genetic regulatory elements of genes. The pattern of gene transcription is aberrant in all human cancers, and in many cases, epigenetic events are responsible. Unlike genetic events that alter DNA primary structure (e.g., deletions), epigenetic changes are potentially reversible and appear amenable to therapeutic intervention. In certain human cancers, including pancreatic cancer and multiple myeloma, the p16Ink4a promoter is inactivated by methylation, thus permitting the unchecked activity of CDK4/cyclin D and rendering pRb nonfunctional. In sporadic forms of renal, breast, and colon cancer, the von Hippel–Lindau (VHL), breast cancer 1 (BRCA1), and serine/threonine kinase 11 (STK11) genes, respectively, are epigenetically silenced. Other targeted genes include the p15Ink4b CDK inhibitor, glutathione-S-transferase (which detoxifies reactive oxygen species), and the E-cadherin molecule (important for junction formation between epithelial cells). Epigenetic silencing can occur in premalignant lesions and can affect genes involved in DNA repair, thus predisposing to further genetic damage. Examples include MLH1 (mut L homologue) in hereditary nonpolyposis colon cancer (HNPCC, also called Lynch’s syndrome), which is critical for repair of mismatched bases that occur during DNA synthesis, and O6-methylguanine-DNA methyltransferase, which removes alkylated guanine adducts from DNA and is often silenced in colon, lung, and lymphoid tumors. 
Human leukemias often have chromosomal translocations that code for novel fusion proteins with enzymatic activities that alter chromatin structure. The promyelocytic leukemia–retinoic acid receptor (PML-RAR) fusion protein, generated by the t(15;17) observed in most cases of acute promyelocytic leukemia (APL), binds to promoters containing retinoic acid response elements and recruits HDAC to these promoters, effectively inhibiting gene expression. This arrests differentiation at the promyelocyte stage and promotes tumor cell proliferation and survival. Treatment with pharmacologic doses of all-trans retinoic acid (ATRA), the ligand for RARα, results in the release of HDAC activity and the recruitment of coactivators, which overcome the differentiation block. This induced differentiation of APL cells has improved treatment of these patients but also has led to a novel treatment toxicity when newly differentiated tumor cells infiltrate the lungs. However, ATRA represents a treatment paradigm for the reversal of epigenetic changes in cancer. For other leukemia-associated fusion proteins, such as acute myeloid leukemia 1 (AML1)–eight-twenty-one (ETO) and the MLL fusion proteins seen in AML and acute lymphocytic leukemia, no ligand is known. Therefore, efforts are ongoing to determine the structural basis for interactions between translocation fusion proteins and chromatin-remodeling proteins and to use this information to rationally design small molecules that will disrupt specific protein-protein associations, although this has proven to be technically difficult. Drugs that block the enzymatic activity of HDAC are being tested. HDAC inhibitors have demonstrated antitumor activity in clinical studies against cutaneous T cell lymphoma (e.g., vorinostat) and some solid tumors. HDAC inhibitors may target cancer cells via a number of mechanisms, including upregulation of death receptors (DR4/5, FAS, and their ligands) and p21Cip1/Waf1, as well as inhibition of cell cycle checkpoints. Efforts are also under way to reverse the hypermethylation of CpG islands that characterizes many malignancies. Drugs that induce DNA demethylation, such as 5-aza-2′-deoxycytidine, can lead to reexpression of silenced genes in cancer cells with restoration of function, and 5-aza-2′-deoxycytidine is approved for use in myelodysplastic syndrome (MDS). However, 5-aza-2′-deoxycytidine has limited aqueous solubility and is myelosuppressive. Other inhibitors of DNA methyltransferases are in development. In ongoing clinical trials, inhibitors of DNA methylation are being combined with HDAC inhibitors.

FIGURE 102e-4 Epigenetic regulation of gene expression in cancer cells. Tumor-suppressor genes are often epigenetically silenced in cancer cells. In the upper portion, a CpG island within the promoter and enhancer regions of the gene has been methylated, resulting in the recruitment of methyl-cytosine binding proteins (MeCP) and complexes with histone deacetylase (HDAC) activity. Chromatin is in a condensed, nonpermissive conformation that inhibits transcription. Clinical trials are under way using the combination of demethylating agents such as 5-aza-2′-deoxycytidine plus HDAC inhibitors, which together confer an open, permissive chromatin structure (lower portion). Transcription factors bind to specific DNA sequences in promoter regions and, through protein-protein interactions, recruit coactivator complexes containing histone acetyl transferase (HAT) activity. This enhances transcription initiation by RNA polymerase II and associated general transcription factors. The expression of the tumor-suppressor gene commences, with phenotypic changes that may include growth arrest, differentiation, or apoptosis.
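The rationale for combining a demethylating agent with an HDAC inhibitor can be caricatured as requiring two conditions for transcription: an unmethylated CpG island and acetylated histones. The sketch below is a deliberate oversimplification (in reality, demethylation alone can re-express some loci, since methyl-CpG is what recruits HDAC complexes); the function and variable names are hypothetical:

```python
def expressed(promoter_methylated: bool, histones_acetylated: bool) -> bool:
    # Toy rule: transcription initiates only from open chromatin, i.e.,
    # an unmethylated CpG island plus acetylated core histones.
    return (not promoter_methylated) and histones_acetylated

# Silenced tumor-suppressor locus: a methylated promoter recruits MeCP/HDAC
# complexes, so histones are also deacetylated.
assert not expressed(promoter_methylated=True, histones_acetylated=False)

# Demethylating agent alone (in this toy model, acetylation still missing):
assert not expressed(promoter_methylated=False, histones_acetylated=False)

# HDAC inhibitor alone (promoter still methylated):
assert not expressed(promoter_methylated=True, histones_acetylated=True)

# The combination confers an open, permissive chromatin structure:
assert expressed(promoter_methylated=False, histones_acetylated=True)
```

The AND over the two chromatin marks is the logic motivating the combination trials mentioned in the text: each drug removes one layer of repression.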
The hope is that by reversing coexisting epigenetic changes, the deregulated patterns of gene transcription in cancer cells will be at least partially reversed. Epigenetic gene regulation can also occur via microRNAs or long non-coding RNAs (lncRNAs). MicroRNAs are short (average 22 nucleotides in length) RNA molecules that silence gene expression after transcription by binding and inhibiting the translation or promoting the degradation of mRNA transcripts. It is estimated that more than 1000 microRNAs are encoded in the human genome. Each tissue has a distinctive repertoire of microRNA expression, and this pattern is altered in specific ways in cancers. However, specific correlations between microRNA expression and tumor biology and clinical behavior are just now emerging. Therapies targeting microRNAs are not currently at hand but represent a novel area of treatment development. lncRNAs are longer than 200 nucleotides and compose the largest group of noncoding RNAs. Some of them have been shown to play important roles in gene regulation. The potential for altering these RNAs for therapeutic benefit is an area of active investigation, although much more needs to be learned before this will be feasible. Tissue homeostasis requires a balance between the death of aged, terminally differentiated cells or severely damaged cells and their renewal by proliferation of committed progenitors. Genetic damage to growth-regulating genes of stem cells could lead to catastrophic results for the host as a whole. Thus, genetic events causing activation of oncogenes or loss of tumor suppressors, which would be predicted to lead to unregulated cell proliferation unless corrected, usually activate signal transduction pathways that block aberrant cell proliferation. These pathways can lead to a form of programmed cell death (apoptosis) or irreversible growth arrest (senescence). 
Much as a panoply of intra- and extracellular signals impinge upon the core cell cycle machinery to regulate cell division, so too are these signals transmitted to a core enzymatic machinery that regulates cell death and survival. Apoptosis is induced by two main pathways (Fig. 102e-5). The extrinsic pathway of apoptosis is activated by cross-linking members of the tumor necrosis factor (TNF) receptor superfamily, such as CD95 (Fas) and death receptors DR4 and DR5, by their ligands, Fas ligand or TRAIL (TNF-related apoptosis-inducing ligand), respectively. This induces the association of FADD (Fas-associated death domain) and procaspase-8 with death domain motifs of the receptors. Caspase-8 is activated and then cleaves and activates effector caspases-3 and -7, which then target cellular constituents (including caspase-activated DNase, cytoskeletal proteins, and a number of regulatory proteins), inducing the morphologic appearance characteristic of apoptosis, which pathologists term "karyorrhexis." The intrinsic pathway of apoptosis is initiated by the release of cytochrome c and SMAC (second mitochondrial activator of caspases) from the mitochondrial intermembrane space in response to a variety of noxious stimuli, including DNA damage, loss of adherence to the extracellular matrix (ECM), oncogene-induced proliferation, and growth factor deprivation. Upon release into the cytoplasm, cytochrome c associates with dATP, procaspase-9, and the adaptor protein APAF-1, leading to the sequential activation of caspase-9 and effector caspases. SMAC binds to and blocks the function of inhibitor of apoptosis proteins (IAP), negative regulators of caspase activation. The release of apoptosis-inducing proteins from the mitochondria is regulated by pro- and antiapoptotic members of the Bcl-2 family. Antiapoptotic members (e.g., Bcl-2, Bcl-XL, and Mcl-1) associate with the mitochondrial outer membrane via their carboxyl termini, exposing to the cytoplasm a hydrophobic binding pocket composed of Bcl-2 homology (BH) domains 1, 2, and 3 that is crucial for their activity. Perturbations of normal physiologic processes in specific cellular compartments lead to the activation of BH3-only proapoptotic family members (such as Bad, Bim, Bid, Puma, Noxa, and others) that can alter the conformation of the outer-membrane proteins Bax and Bak, which then oligomerize to form pores in the mitochondrial outer membrane, resulting in cytochrome c release. If proteins composed only of BH3 domains are sequestered by Bcl-2, Bcl-XL, or Mcl-1, pores do not form and apoptosis-inducing proteins are not released from the mitochondria. The ratio of the levels of antiapoptotic Bcl-2 family members to the levels of proapoptotic BH3-only proteins at the mitochondrial membrane determines the activation state of the intrinsic pathway. The mitochondrion must therefore be recognized not only as an organelle with vital roles in intermediary metabolism and oxidative phosphorylation but also as a central regulatory structure of the apoptotic process.

FIGURE 102e-5 Therapeutic strategies to overcome aberrant survival pathways in cancer cells. 1. The extrinsic pathway of apoptosis can be selectively induced in cancer cells by TRAIL (the ligand for death receptors 4 and 5) or by agonistic monoclonal antibodies. 2. Inhibition of antiapoptotic Bcl-2 family members with antisense oligonucleotides or inhibitors of the BH3-binding pocket will promote formation of Bak- or Bax-induced pores in the mitochondrial outer membrane. 3. Epigenetic silencing of APAF-1, caspase-8, and other proteins can be overcome using demethylating agents and inhibitors of histone deacetylases. 4. Inhibitor of apoptosis proteins (IAP) block activation of caspases; small-molecule inhibitors of IAP function (mimicking SMAC action) should lower the threshold for apoptosis. 5. Signal transduction pathways originating with activation of receptor tyrosine kinases (RTKs) or cytokine receptors promote survival of cancer cells by a number of mechanisms. Inhibiting receptor function with monoclonal antibodies, such as trastuzumab or cetuximab, or inhibiting kinase activity with small-molecule inhibitors can block the pathway. 6. The Akt kinase phosphorylates many regulators of apoptosis to promote cell survival; inhibitors of Akt may render tumor cells more sensitive to apoptosis-inducing signals; however, the possibility of toxicity to normal cells may limit the therapeutic value of these agents. 7 and 8. Activation of the transcription factor NF-κB (composed of p65 and p50 subunits) occurs when its inhibitor, IκB, is phosphorylated by IκB kinase (IKK), with subsequent degradation of IκB by the proteasome. Inhibition of IKK activity should selectively block the activation of NF-κB target genes, many of which promote cell survival. Inhibitors of proteasome function are Food and Drug Administration approved and may work in part by preventing destruction of IκB, thus blocking NF-κB nuclear localization. NF-κB is unlikely to be the only target for proteasome inhibitors.
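The "ratio determines activation" rule for the intrinsic pathway amounts to a threshold comparison between the pool of BH3-only proteins and the sequestering capacity of the antiapoptotic family members. A minimal sketch, with illustrative numbers and hypothetical names (real binding is protein- and affinity-specific, not a single scalar balance):

```python
def intrinsic_pathway_active(bh3_only: float, antiapoptotic: float) -> bool:
    """Toy threshold: Bax/Bak pores form (cytochrome c release) when free
    BH3-only proteins exceed the sequestering capacity of Bcl-2/Bcl-XL/Mcl-1
    at the mitochondrial outer membrane."""
    return bh3_only > antiapoptotic

# Healthy cell: the antiapoptotic reserve sequesters the BH3-only pool.
assert not intrinsic_pathway_active(bh3_only=1.0, antiapoptotic=3.0)

# DNA damage induces BH3-only proteins (e.g., Puma, Noxa) past the threshold.
assert intrinsic_pathway_active(bh3_only=4.0, antiapoptotic=3.0)

# A BH3-mimetic drug occupies the hydrophobic BH-domain pocket, lowering the
# effective antiapoptotic capacity of a Bcl-2-overexpressing tumor cell.
bcl2_overexpressed = 10.0
mimetic_occupied = 8.0
assert intrinsic_pathway_active(4.0, bcl2_overexpressed - mimetic_occupied)
```

This is why Bcl-2 overexpression (as in the t(14;18) of follicular lymphoma) raises the apoptotic threshold, and why pocket-binding compounds can restore sensitivity.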
The evolution of tumor cells to a more malignant phenotype requires the acquisition of genetic changes that subvert apoptosis pathways and promote cancer cell survival and resistance to anticancer therapies. However, cancer cells may be more vulnerable than normal cells to therapeutic interventions that target the apoptosis pathways that cancer cells depend on. For instance, overexpression of Bcl-2 as a result of the t(14;18) translocation contributes to follicular lymphoma. Upregulation of Bcl-2 expression is also observed in prostate, breast, and lung cancers and melanoma. Targeting of antiapoptotic Bcl-2 family members has been accomplished by the identification of several low-molecular-weight compounds that bind to the hydrophobic pockets of either Bcl-2 or Bcl-XL and block their ability to associate with death-inducing BH3-only proteins. These compounds inhibit the antiapoptotic activities of Bcl-2 and Bcl-XL at nanomolar concentrations in the laboratory and are entering clinical trials. Preclinical studies targeting death receptors DR4 and DR5 have demonstrated that recombinant, soluble, human TRAIL or humanized monoclonal antibodies with agonist activity against DR4 or DR5 can induce apoptosis of tumor cells while sparing normal cells. The mechanisms for this selectivity may include expression of decoy receptors or elevated levels of intracellular inhibitors (such as FLIP, which competes with caspase-8 for FADD) by normal cells but not tumor cells. Synergy has been shown between TRAIL-induced apoptosis and chemotherapeutic agents. For instance, some colon cancers encode mutated Bax protein as a result of mismatch repair (MMR) defects and are resistant to TRAIL. However, upregulation of Bak by chemotherapy restores the ability of TRAIL to activate the mitochondrial pathway of apoptosis. However, clinical studies have not yet shown significant activity of approaches targeting the TRAIL pathway. 
Many of the signal transduction pathways perturbed in cancer promote tumor cell survival (Fig. 102e-5). These include activation of the PI3K/Akt pathway, increased levels of the NF-κB transcription factor, and epigenetic silencing of genes such as APAF-1 and caspase-8. Each of these pathways is a target for therapeutic agents that, in addition to affecting cancer cell proliferation or gene expression, may render cancer cells more susceptible to apoptosis, thus promoting synergy when combined with other chemotherapeutic agents. Some tumor cells resist drug-induced apoptosis by expression of one or more members of the ABC family of ATP-dependent efflux pumps that mediate the multidrug-resistance (MDR) phenotype. The prototype, P-glycoprotein (PGP), spans the plasma membrane 12 times and has two ATP-binding sites. Hydrophobic drugs (e.g., anthracyclines and vinca alkaloids) are recognized by PGP as they enter the cell and are pumped out. Numerous clinical studies have failed to demonstrate that drug resistance can be overcome using inhibitors of PGP. However, ABC transporters have different substrate specificities, and inhibition of a single family member may not be sufficient to overcome the MDR phenotype. Efforts to reverse PGP-mediated drug resistance continue. Cells, including cancer cells, can also undergo other mechanisms of cell death including autophagy (degradation of proteins and organelles by lysosomal proteases) and necrosis (digestion of cellular components and rupturing of the cell membrane). Necrosis usually occurs in response to external forces resulting in release of cellular components, which leads to inflammation and damage to surrounding tissues. Although necrosis was thought to be unprogrammed, evidence now suggests that at least some aspects may be programmed. The exact role of necrosis in cancer cell death in various settings is still being determined. 
In addition to its role in cell death, autophagy can serve as a homeostatic mechanism to promote survival for the cell by recycling cellular components to provide necessary energy. The mechanisms that control the balance between enhancing survival versus leading to cell death are still not fully understood. Autophagy appears to play conflicting roles in the development and survival of cancer. Early in the carcinogenic process, it can act as a tumor suppressor by preventing the cell from accumulating abnormal proteins and organelles. However, in established tumors, it may serve as a mechanism of survival for cancer cells when they are stressed by damage such as from chemotherapy. Inhibition of this process can enhance the sensitivity of cancer cells to chemotherapy. Better understanding of the factors that control the survival-promoting versus death-inducing aspects of autophagy is required in order to know how to best manipulate it for therapeutic benefit. The metastatic process accounts for the vast majority of deaths from solid tumors, and therefore, an understanding of this process is critical. The biology of metastasis is complex and requires multiple steps. The three major features of tissue invasion are cell adhesion to the basement membrane, local proteolysis of the membrane, and movement of the cell through the rent in the membrane and the ECM. Cells that lose contact with the ECM normally undergo programmed cell death (anoikis), and this process has to be suppressed in cells that metastasize. Another process important for metastasizing epithelial cancer cells is epithelial-mesenchymal transition (EMT). This is a process by which cells lose their epithelial properties and gain mesenchymal properties. This normally occurs during the developmental process in embryos, allowing cells to migrate to their appropriate destinations in the embryo. 
It also occurs in wound healing, tissue regeneration, and fibrotic reactions, but in all of these processes, cells stop proliferating when the process is complete. Malignant cells that metastasize undergo EMT as an important step in that process but retain the capacity for unregulated proliferation. Malignant cells that gain access to the circulation must then repeat those steps at a remote site, find a hospitable niche in a foreign tissue, avoid detection by host defenses, and induce the growth of new blood vessels. The rate-limiting step for metastasis is the ability of tumor cells to survive and expand in the novel microenvironment of the metastatic site, and multiple host-tumor interactions determine the ultimate outcome (Fig. 102e-6). Few drugs have been developed to directly target the process of metastasis, in part because the critical steps in the process that would be potentially good drug targets are still being identified. However, a number of potential targets are known. HER2 can enhance the metastatic potential of breast cancer cells, and as discussed above, the monoclonal antibody trastuzumab, which targets HER2, improves survival in the adjuvant setting for HER2-positive breast cancer patients. Other potential targets that increase metastatic potential of cells in preclinical studies include HIF-1 and HIF-2, transcription factors induced by hypoxia within tumors; growth factor receptors (e.g., c-MET and VEGFR); oncogenes (e.g., SRC); adhesion molecules (e.g., focal adhesion kinase [FAK]); ECM proteins (e.g., matrix metalloproteinases-1 and -2); and inflammatory molecules (e.g., COX-2). The metastatic phenotype is likely restricted to a small fraction of tumor cells (Fig. 102e-6). A number of genetic and epigenetic changes are required for tumor cells to be able to metastasize, including activation of metastasis-promoting genes and inhibition of genes that suppress metastatic ability.
Cells with metastatic capability frequently express chemokine receptors that are likely important in the metastatic process. A number of candidate metastasis-suppressor genes have been identified, including genes coding for proteins that enhance apoptosis, suppress cell division, are involved in the interactions of cells with each other or the ECM, or suppress cell migration. The loss of function of these genes enhances metastasis. Gene expression profiling is being used to study the metastatic process and other properties of tumor cells that may predict susceptibilities. An example of the ability of malignant cells to survive and grow in a novel microenvironment is bone metastasis. Bone metastases are extremely painful, cause fractures of weight-bearing bones, can lead to hypercalcemia, and are a major cause of morbidity for cancer patients. Osteoclasts and their monocyte-derived precursors express the surface receptor RANK (receptor activator of NF-κB), which is required for terminal differentiation and activation of osteoclasts. Osteoblasts and other stromal cells express RANK ligand (RANKL), as both a membrane-bound and a soluble cytokine. Osteoprotegerin (OPG), a soluble receptor for RANKL produced by stromal cells, acts as a decoy receptor to inhibit RANK activation. The relative balance of RANKL and OPG determines the activation state of RANK on osteoclasts. Many tumors increase osteoclast activity by secretion of substances such as parathyroid hormone (PTH), PTH-related peptide, interleukin (IL)-1, or MIP-1α that perturb the homeostatic balance of bone remodeling by increasing RANK signaling. One example is multiple myeloma, where tumor cell–stromal cell interactions activate osteoclasts and inhibit osteoblasts, leading to the development of multiple lytic bone lesions. Inhibition of RANKL by an antibody (denosumab) can prevent further bone destruction.

FIGURE 102e-6 Oncogene signaling pathways are activated during tumor progression and promote metastatic potential. This figure shows a cancer cell that has undergone epithelial-mesenchymal transition (EMT) under the influence of several environmental signals. Critical components include activated transforming growth factor β (TGF-β) and the hepatocyte growth factor (HGF)/c-Met pathways, as well as changes in the expression of adhesion molecules that mediate cell-cell and cell–extracellular matrix interactions. Important changes in gene expression are mediated by the Snail and Twist family of transcriptional repressors (whose expression is induced by the oncogenic pathways), leading to reduced expression of E-cadherin, a key component of adherens junctions between epithelial cells. This, in conjunction with upregulation of N-cadherin, a change in the pattern of expression of integrins (which mediate cell–extracellular matrix associations that are important for cell motility), and a switch in intermediate filament expression from cytokeratin to vimentin, results in the phenotypic change from adherent, highly organized epithelial cells to motile and invasive cells with a fibroblast or mesenchymal morphology. EMT is thought to be an important step leading to metastasis in some human cancers. Host stromal cells, including tumor-associated fibroblasts and macrophages, play an important role in modulating tumor cell behavior through secretion of growth factors, proangiogenic cytokines, and matrix metalloproteinases that degrade the basement membrane. VEGF-A, -C, and -D are produced by tumor cells and stromal cells in response to hypoxemia or oncogenic signals and induce production of new blood vessels and lymphatic channels through which tumor cells metastasize to lymph nodes or tissues.
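The RANKL/OPG balance that sets osteoclast activation can be sketched as a simple linear competition for ligand, with denosumab acting as an additional RANKL sink. The units and numbers are arbitrary and the linear subtraction is a caricature of receptor-decoy competition, not a binding model:

```python
def free_rankl(rankl: float, opg: float, denosumab: float = 0.0) -> float:
    """Toy model: RANKL available to engage RANK on osteoclast precursors,
    after decoy binding by OPG and neutralization by an anti-RANKL antibody
    (denosumab); floored at zero."""
    return max(0.0, rankl - opg - denosumab)

# Normal bone remodeling: stromal OPG buffers most of the RANKL signal.
assert free_rankl(rankl=10.0, opg=8.0) == 2.0

# Tumor-derived PTHrP/IL-1 shift the balance toward RANKL and osteolysis.
assert free_rankl(rankl=25.0, opg=8.0) == 17.0

# Denosumab sequesters RANKL and can abolish the excess signal.
assert free_rankl(rankl=25.0, opg=8.0, denosumab=20.0) == 0.0
```

The middle case corresponds to the lytic lesions of myeloma; the last line is the logic of therapeutic RANKL blockade.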
Bisphosphonates are also effective inhibitors of osteoclast function that are used in the treatment of cancer patients with bone metastases. Only a small proportion of the cells within a tumor are capable of initiating colonies in vitro or forming tumors at high efficiency when injected into immunocompromised NOD/SCID mice. Acute and chronic myeloid leukemias (AML and CML) have a small population of cells (<1%) that have properties of stem cells, such as unlimited self-renewal and the capacity to cause leukemia when serially transplanted in mice. These cells have an undifferentiated phenotype (Thy1− CD34+CD38– and do not express other differentiation markers) and resemble normal stem cells in many ways, but are no longer under homeostatic control (Fig. 102e-7). Solid tumors may also contain a population of stem cells. Cancer stem cells, like their normal counterparts, have unlimited proliferative capacity and paradoxically traverse the cell cycle at a very slow rate; cancer growth occurs largely due to expansion of the stem cell pool, the unregulated proliferation of an amplifying population, and failure of apoptosis pathways (Fig. 102e-7). Slow cell cycle progression and high levels of expression of antiapoptotic Bcl-2 family members and drug efflux pumps of the MDR family render cancer stem cells less vulnerable to cancer chemotherapy or radiation therapy. Implicit in the cancer stem cell hypothesis is the idea that failure to cure most human cancers is due to the fact that current therapeutic agents do not kill the stem cells. If cancer stem cells can be identified and isolated, then aberrant signaling pathways that distinguish these cells from normal tissue stem cells can be identified and targeted. 
Evidence that cells with stem cell properties can arise from other epithelial cells within the cancer by processes such as epithelial-mesenchymal transition also implies that it is essential to treat all of the cancer cells, and not just those with current stem cell-like properties, in order to eliminate the self-renewing cancer cell population. The exact nature of cancer stem cells remains an area of investigation. One of the unanswered questions is the exact origin of cancer stem cells for the different cancers.

FIGURE 102e-7 Cancer stem cells play a critical role in the initiation, progression, and resistance to therapy of malignant neoplasms. In normal tissues (left), homeostasis is maintained by asymmetric division of stem cells, leading to one progeny cell that will differentiate and one cell that will maintain the stem cell pool. This occurs within highly specific niches unique to each tissue, such as in close apposition to osteoblasts in bone marrow, or at the base of crypts in the colon. Here, paracrine signals from stromal cells, such as sonic hedgehog or Notch ligands, as well as upregulation of β-catenin and telomerase, help to maintain stem cell features of unlimited self-renewal while preventing differentiation or cell death. This occurs in part through upregulation of the transcriptional repressor Bmi-1 and inhibition of the p16Ink4a/Arf and p53 pathways. Daughter cells leave the stem cell niche and enter a proliferative phase (referred to as transit-amplifying) for a specified number of cell divisions, during which time a developmental program is activated, eventually giving rise to fully differentiated cells that have lost proliferative potential. Cell renewal equals cell death, and homeostasis is maintained.
In this hierarchical system, only stem cells are long-lived. The hypothesis is that cancers harbor stem cells that make up a small fraction (i.e., 0.001–1%) of all cancer cells. These cells share several features with normal stem cells, including an undifferentiated phenotype, unlimited self-renewal potential, and a capacity for some degree of differentiation; however, due to initiating mutations (mutations are indicated by lightning bolts), they are no longer regulated by environmental cues. The cancer stem cell pool is expanded, and rapidly proliferating progeny, through additional mutations, may attain stem cell properties, although most of this population is thought to have a limited proliferative capacity. Differentiation programs are dysfunctional due to reprogramming of the pattern of gene transcription by oncogenic signaling pathways. Within the cancer transit-amplifying population, genomic instability generates aneuploidy and clonal heterogeneity as cells attain a fully malignant phenotype with metastatic potential. The cancer stem cell hypothesis has led to the idea that current cancer therapies may be effective at killing the bulk of tumor cells but do not kill tumor stem cells, leading to a regrowth of tumors that is manifested as tumor recurrence or disease progression. Research is in progress to identify unique molecular features of cancer stem cells that can lead to their direct targeting by novel therapeutic agents. Cancer cells, and especially stem cells, have the capacity for significant plasticity, allowing them to alter multiple aspects of cell biology in response to external factors (e.g., chemotherapy, inflammation, immune response). Thus, a major problem in cancer therapy is that malignancies have a wide spectrum of mechanisms for both initial and adaptive resistance to treatments. 
These include inhibiting drug delivery to the cancer cells, blocking drug uptake and retention, increasing drug metabolism, altering levels of target proteins, acquiring mutations in target proteins, modifying metabolism and cell signaling pathways, using alternate signaling pathways, adjusting the cell replication process, including the mechanisms by which the cell deals with DNA damage, inhibiting apoptosis, and evading the immune system. Thus, most metastatic cancers (except those curable with chemotherapy, such as germ cell tumors) eventually become resistant to the therapy being used. Overcoming resistance is a major area of research. One of the distinguishing characteristics of cancer cells is that their metabolism is altered, compared with that of normal cells, in ways that support their survival and high rates of proliferation. These cells must focus a significant fraction of their energy resources on synthesis of proteins and other molecules while still maintaining sufficient ATP production to survive and grow. Although normal proliferating cells have similar needs, cancer cells differ from normal cells in how they metabolize glucose and a number of other compounds, including glutamine. Many cancer cells use aerobic glycolysis (the Warburg effect) (Fig. 102e-8) to metabolize glucose, leading to increased lactic acid production, whereas normal cells under aerobic conditions use oxidative phosphorylation in mitochondria, a much more efficient process. One consequence is increased glucose uptake by cancer cells, a fact exploited in fluorodeoxyglucose (FDG) positron emission tomography (PET) scanning to detect tumors. A number of proteins in cancer cells, including CMYC, HIF1, RAS, p53, pRB, and AKT, are involved in modulating glycolytic processes and controlling the Warburg effect. 
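As a rough arithmetic illustration of why heavy reliance on glycolysis forces higher glucose uptake (the basis of the FDG-PET signal mentioned above), one can compare ATP yields per glucose molecule. The ~36 ATP figure via oxidative phosphorylation appears in the chapter; the net 2 ATP from glycolysis alone is a standard textbook value assumed here, not stated in this section.

```python
# ATP yield per glucose molecule: ~36 via oxidative phosphorylation (per the
# chapter), versus a net 2 via glycolysis alone (assumed textbook value).
ATP_OXPHOS = 36
ATP_GLYCOLYSIS = 2

def relative_glucose_demand(atp_required: float) -> float:
    """Fold-increase in glucose uptake a purely glycolytic cell needs to
    match the ATP output of a cell using oxidative phosphorylation."""
    glucose_if_oxphos = atp_required / ATP_OXPHOS
    glucose_if_glycolysis = atp_required / ATP_GLYCOLYSIS
    return glucose_if_glycolysis / glucose_if_oxphos

# A glycolytic cell needs ~18-fold more glucose for the same ATP budget,
# which is consistent with preferential FDG accumulation in many tumors.
print(relative_glucose_demand(1000.0))  # 18.0
```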
Although these pathways remain difficult to target therapeutically, both the PI3 kinase pathway with signaling through mTOR and the AMP-activated kinase (AMPK) pathway, which inhibits mTOR complex 1 (mTORC1; a protein complex that includes mTOR), are important in controlling the glycolytic process and thus provide potential targets for inhibiting this process. The inefficient utilization of glucose also leads to a need for alternative metabolic pathways for other compounds, one of which is glutamine. Similar to glucose, glutamine provides both a source of structural molecules and energy production. Glutamine is also inefficiently used by cancer cells. 
Mutations in genes involved in metabolic processes occur in a number of cancers. Among the most frequently found to date are mutations in isocitrate dehydrogenases 1 and 2 (IDH1 and IDH2). These have been most commonly seen in gliomas, AML, and intrahepatic cholangiocarcinomas. These mutations lead to the production of an oncometabolite (2-hydroxyglutarate [2HG]) instead of the normal product α-ketoglutarate. Although the exact mechanisms of oncogenesis by 2HG are still being elucidated, α-ketoglutarate is a key cofactor for a number of dioxygenases involved in controlling DNA methylation. 2HG can act as a competitive inhibitor of α-ketoglutarate, leading to alterations in the methylation status (primarily hypermethylation) of genes (epigenetic changes) that can have profound effects on a number of cellular processes, including differentiation. Inhibitors of mutant IDH1 and IDH2 are being developed. Much needs to be learned about the specific differences in metabolism between cancer cells and normal cells; however, modulators of metabolism are being tested clinically. 
The first of these is the antidiabetic agent metformin, both alone and in combination with chemotherapeutic agents. Metformin inhibits gluconeogenesis and may have direct effects on tumor cells by activating the 5′-adenosine monophosphate-activated kinase (AMPK), a serine/threonine protein kinase that is downstream of the LKB1 tumor suppressor, and thus inhibiting mTORC1. This leads to decreased protein synthesis and proliferation. A second approach being tested involves dichloroacetate (DCA), an inhibitor of pyruvate dehydrogenase kinase (PDK). PDK inhibits pyruvate dehydrogenase in cancer cells, leading to a switch from mitochondrial oxidative phosphorylation of glucose to cytoplasmic glycolysis (the Warburg effect). By blocking PDK, DCA inhibits glycolysis. Additional approaches targeting tumor metabolism will likely emerge. 
FIGURE 102e-8 Warburg effect versus oxidative phosphorylation. In most normal tissues, the vast majority of cells are differentiated and dedicated to a particular function within the organ in which they reside. The metabolic needs are mainly for energy and not for building blocks for new cells. In these tissues, ATP is generated by oxidative phosphorylation, which efficiently generates about 36 molecules of ATP for each molecule of glucose metabolized. By contrast, proliferative tumor tissues, especially in the setting of hypoxia, a typical condition within tumors, use aerobic glycolysis to generate energy for cell survival and generation of building blocks for new cells. 
TUMOR MICROENVIRONMENT, ANGIOGENESIS, AND IMMUNE EVASION Tumors consist not only of malignant cells but also of a complex microenvironment including many other types of cells (e.g., inflammatory cells), ECM, secreted factors (e.g., growth factors), reactive oxygen and nitrogen species, mechanical factors, blood vessels, and lymphatics. This microenvironment is not static but rather is dynamic and continually evolving. Both the complexity and the dynamic nature of the microenvironment enhance the difficulty of treating tumors. There are also a number of mechanisms by which the microenvironment can contribute to resistance to anticancer therapies. 
One of the critical elements of tumor cell proliferation is delivery of oxygen, nutrients, and circulating factors important for growth and survival. The diffusion limit for oxygen in tissues is ~100–200 μm, and thus, a critical aspect in the growth of tumors is the development of new blood vessels, or angiogenesis. The growth of primary and metastatic tumors to larger than a few millimeters requires the recruitment of blood vessels and vascular endothelial cells to support their metabolic requirements. Thus, a critical element in the growth of primary tumors and the formation of metastatic sites is the angiogenic switch: the ability of the tumor to promote the formation of new capillaries from preexisting host vessels. The angiogenic switch is a phase in tumor development when the dynamic balance of pro- and antiangiogenic factors is tipped in favor of vessel formation by the effects of the tumor on its immediate environment. Stimuli for tumor angiogenesis include hypoxemia, inflammation, and genetic lesions in oncogenes or tumor suppressors that alter tumor cell gene expression. Angiogenesis consists of several steps, including the stimulation of endothelial cells (ECs) by growth factors, degradation of the ECM by proteases, proliferation and migration of ECs into the tumor, and the eventual formation of new capillary tubes. Tumor blood vessels are not normal; they have chaotic architecture and blood flow. Due to an imbalance of angiogenic regulators such as VEGF and angiopoietins (see below), tumor vessels are tortuous and dilated with an uneven diameter, excessive branching, and shunting. 
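The ~100–200 μm oxygen diffusion limit quoted in this chapter gives a feel for why avascular tumor growth stalls at a few millimeters. The spherical "viable rim" calculation below is an illustrative sketch under simple geometric assumptions, not a model from the chapter.

```python
def viable_fraction(radius_um: float, diffusion_limit_um: float = 150.0) -> float:
    """Fraction of an idealized avascular spherical tumor lying within the
    oxygen diffusion distance of its surface (crude geometric sketch; the
    150-um default is the midpoint of the chapter's 100-200 um range)."""
    if radius_um <= diffusion_limit_um:
        return 1.0  # the entire nodule is within diffusion distance
    inner_radius = radius_um - diffusion_limit_um
    return 1.0 - (inner_radius / radius_um) ** 3

print(round(viable_fraction(200.0), 3))   # 0.984 -> small nodule, nearly all oxygenated
print(round(viable_fraction(2000.0), 3))  # 0.209 -> ~4-mm tumor, thin viable rim only
```

Under this toy model, a nodule only a few hundred micrometers across is almost fully oxygenated, while most of a millimeter-scale tumor lies beyond the diffusion limit, motivating the angiogenic switch described above.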
Tumor blood flow is variable, with areas of hypoxemia and acidosis leading to the selection of variants that are resistant to hypoxemia-induced apoptosis (often due to the loss of p53 expression). Tumor vessel walls have numerous openings, widened interendothelial junctions, and discontinuous or absent basement membrane; this contributes to the high vascular permeability of these vessels and, together with lack of functional intratumoral lymphatics, causes increased interstitial pressure within the tumor (which also interferes with the delivery of therapeutics to the tumor; Figs. 102e-9, 102e-10, and 102e-11). Tumor blood vessels lack perivascular cells such as pericytes and smooth-muscle cells that normally regulate flow in response to tissue metabolic needs. Unlike normal blood vessels, the vascular lining of tumor vessels is not a homogeneous layer of ECs but often consists of a mosaic of ECs and tumor cells that, because of their plasticity, upregulate genes characteristic of ECs and can participate in vessel formation under hypoxic conditions; the concept of cancer cell–derived vascular channels, which may be lined by ECM secreted by the tumor cells, is referred to as vascular mimicry. During tumor angiogenesis, ECs are highly proliferative and express a number of plasma membrane proteins that are characteristic of activated endothelium, including growth factor receptors and adhesion molecules such as integrins. Tumors use a number of mechanisms to promote vascularization, subverting normal angiogenic processes for this purpose (Fig. 102e-9). Primary or metastatic tumor cells sometimes arise in proximity to host blood vessels and grow around these vessels, parasitizing nutrients by co-opting the local blood supply. 
However, most tumor blood vessels arise by the process of sprouting, in which tumors secrete trophic angiogenic molecules, the most potent being vascular endothelial growth factor (VEGF), that induce the proliferation and migration of host ECs into the tumor. Sprouting in normal and pathogenic angiogenesis is regulated by three families of transmembrane receptor tyrosine kinases (RTKs) expressed on ECs and their ligands (VEGFs, angiopoietins, ephrins; Fig. 102e-10), which are produced by tumor cells, inflammatory cells, or stromal cells in the tumor microenvironment. FIGURE 102e-9 Tumor angiogenesis is a complex process involving many different cell types that must proliferate, migrate, invade, and differentiate in response to signals from the tumor microenvironment. Endothelial cells (ECs) sprout from host vessels in response to VEGF, bFGF, Ang2, and other proangiogenic stimuli. Sprouting is stimulated by VEGF/VEGFR2, Ang2/Tie2, and integrin/extracellular matrix (ECM) interactions. Bone marrow–derived circulating endothelial precursors (CEPs) migrate to the tumor in response to VEGF and differentiate into ECs, while hematopoietic stem cells differentiate into leukocytes, including tumor-associated macrophages that secrete angiogenic growth factors and produce matrix metalloproteinases (MMPs) that remodel the ECM and release bound growth factors. Tumor cells themselves may directly form parts of vascular channels within tumors. The pattern of vessel formation is haphazard: vessels are tortuous, dilated, and leaky and branch in random ways. This leads to uneven blood flow within the tumor, with areas of acidosis and hypoxemia (which stimulate release of angiogenic factors) and high intratumoral pressures that inhibit delivery of therapeutic agents. 
When tumor cells arise in or metastasize to an avascular area, they grow to a size limited by hypoxemia and nutrient deprivation. Hypoxemia, a key regulator of tumor angiogenesis, causes the transcriptional induction of the gene encoding VEGF. VEGF and its receptors are required for embryonic vasculogenesis (development of new blood vessels when none preexist) and for normal (wound healing, corpus luteum formation) and pathologic angiogenesis (tumor angiogenesis, inflammatory conditions such as rheumatoid arthritis). VEGF-A is a heparin-binding glycoprotein with at least four isoforms (splice variants) that regulates blood vessel formation by binding to the RTKs VEGFR1 and VEGFR2, which are expressed on all ECs in addition to a subset of hematopoietic cells (Fig. 102e-9). VEGFR2 regulates EC proliferation, migration, and survival, whereas VEGFR1 may act as an antagonist of R2 in ECs but is probably also important for angioblast differentiation during embryogenesis. Tumor vessels may be more dependent on VEGFR signaling for growth and survival than normal ECs. Although VEGF signaling is a critical initiator of angiogenesis, this is a complex process regulated by additional signaling pathways (Fig. 102e-10). The angiopoietin Ang1, produced by stromal cells, binds to the EC RTK Tie2 and promotes the interaction of ECs with the ECM and perivascular cells, such as pericytes and smooth-muscle cells, to form tight, nonleaky vessels. Platelet-derived growth factor (PDGF) and basic fibroblast growth factor (bFGF) help to recruit these perivascular cells. Ang1 is required for maintaining the quiescence and stability of mature blood vessels and prevents the vascular permeability normally induced by VEGF and inflammatory cytokines. FIGURE 102e-10 Critical molecular determinants of endothelial cell biology. 
Angiogenic endothelium expresses a number of receptors not found on resting endothelium. These include receptor tyrosine kinases (RTKs) and integrins that bind to the extracellular matrix and mediate endothelial cell (EC) adhesion, migration, and invasion. ECs also express RTK (i.e., the FGF and PDGF receptors) that are found on many other cell types. Critical functions mediated by activated RTK include proliferation, migration, and enhanced survival of endothelial cells, as well as regulation of the recruitment of perivascular cells and bloodborne circulating endothelial precursors and hematopoietic stem cells to the tumor. Intracellular signaling via EC-specific RTK uses molecular pathways that may be targets for future antiangiogenic therapies. For tumor cell–derived VEGF to initiate sprouting from host vessels, the stability conferred by the Ang1/Tie2 pathway must be perturbed; this occurs by the secretion of Ang2 by ECs that are undergoing active remodeling. Ang2 binds to Tie2 and is a competitive inhibitor of Ang1 action: under the influence of Ang2, preexisting blood vessels become more responsive to remodeling signals, with less adherence of ECs to stroma and associated perivascular cells and more responsiveness to VEGF. Therefore, Ang2 is required at early stages of tumor angiogenesis for destabilizing the vasculature by making host ECs more sensitive to angiogenic signals. Because tumor ECs are blocked by Ang2, there is no stabilization by the Ang1/Tie2 interaction, and tumor blood vessels are leaky, hemorrhagic, and have poor association of ECs with underlying stroma. Sprouting tumor ECs express high levels of the transmembrane protein ephrin-B2 and its receptor, the RTK EPH, whose signaling appears to work with the angiopoietins during vessel remodeling. 
During embryogenesis, EPH receptors are expressed on the endothelium of primordial venous vessels while the transmembrane ligand ephrin-B2 is expressed by cells of primordial arteries; the reciprocal expression may regulate differentiation and patterning of the vasculature. A number of ubiquitously expressed host molecules play critical roles in normal and pathologic angiogenesis. Proangiogenic cytokines, chemokines, and growth factors secreted by stromal cells or inflammatory cells make important contributions to neovascularization, including bFGF, transforming growth factor α (TGF-α), TNF-α, and IL-8. In contrast to normal endothelium, angiogenic endothelium overexpresses specific members of the integrin family of ECM-binding proteins that mediate EC adhesion, migration, and survival. Specifically, expression of integrins αvβ3, αvβ5, and α5β1 mediates spreading and migration of ECs and is required for angiogenesis induced by VEGF and bFGF, which in turn can upregulate EC integrin expression. The αvβ3 integrin physically associates with VEGFR2 in the plasma membrane and promotes signal transduction from each receptor to promote EC proliferation (via focal adhesion kinase, src, PI3K, and other pathways) and survival (by inhibition of p53 and increasing the Bcl-2/Bax expression ratio). In addition, αvβ3 forms cell-surface complexes with matrix metalloproteinases (MMPs), zinc-requiring proteases that cleave ECM proteins, leading to enhanced EC migration and the release of heparin-binding growth factors, including VEGF and bFGF. EC adhesion molecules can be upregulated (e.g., by VEGF, TNF-α) or downregulated (by TGF-β); this, together with chaotic blood flow, explains poor leukocyte-endothelial interactions in tumor blood vessels and may help tumor cells avoid immune surveillance. Lymphatic vessels also exist within tumors. Development of tumor lymphatics is associated with expression of VEGFR3 and its ligands VEGF-C and VEGF-D. 
The role of these vessels in tumor cell metastasis to regional lymph nodes remains to be determined. However, VEGF-C levels correlate significantly with metastasis to regional lymph nodes in lung, prostate, and colorectal cancers. Angiogenesis inhibitors function by targeting the critical molecular pathways involved in EC proliferation, migration, and/or survival, many of which are unique to the activated endothelium in tumors. Inhibition of growth factor and adhesion-dependent signaling pathways can induce EC apoptosis with concomitant inhibition of tumor growth. Different types of tumors can use distinct combinations of molecular mechanisms to activate the angiogenic switch. Therefore, it is doubtful that a single antiangiogenic strategy will suffice for all human cancers; rather, a number of agents or combinations of agents will be needed, depending on the distinct programs of angiogenesis used by different human cancers. Despite this, experimental data indicate that for some tumor types, blockade of a single growth factor (e.g., VEGF) may inhibit tumor-induced vascular growth. FIGURE 102e-11 Normalization of tumor blood vessels due to inhibition of VEGF signaling. A. Blood vessels in normal tissues exhibit a regular hierarchical branching pattern that delivers blood to tissues in a spatially and temporally efficient manner to meet the metabolic needs of the tissue (top). At the microscopic level, tight junctions are maintained between endothelial cells (ECs), which are adherent to a thick and evenly distributed basement membrane (BM). 
Pericytes form a surrounding layer that provides trophic signals to the EC and helps maintain proper vessel tone. Vascular permeability is regulated, interstitial fluid pressure is low, and oxygen tension and pH are physiologic. B. Tumors have abnormal vessels with tortuous branching and dilated, irregular interconnecting branches, causing uneven blood flow with areas of hypoxemia and acidosis. This harsh environment selects genetic events that result in resistant tumor variants, such as the loss of p53. High levels of VEGF (secreted by tumor cells) disrupt gap junction communication, tight junctions, and adherens junctions between EC via src-mediated phosphorylation of proteins such as connexin 43, zonula occludens-1, VE-cadherin, and α/β-catenins. Tumor vessels have thin, irregular BM, and pericytes are sparse or absent. Together, these molecular abnormalities result in a vasculature that is permeable to serum macromolecules, leading to high tumor interstitial pressure, which can prevent the delivery of drugs to the tumor cells. This is made worse by the binding and activation of platelets at sites of exposed BM, with release of stored VEGF and microvessel clot formation, creating more abnormal blood flow and regions of hypoxemia. C. In experimental systems, treatment with bevacizumab or blocking antibodies to VEGFR2 leads to changes in the tumor vasculature that have been termed vessel normalization. During the first week of treatment, abnormal vessels are eliminated or pruned (dotted lines), leaving a more normal branching pattern. ECs partially regain features such as cell-cell junctions, adherence to a more normal BM, and pericyte coverage. These changes lead to a decrease in vascular permeability, reduced interstitial pressure, and a transient increase in blood flow within the tumor. Note that in murine models, this normalization period lasts only for ~5–6 days. D. 
After continued anti-VEGF/VEGFR therapy (which is often combined with chemo- or radiotherapy), ECs die, leading to tumor cell death (either due to direct effects of the chemotherapy or lack of blood flow). Bevacizumab, an antibody that binds VEGF, appears to potentiate the effects of a number of different types of active chemotherapeutic regimens used to treat a variety of different tumor types including colon cancer, lung cancer, cervical cancer, and RCC. Bevacizumab is administered IV every 2–3 weeks (its half-life is nearly 20 days) and is generally well tolerated. Hypertension is the most common side effect of inhibitors of VEGF (or its receptors), but can be treated with antihypertensive agents and rarely requires discontinuation of therapy. Rare but serious potential risks include arterial thromboembolic events, including stroke and myocardial infarction, and hemorrhage. Another serious complication is bowel perforation, which has been observed in 1–3% of patients (mainly those with colon and ovarian cancers). Inhibition of wound healing is also seen. Several small-molecule inhibitors (SMIs) that target VEGFR tyrosine kinase activity but are also inhibitory to other kinases have also been approved to treat certain cancers. Sunitinib (see above and Table 102e-2) has activity directed against mutant c-Kit receptors (approved for GIST), but also targets VEGFR and PDGFR, and has shown significant antitumor activity against metastatic RCC, presumably on the basis of its antiangiogenic activity. Similarly, sorafenib, originally developed as a Raf kinase inhibitor but with potent activity against VEGFR and PDGFR, has activity against RCC, thyroid cancer, and hepatocellular cancer. Other inhibitors of VEGFR approved for the treatment of RCC include axitinib and pazopanib. 
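The dosing interval (every 2–3 weeks) and half-life (~20 days) quoted above imply substantial carryover of drug from one dose to the next. The sketch below is illustrative arithmetic only, assuming simple single-compartment, first-order kinetics; it is not dosing guidance.

```python
import math

HALF_LIFE_DAYS = 20.0  # bevacizumab half-life quoted in the text (~20 days)

def fraction_remaining(days: float) -> float:
    """Fraction of drug left after `days`, assuming first-order elimination."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

def accumulation_ratio(interval_days: float) -> float:
    """Steady-state accumulation factor for repeated dosing,
    1 / (1 - exp(-k * tau)) with k = ln(2) / half-life."""
    k = math.log(2.0) / HALF_LIFE_DAYS
    return 1.0 / (1.0 - math.exp(-k * interval_days))

# With dosing every 2 weeks, well over half of each dose persists to the next
# dose, and steady-state levels accumulate to roughly 2.6x a single dose.
print(round(fraction_remaining(14.0), 2))  # 0.62
print(round(accumulation_ratio(14.0), 2))  # 2.6
```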
The success in targeting tumor angiogenesis has led to enhanced enthusiasm for the development of drugs that target other aspects of the angiogenic process; some of these therapeutic approaches are outlined in Fig. 102e-12. Cancers have a number of mechanisms that allow them to evade detection and elimination by the immune system. These include down-regulation of cell surface proteins involved in immune recognition (including MHC proteins and tumor-specific antigens), expression of other cell surface proteins that inhibit immune function (including members of the B7 family of proteins such as PD-L1), secretion of proteins and other molecules that are immunosuppressive, recruitment and expansion of immunosuppressive cells such as regulatory T cells, and induction of T cell tolerance. In addition, the inflammatory effects of some of the immune mediator cells in the tumor microenvironment (especially tissue-associated macrophages and myeloid-derived suppressor cells) can suppress T cell responses to the tumor as well as stimulate inflammation that can enhance tumor growth. Immunotherapy approaches to treat cancer aimed at activating the immune response against tumors using immunostimulatory molecules such as interferons, IL-2, and monoclonal antibodies have had some successes. Another approach that has shown particular clinical promise is the targeting of proteins or cells (such as regulatory T cells) involved in normal homeostatic control to prevent autoimmune damage to the host but that malignant cells and their stroma can also use to inhibit the immune response directed against them. The approach that is furthest along clinically has involved targeting CTLA-4, PD-1, and PD-L1, co-inhibitory molecules that are expressed on the surface of cancer cells, cells of the immune system, and/or stromal cells and are involved in inhibiting the immune response against cancer (Fig. 102e-13). 
Monoclonal antibodies directed against CTLA-4 and PD-1 are approved for the treatment of melanoma, and additional antibodies targeting PD-1 or PD-L1 have shown activity against melanoma, RCC, and lung cancer and continue to be evaluated against other malignancies as well. Combination approaches targeting more than one protein or involving other anticancer approaches (targeted agents, chemotherapy, radiation therapy) are also being explored and have shown promise in early studies. An important aspect of these approaches is balancing sufficient release of the negative control of the immune response to allow immune-mediated attack on the tumors while not allowing too much release and inducing severe autoimmune effects (such as against skin, thyroid, pituitary gland, or the gastrointestinal tract). 
FIGURE 102e-12 Knowledge of the molecular events governing tumor angiogenesis has led to a number of therapeutic strategies to block tumor blood vessel formation. The successful therapeutic targeting of VEGF is described in the text. Other endothelial cell–specific receptor tyrosine kinase pathways (e.g., angiopoietin/Tie2 and ephrin/EPH) are likely targets for the future. Ligation of the αvβ3 integrin is required for endothelial cell (EC) survival. Integrins are also required for EC migration and are important regulators of matrix metalloproteinase (MMP) activity, which modulates EC movement through the extracellular matrix (ECM) as well as release of bound growth factors. Targeting of integrins includes development of blocking antibodies, small peptide inhibitors of integrin signaling, and arg-gly-asp–containing peptides that prevent integrin:ECM binding. Peptides derived from normal proteins by proteolytic cleavage, including endostatin and tumstatin, inhibit angiogenesis by mechanisms that include interfering with integrin function. Signal transduction pathways that are dysregulated in tumor cells indirectly regulate EC function. Inhibition of EGF-family receptors, whose signaling activity is upregulated in a number of human cancers (e.g., breast, colon, and lung cancers), results in downregulation of VEGF and IL-8, while increasing expression of the antiangiogenic protein thrombospondin-1. The Ras/MAPK, PI3K/Akt, and Src kinase pathways constitute important antitumor targets that also regulate the proliferation and survival of tumor-derived EC. The discovery that ECs from normal tissues express tissue-specific "vascular addressins" on their cell surface suggests that targeting specific EC subsets may be possible. 
FIGURE 102e-13 Tumor-host interactions that suppress the immune response to the tumor, including elaboration of immunosuppressive cytokines (TGF-β, interleukin-4, interleukin-6, interleukin-10); induction of CTLA-4 and PD-1, leading to T cell inactivation; cell signaling disruption (degradation of the T cell receptor ζ chain, class I MHC loss in tumor cells, STAT-3 signaling loss in T cells); generation of indoleamine 2,3-dioxygenase; and immunosuppressive immune cells. 
The explosion of information on tumor cell biology, metastasis, and tumor-host interactions (including angiogenesis and immune evasion by tumors) has ushered in a new era of rational targeted therapy for cancer. Furthermore, it has become clear that specific molecular factors detected in individual tumors (specific gene mutations, gene-expression profiles, microRNA expression, overexpression of specific proteins) can be used to tailor therapy and maximize antitumor effects. Robert G. 
Fenton contributed to this chapter in prior editions, and important material from those prior chapters has been included here. Chapter 103e Principles of Cancer Treatment Edward A. Sausville, Dan L. Longo CANCER PRESENTATION Cancer in a localized or systemic state is a frequent item in the differential diagnosis of a variety of common complaints. Although not all forms of cancer are curable at diagnosis, affording patients the greatest opportunity for cure or meaningful prolongation of life is greatly aided by diagnosing cancer at the earliest point possible in its natural history and by defining treatments that prevent or retard its systemic spread. Indeed, certain forms of cancer, notably breast, colon, and possibly lung cancers in certain patients, can be prevented by screening appropriately selected asymptomatic patients; screening is arguably the earliest point in the spectrum of possible cancer-related interventions where cure is possible (Table 103e-1). The term cancer, as used here, is synonymous with the term tumor, whose original derivation from Latin simply meant "swelling," not otherwise specified. We now understand that the swelling that is a common physical manifestation of a tumor derives from increased interstitial fluid pressure and increased cellular and stromal mass per volume, compared to normal tissue. Tumors historically were referred to as carcinomas, or "crab-like" infiltrating tumors, or sarcomas, or "fleshy tumors," derived from the Greek terms for "crab" and "flesh," respectively. Leukemias are a special case of a cancer of the blood-forming tissues presenting in a disseminated form, frequently without definable tumor masses. 
In addition to localized swelling, tumors present by altered function of the organ they afflict, such as dyspnea on exertion from the anemia caused by leukemia replacing normal hematopoietic cells, cough from lung cancers, jaundice from tumors disrupting the hepatobiliary tree, or seizures and neurologic signs from brain tumors. Hemorrhage is also a frequent presenting sign of tumors involving hollow viscera, as are decreases in the number of platelets and inappropriate inhibition of blood coagulation. Thus, although statistically the fraction of patients with cancer underlying a particular presenting sign or symptom may be low, the implications for a patient with cancer of missing an early-stage tumor call for vigilance; therefore, persistent signs or symptoms should be evaluated as possibly coming from an early-stage tumor. Evidence of a tumor's existence can objectively be established by careful physical examination, such as enlarged lymph nodes in lymphomas or a palpable mass in a breast or soft tissue site. TABLE 103e-1 Spectrum of Cancer-Related Interventions: consideration of cancer in a differential diagnosis; physical examination, imaging, or endoscopy to define a possible tumor; diagnosis of cancer by biopsy or removal (specialized histology: immunohistochemistry); staging the cancer (where has it spread?); during treatment, care related to tumor effects on the patient; during treatment, care to counteract side effects of treatment; and palliative and end-of-life care when useful treatments are not feasible or desired. A mass may also be detected or confirmed by an imaging modality, such as plain x-ray, computed tomography (CT) scan, ultrasound, positron emission tomography (PET) imaging, or nuclear magnetic resonance approaches. Sensitivity of these technologies varies considerably, and the index of suspicion for a tumor should match the technology chosen. For example, low-dose helical CT scans are superior to plain chest radiographs in detecting lung cancers. 
Another way of initially establishing the existence of a possible tumor is through direct visualization of an afflicted organ by endoscopy. Once the existence of a likely tumor is defined, unequivocally establishing the diagnosis is the next step in correctly addressing a patient's needs. This is usually accomplished by a biopsy procedure, with pathologic examination yielding an unequivocal statement that cancer is present. The underlying principle in cancer diagnosis is to obtain as much tissue as is safely possible. Due to tumor heterogeneity, pathologists are better able to make the diagnosis when they have more tissue to examine. In addition to light microscopic inspection of a tumor for pattern of growth, degree of cellular atypia, invasiveness, and morphologic features that aid in the differential diagnosis, sufficient tissue is of value in searching for genetic abnormalities and protein expression patterns, such as hormone receptor expression in breast cancers, that may aid in the differential diagnosis or provide information about prognosis or likely response to treatment. Efforts to define "personalized" information from the biology of each patient's tumor that is pertinent to each patient's treatment plan are becoming increasingly important in selecting treatment options. The general internist should make sure that a patient's cancer biopsy is appropriately referred from the surgical suite for important molecular studies that can advise the best treatment (Table 103e-2).
Similar-appearing tumors by microscopic morphology may have very different gene expression patterns when assessed by such techniques as microarray analysis using gene chips, with important differences in biology and response to treatment. Such testing requires that the tissue be handled properly (e.g., immunologic detection of proteins is more effective in fresh-frozen tissue than in formalin-fixed tissue). Coordination among the surgeon, pathologist, and primary care physician is essential to ensure that the amount of information learned from the biopsy material is maximized.

These goals are best met by an excisional biopsy, in which the entire tumor mass is removed with a small margin of normal tissue surrounding it. If an excisional biopsy cannot be performed, incisional biopsy is the procedure of second choice: a wedge of tissue is removed, and an effort is made to include the majority of the cross-sectional diameter of the tumor in the biopsy to minimize sampling error. Biopsy techniques that involve cutting into tumor carry with them a risk of facilitating the spread of the tumor, and consideration of whether the biopsy might be the prelude to a curative surgery if certain diagnoses are established should inform the actual approach taken.

TABLE 103e-2  Diagnostic Biopsy: Standard-of-Care Molecular and Special Studies
- Breast cancer (primary and suspected metastatic): hormone receptors (estrogen, progesterone); HER2/neu oncoprotein
- Lung cancer (primary and suspected metastatic): if nonsquamous non-small cell, epidermal growth factor receptor mutation; ALK oncoprotein gene fusion
- Colon cancer (suspected metastatic): Ki-ras mutation
- Gastrointestinal stromal tumor: c-kit oncoprotein mutation
- Leukemia: Bcr-Abl fusion protein; t(15;17); inversion 16; t(8;21)
- Lymphoma: immunohistochemistry for CD20, CD30, T cell markers; treatment-defining chromosomal translocations: t(14;18), t(8;14)
Core-needle biopsy usually obtains considerably less tissue, but this procedure often provides enough information to plan a definitive surgical procedure. Fine-needle aspiration generally obtains only a suspension of cells from within a mass. This procedure is minimally invasive, and if positive for cancer, it may allow inception of systemic treatment when metastatic disease is evident, or it can provide a basis for planning a more meticulous and extensive surgical procedure. However, a negative fine-needle aspiration cannot be taken as definitive evidence that a tumor is absent, nor can it establish a definitive diagnosis in someone not known to have cancer.

An essential component of correct patient management in many cancer types is defining the extent of disease, because this information critically informs whether localized treatments, "combined-modality" approaches, or systemic treatments should initially be considered. Radiographic and other imaging tests can be helpful in defining the clinical stage; however, pathologic staging requires defining the extent of involvement by documenting the histologic presence of tumor in tissue biopsies obtained through a surgical procedure. Axillary lymph node sampling in breast cancer and lymph node sampling at laparotomy for testicular, colon, and other intraabdominal cancers may provide crucial information for treatment planning and may determine the extent and nature of primary cancer treatment.

For tumors associated with a potential "primary site," staging systems have evolved to define a "T" component related to the size of the tumor or its invasion into local structures, an "N" component related to the number and nature of lymph node groups adjacent to the tumor with evidence of tumor spread, and an "M" component based on the presence of local or distant metastatic sites. The various "TNM" components are then aggregated into stages, usually stage I to III or IV, depending on the anatomic site.
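As an illustration of how T, N, and M components aggregate into a numeric stage, the sketch below encodes a simplified, hypothetical grouping. Real stage groupings are site-specific and defined by the AJCC/UICC staging manuals; this mapping is for orientation only and should not be used clinically.

```python
# Hypothetical, simplified TNM stage grouping for illustration only.
# Actual groupings vary by anatomic site (AJCC/UICC staging manuals).

def stage_group(t: int, n: int, m: int) -> str:
    """Aggregate simplified T (1-4), N (0-3), M (0-1) values into a stage."""
    if m == 1:
        return "IV"   # any distant metastasis -> stage IV
    if t == 4 or n >= 2:
        return "III"  # locally invasive tumor or extensive nodal spread
    if t == 1 and n == 0:
        return "I"    # small tumor, no nodal spread, no metastases
    return "II"       # intermediate tumor burden

print(stage_group(1, 0, 0))  # I
print(stage_group(2, 1, 0))  # II
print(stage_group(3, 2, 0))  # III
print(stage_group(2, 0, 1))  # IV
```

The point of the aggregation is that each stage collects TNM combinations with broadly similar prognosis, which is what makes the stage a useful treatment-planning label.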
The numerical stages aggregate TNM groupings that share similar long-term survival outcomes after treatment tailored to the stage. In general, stage I tumors are T1 (reflecting small size), N0 or N1 (reflecting no or minimal node spread), and M0 (no metastases). Such early-stage tumors are amenable to curative approaches with local treatments. On the other hand, stage IV tumors usually have metastasized to distant sites or locally invaded viscera in a nonresectable way and are dealt with using techniques that have palliative intent, except for those diseases with exceptional sensitivity to systemic treatments such as chemotherapy or immunotherapy. Also, the TNM staging system is not useful in diseases such as leukemia, where bone marrow infiltration is never really localized, or central nervous system tumors, where tumor histology and the extent of anatomically feasible resection are more important in driving prognosis.

The goal of cancer treatment is first to eradicate the cancer. If this primary goal cannot be accomplished, the goal of cancer treatment shifts to palliation: the amelioration of symptoms and preservation of quality of life while striving to extend life. The dictum primum non nocere may not always be the guiding principle of cancer therapy. When cure of cancer is possible, cancer treatments may be considered despite the certainty of severe and perhaps life-threatening toxicities. Every cancer treatment has the potential to cause harm, and treatment may be given that produces toxicity with no benefit. The therapeutic index of many interventions may be quite narrow, with treatments given to the point of toxicity. Conversely, when the clinical goal is palliation, careful attention to minimizing the toxicity of potentially toxic treatments becomes a significant goal. Cancer treatments are divided into two main types: local and systemic.
Local treatments include surgery, radiation therapy (including photodynamic therapy), and ablative approaches, including radiofrequency and cryosurgical approaches. Systemic treatments include chemotherapy (including hormonal therapy and molecularly targeted therapy) and biologic therapy (including immunotherapy). The modalities are often used in combination, and agents in one category can act by several mechanisms. For example, cancer chemotherapy agents can induce differentiation, and antibodies (a form of immunotherapy) can be used to deliver radiation therapy. Oncology, the study of tumors including treatment approaches, is a multidisciplinary effort with surgical, radiation, and internal medicine–related areas of oncologic expertise. Treatments for patients with hematologic malignancies are often shared by hematologists and medical oncologists.

In many ways, cancer mimics an organ attempting to regulate its own growth. However, cancers have not set an appropriate limit on how much growth should be permitted. Normal organs and cancers share the property of having (1) a population of cells actively progressing through the cell cycle, whose division provides a basis for tumor growth, and (2) a population of cells not in cycle. In cancers, cells that are not dividing are heterogeneous; some have sustained too much genetic damage to replicate but have defects in their death pathways that permit their survival, some are starving for nutrients and oxygen, and some are out of cycle but poised to be recruited back into cycle and expand if needed (i.e., reversibly growth-arrested). Severely damaged and starving cells are unlikely to kill the patient. The problem is that the cells that are reversibly out of cycle are capable of replenishing tumor cells physically removed or damaged by radiation and chemotherapy.
These include cancer stem cells, whose properties are being elucidated, as they may serve as a basis for giving rise to tumor-initiating or repopulating cells. The stem cell fraction may define new targets for therapies that will retard their ability to reenter the cell cycle. Tumors follow a Gompertzian growth curve (Fig. 103e-1): the apparent growth fraction of a neoplasm is high with small tumor burdens and declines until, at the time of diagnosis, with a tumor burden of 1–5 × 10^9 tumor cells, the growth fraction is usually 1–4% for many solid tumors. By this view, the most rapid growth rate occurs before the tumor is detectable. An alternative explanation for such growth properties may also emerge from the ability of tumors at metastatic sites to recruit circulating tumor cells from the primary tumor or other metastases. An additional key feature of a successful tumor is the ability to stimulate the development of a new supporting stroma through angiogenesis and production of proteases to allow invasion through basement membranes and normal tissue barriers (Chap. 102e).

Specific cellular mechanisms promote entry or withdrawal of tumor cells from the cell cycle. For example, when a tumor recurs after surgery or chemotherapy, frequently its growth is accelerated and the growth fraction of the tumor is increased. This pattern is similar to that seen in regenerating organs. Partial resection of the liver results in the recruitment of cells into the cell cycle, and the resected liver volume is replaced. Similarly, chemotherapy-damaged bone marrow increases its growth to replace cells killed by chemotherapy. However, cancers do not recognize a limit on their expansion. Monoclonal gammopathy of uncertain significance may be an example of a clonal neoplasm with intrinsic features that stop its growth before a lethal tumor burden is reached.
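The Gompertzian kinetics just described can be summarized in a standard form of the model (the equation itself is not given in the text; this is the conventional formulation):

```latex
% Gompertz growth: tumor burden N(t) rises toward a plateau K
\frac{dN}{dt} = bN \ln\frac{K}{N},
\qquad
N(t) = K \exp\!\left(\ln\frac{N_0}{K}\; e^{-bt}\right).
% The growth rate bN\ln(K/N) is maximal where
% \frac{d}{dN}\left[N\ln\frac{K}{N}\right] = \ln\frac{K}{N} - 1 = 0,
% i.e., at N = K/e (about 37% of the plateau size), consistent with the
% peak growth rate occurring before the tumor is clinically detectable:
% growth from the detection threshold (~10^9 cells) to a lethal burden
% (~10^{12} cells) represents only about ten further net doublings,
% since 10^3 \approx 2^{10}.
```

Here b sets the rate of growth deceleration and K the limiting burden; N_0 is the starting cell number, all taken as model parameters rather than values from the text.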
A fraction of patients with this disorder go on to develop fatal multiple myeloma, but probably this occurs because of the accumulation of additional genetic lesions. Elucidation of the mechanisms that regulate this "organ-like" behavior of tumors may provide additional clues to cancer control and treatment.

Figure 103e-1  The growth fraction of a tumor declines exponentially over time (top). The growth rate of a tumor peaks before it is clinically detectable (middle). Tumor size (tumor burden in logs of cells) increases slowly, goes through an exponential phase, and slows again as the tumor reaches the size at which limitation of nutrients or autoregulatory or host regulatory influences can occur. The maximum growth rate occurs at 1/e, the point at which the tumor is about 37% of its maximum size (marked with an X). Tumor becomes detectable at a burden of about 10^9 (1 cm^3) cells and kills the patient at a tumor cell burden of about 10^12 (1 kg), the lethal burden. Efforts to treat the tumor and reduce its size can result in an increase in the growth fraction and an increase in growth rate.

SURGERY
Surgery is unquestionably the most effective means of treating cancer. Today at least 40% of cancer patients are cured by surgery. Unfortunately, a large fraction of patients with solid tumors (perhaps 60%) have metastatic disease that is not accessible for removal. However, even when the disease is not curable by surgery alone, the removal of tumor can obtain important benefits, including local control of tumor, preservation of organ function, debulking that permits subsequent therapy to work better, and staging information on extent of involvement.

Cancer surgery aiming for cure is usually planned to excise the tumor completely with an adequate margin of normal tissue (the margin varies with the tumor and the anatomy), touching the tumor as little as possible to prevent vascular and lymphatic spread, and minimizing operative risk. Such a resection is defined as an R0 resection. R1 and R2 resections, in contrast, leave microscopic or macroscopic tumor at the resection margins, respectively. Such outcomes may be necessitated by proximity of the tumor to vital structures or recognition only in the resected specimen of the extent of tumor involvement, and may be the basis for reoperation to obtain optimal margins if feasible. Extending the procedure to resect draining lymph nodes obtains prognostic information and may, in some anatomic locations, improve survival. Increasingly, laparoscopic approaches are being used to address primary abdominal and pelvic tumors. Lymph node spread may be assessed using the sentinel node approach, in which the first draining lymph node a spreading tumor would encounter is defined by injecting a dye or radioisotope into the tumor site at operation and then resecting the first node to turn blue or collect label. The sentinel node assessment is continuing to undergo clinical evaluation but appears to provide reliable information without the risks (lymphedema, lymphangiosarcoma) associated with resection of all the regional nodes.

Advances in adjuvant chemotherapy (chemotherapy given systemically after removal of all disease by operation and without evidence of active metastatic disease) and radiation therapy following surgery have permitted a substantial decrease in the extent of primary surgery necessary to obtain the best outcomes. Thus, lumpectomy with radiation therapy is as effective as modified radical mastectomy for breast cancer, and limb-sparing surgery followed by adjuvant radiation therapy is effective in rhabdomyosarcomas and osteosarcomas. More limited surgery is also being used to spare organ function, as in larynx and bladder cancer. The magnitude of operations necessary to optimally control and cure cancer has also been diminished by technical advances; for example, the circular anastomotic stapler has allowed narrower (<2 cm) margins in colon cancer without compromise of local control rates, and many patients who would have had colostomies are able to maintain normal anatomy.

In some settings (e.g., bulky testicular cancer or stage III breast cancer), surgery is not the first treatment modality used. After an initial diagnostic biopsy, chemotherapy and/or radiation therapy is delivered to reduce the size of the tumor and clinically control undetected metastatic disease. Such therapy is followed by a surgical procedure to remove residual masses; this is called neoadjuvant therapy. Because the sequence of treatment is critical to success and is different from the standard surgery-first approach, coordination among the surgical oncologist, radiation oncologist, and medical oncologist is crucial.

Surgery may be curative in a subset of patients with metastatic disease. Patients with lung metastases from osteosarcoma may be cured by resection of the lung lesions. In patients with colon cancer who have fewer than five liver metastases restricted to one lobe and no extrahepatic metastases, hepatic lobectomy may produce long-term disease-free survival in 25% of selected patients. Surgery can also be associated with systemic antitumor effects. In the setting of hormonally responsive tumors, oophorectomy and/or adrenalectomy may eliminate estrogen production, and orchiectomy may reduce androgen production; these hormones drive certain breast cancers and all prostate cancers, respectively, and both procedures can have useful effects on metastatic tumor growth. If resection of the primary lesion takes place in the presence of metastases, acceleration of metastatic growth has also been described in certain cases, perhaps based on the removal of a source of angiogenesis inhibitors and mass-related growth regulators in the tumor.
In selecting a surgeon or center for primary cancer treatment, consideration must be given to the volume of cancer surgeries undertaken by the site. Studies in a variety of cancers have shown that increased annual procedure volume appears to correlate with outcome. In addition, facilities with extensive support systems—e.g., for joint thoracic and abdominal surgical teams with cardiopulmonary bypass, if needed—may allow resection of certain tumors that would otherwise not be possible. Surgery is used in a number of ways for palliative or supportive care of the cancer patient, not related to the goal of curing the cancer. These include insertion and care of central venous catheters, control of pleural and pericardial effusions and ascites, caval interruption for recurrent pulmonary emboli, stabilization of cancer-weakened weight-bearing bones, and control of hemorrhage, among others. Surgical bypass of gastrointestinal, urinary tract, or biliary tree obstruction can alleviate symptoms and prolong survival. Surgical procedures may provide relief of otherwise intractable pain or reverse neurologic dysfunction (cord decompression). Splenectomy may relieve symptoms and reverse hypersplenism. Intrathecal or intrahepatic therapy relies on surgical placement of appropriate infusion portals. Surgery may correct other treatment-related toxicities such as adhesions or strictures. Surgical procedures are also valuable in rehabilitative efforts to restore health or function. Orthopedic procedures may be necessary to ensure proper ambulation. Breast reconstruction can make an enormous impact on the patient’s perception of successful therapy. Plastic and reconstructive surgery can correct the effects of disfiguring primary treatment. Surgery is also a tool valuable in the prevention of cancers in high-risk populations. Prophylactic mastectomy, colectomy, oophorectomy, and thyroidectomy are mainstays of prevention of genetic cancer syndromes. 
Resection of premalignant skin and uterine cervix lesions and colonic polyps prevents progression to frank malignancy.

RADIATION
Radiation Biology and Medicine
Therapeutic radiation is ionizing; it damages any tissue in its path. The selectivity of radiation for causing cancer cell death may be due to defects in a cancer cell's ability to repair sublethal DNA and other damage. Ionizing radiation causes breaks in DNA and generates free radicals from cell water that may damage cell membranes, proteins, and organelles. Radiation damage is augmented by oxygen; hypoxic cells are more resistant. Augmentation of oxygen presence is one basis for radiation sensitization. Sulfhydryl compounds interfere with free radical generation and may act as radiation protectors.

X-rays and gamma rays are the forms of ionizing radiation most commonly used to treat cancer. They are both electromagnetic, nonparticulate waves that cause the ejection of an orbital electron when absorbed. This orbital electron ejection is called ionization. X-rays are generated by linear accelerators; gamma rays are generated from decay of atomic nuclei in radioisotopes such as cobalt and radium. These waves behave biologically as packets of energy, called photons. Particulate ionizing radiation using protons has also become available. Most radiation-induced cell damage is due to the formation of hydroxyl radicals from tissue water: ionizing radiation ejects an electron from a water molecule (H2O → H2O+ + e−), and the resulting radical cation reacts with a second water molecule to yield the highly reactive hydroxyl radical (H2O+ + H2O → H3O+ + •OH).

Radiation is quantitated based on the amount of radiation absorbed by the tumor in the patient; it is not based on the amount of radiation generated by the machine. The International System (SI) unit for radiation absorbed is the Gray (Gy): 1 Gy refers to 1 J/kg of tissue; 1 Gy equals 100 centigrays (cGy) of absorbed dose. A historically used unit appearing in the oncology literature, the rad (radiation absorbed dose), is defined as 100 ergs of energy absorbed per gram of tissue and is equivalent to 1 cGy.
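The unit relations just defined (1 Gy = 1 J/kg = 100 cGy; 1 rad = 1 cGy) reduce to simple arithmetic, sketched below. The 45-Gy course in 180-cGy fractions used as the example is a typical fractionation described later in this chapter; the function names are illustrative, not from any library.

```python
# Conversions among the radiation dose units stated in the text:
# 1 Gy = 1 J/kg of tissue = 100 cGy; the historical rad equals 1 cGy.

def gy_to_cgy(gy: float) -> float:
    return gy * 100.0

def rad_to_cgy(rad: float) -> float:
    return rad * 1.0  # the rad (100 erg/g) is equivalent to 1 cGy

def fractions_needed(total_cgy: float, fraction_cgy: float) -> float:
    """Number of fractions needed to deliver a given total dose."""
    return total_cgy / fraction_cgy

# A typical curative course: 4500 cGy (45 Gy) in 180-cGy daily fractions
print(gy_to_cgy(45.0))              # 4500.0
print(fractions_needed(4500, 180))  # 25.0 (i.e., 5 weeks at 5 fractions/week)
```

Note that because dose is energy per mass, the same number of grays deposited in a small target volume involves far less total energy than in a large one; the unit deliberately abstracts that away.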
Radiation dosage is defined by the energy absorbed per mass of tissue. Radiation dose is measured by placing detectors at the body surface or based on radiating phantoms that resemble human form and substance, containing internal detectors. The features that make a particular cell more sensitive or more resistant to the biologic effects of radiation are not completely defined and critically involve DNA repair proteins that, in their physiologic role, protect against environmentally related DNA damage.

Localized Radiation Therapy
Radiation effect is influenced by three determinants: total absorbed dose, number of fractions, and time of treatment. A frequent error is to omit the number of fractions and the duration of treatment. This is analogous to saying that a runner completed a race in 20 s; without knowing how far he or she ran, the result is difficult to interpret. The time could be very good for a 200-m race or very poor for a 100-m race. Thus, a typical course of radiation therapy should be described as 4500 cGy delivered to a particular target (e.g., mediastinum) over 5 weeks in 180-cGy fractions. Most curative radiation treatment programs are delivered once a day, 5 days a week, in 150- to 200-cGy fractions.

A number of parameters influence the damage done to tissue (normal and tumor) by radiation. Hypoxic cells are relatively resistant. Nondividing cells are more resistant than dividing cells, and this is one rationale for delivering radiation in repeated fractions: to ultimately expose a larger number of tumor cells that have entered the division cycle. In addition to these biologic parameters, physical parameters of the radiation are also crucial. The energy of the radiation determines its ability to penetrate tissue. Low-energy orthovoltage beams (150–400 kV) scatter when they strike the body, much like light diffuses when it strikes particles in the air.
Such beams result in more damage to adjacent normal tissues and less radiation delivered to the tumor. Megavoltage radiation (>1 MeV) has very low lateral scatter; this produces a skin-sparing effect, more homogeneous distribution of the radiation energy, and greater deposit of the energy in the tumor, or target volume. The tissues that the beam passes through to get to the tumor are called the transit volume. The maximum dose in the target volume is often the cause of complications to tissues in the transit volume, and the minimum dose in the target volume influences the likelihood of tumor recurrence. Dose homogeneity in the target volume is the goal. Computational approaches and delivery of many beams to converge on a target lesion are the basis for “gamma knife” and related approaches to deliver high doses to small volumes of tumor, sparing normal tissue. Therapeutic radiation is delivered in three ways: (1) teletherapy, with focused beams of radiation generated at a distance and aimed at the tumor within the patient; (2) brachytherapy, with encapsulated sources of radiation implanted directly into or adjacent to tumor tissues; and (3) systemic therapy, with radionuclides administered, for example, intravenously but targeted by some means to a tumor site. Teletherapy with x-ray or gamma-ray photons is the most commonly used form of radiation therapy. Particulate forms of radiation are also used in certain circumstances, such as the use of proton beams. The difference between photons and protons relates to the volume in which the greatest delivery of energy occurs. Typically protons have a much narrower range of energy deposition, theoretically resulting in more precise delivery of radiation with improvement in the degree to which adjacent structures may be affected, in comparison to photons. Electron beams are a particulate form of radiation that, in contrast to photons and protons, have a very low tissue penetrance and are used to treat cutaneous tumors. 
Apart from sparing adjacent structures, particulate forms of radiation are in most applications not superior to x-rays or gamma rays in clinical studies reported thus far, but this is an active area of investigation. Certain drugs used in cancer treatment may also act as radiation sensitizers. For example, compounds that incorporate into DNA and alter its stereochemistry (e.g., halogenated pyrimidines, cisplatin) augment radiation effects at local sites, as does hydroxyurea, another DNA synthesis inhibitor. These are important adjuncts to the local treatment of certain tumors, such as squamous head and neck, uterine cervix, and rectal cancers. Toxicity of Radiation Therapy Although radiation therapy is most often administered to a local region, systemic effects, including fatigue, anorexia, nausea, and vomiting, may develop that are related in part to the volume of tissue irradiated, dose fractionation, radiation fields, and individual susceptibility. Injured tissues release cytokines that act systemically to produce these effects. Bone is among the most radio-resistant organs, with radiation effects being manifested mainly in children through premature fusion of the epiphyseal growth plate. By contrast, the male testis, female ovary, and bone marrow are the most sensitive organs. Any bone marrow in a radiation field will be eradicated by therapeutic irradiation. Organs with less need for cell renewal, such as heart, skeletal muscle, and nerves, are more resistant to radiation effects. In radiation-resistant organs, the vascular endothelium is the most sensitive component. Organs with more self-renewal as a part of normal homeostasis, such as the hematopoietic system and mucosal lining of the intestinal tract, are more sensitive. Acute toxicities include mucositis, skin erythema (ulceration in severe cases), and bone marrow toxicity. Often these can be alleviated by interruption of treatment. Chronic toxicities are more serious. 
Radiation of the head and neck region often produces thyroid failure. Cataracts and retinal damage can lead to blindness. Salivary glands stop making saliva, which leads to dental caries and poor dentition. Taste and smell can be affected. Mediastinal irradiation leads to a threefold increased risk of fatal myocardial infarction. Other late vascular effects include chronic constrictive pericarditis, lung fibrosis, viscus stricture, spinal cord transection, and radiation enteritis. A serious late toxicity is the development of second solid tumors in or adjacent to the radiation fields. Such tumors can develop in any organ or tissue and occur at a rate of about 1% per year beginning in the second decade after treatment. Organs vary in susceptibility to radiation carcinogenesis. A woman who receives mantle field radiation therapy for Hodgkin's disease at age 25 years has a 30% risk of developing breast cancer by age 55 years. This is comparable in magnitude to genetic breast cancer syndromes. Women treated after age 30 years have little or no increased risk of breast cancer. No data suggest that a threshold dose of therapeutic radiation exists below which the incidence of second cancers is decreased. High rates of second tumors occur in people who receive as little as 1000 cGy.

Endoscopy techniques may allow the placement of stents to unblock viscera by mechanical means, palliating, for example, gastrointestinal or biliary obstructions. Radiofrequency ablation (RFA) refers to the use of focused radiofrequency energy to induce thermal injury within a volume of tissue. RFA can be useful in the control of metastatic lesions, particularly in liver, that may threaten biliary drainage (as one example) and threaten quality and duration of useful life in patients with otherwise unresectable disease.
Cryosurgery uses extreme cold to sterilize lesions in certain sites, such as prostate and kidney, when at a very early stage, eliminating the need for modalities with more side effects such as surgery or radiation.

Some chemicals (porphyrins, phthalocyanines) are preferentially taken up by cancer cells by mechanisms not fully defined. When light, usually delivered by a laser, is shone on cells containing these compounds, free radicals are generated and the cells die. Hematoporphyrins and light (photodynamic therapy) are being used with increasing frequency to treat skin cancer; ovarian cancer; and cancers of the lung, colon, rectum, and esophagus. Palliation of recurrent locally advanced disease can sometimes be dramatic and last many months.

Infusion of chemotherapeutic or biologic agents, or of radiation-bearing delivery devices such as isotope-coated glass spheres, through catheters inserted into specific vascular sites supplying the liver or an extremity has been used in an effort to control disease limited to that site; in selected cases, prolonged control of truly localized disease has been possible.

The concept that systemically administered agents may have a useful effect on cancers was historically derived from three sets of observations. Paul Ehrlich in the nineteenth century observed that different dyes reacted with different cell and tissue components. He hypothesized the existence of compounds that would be "magic bullets" that might bind to tumors, owing to the affinity of the agent for the tumor. A second observation was the toxic effects of certain mustard gas derivatives on the bone marrow during World War I, leading to the idea that smaller doses of these agents might be used to treat tumors of marrow-derived cells. Finally, the observation that certain tumors from hormone-responsive tissues, e.g., breast tumors, could shrink after oophorectomy led to the idea that endogenous substances promoting the growth of a tumor might be antagonized.
Chemicals achieving each of these goals are actually or intellectually the forebears of the currently used cancer chemotherapy agents. Systemic cancer treatments are of four broad types. Conventional "cytotoxic" chemotherapy agents were historically derived from the empirical observation that these "small molecules" (generally with molecular mass <1500 Da) could cause major regression of experimental tumors growing in animals. These agents mainly target DNA structure or the segregation of DNA as chromosomes in mitosis. Targeted agents refer to small molecules or "biologics" (generally macromolecules such as antibodies or cytokines) designed and developed to interact with a defined molecular target important in maintaining the malignant state or expressed by the tumor cells. As described in Chap. 102e, successful tumors have activated biochemical pathways that lead to uncontrolled proliferation through the action of, e.g., oncogene products, loss of cell cycle inhibitors, or loss of cell death regulation, and have acquired the capacity to replicate chromosomes indefinitely, invade, metastasize, and evade the immune system. Targeted therapies seek to capitalize on the biology behind the aberrant cellular behavior as a basis for therapeutic effects. Hormonal therapies (the first form of targeted therapy) capitalize on the biochemical pathways underlying estrogen and androgen function and action as a therapeutic basis for approaching patients with tumors of breast, prostate, uterus, and ovarian origin. Biologic therapies are often macromolecules that have a particular target (e.g., antigrowth factor or cytokine antibodies) or may have the capacity to regulate growth of tumor cells or induce a host immune response to kill tumor cells. Thus, biologic therapies include not only antibodies but also cytokines and gene therapies.
CANCER CHEMOTHERAPY Principles The usefulness of any drug is governed by the extent to which a given dose causes a useful result (therapeutic effect; in the case of anticancer agents, toxicity to tumor cells) as opposed to a toxic effect to the host. The therapeutic index is the degree of separation between toxic and therapeutic doses. Really useful drugs have large therapeutic indices, and this usually occurs when the drug target is expressed in the disease-causing compartment as opposed to the normal compartment. Classically, selective toxicity of an agent for a tissue or cell type is governed by the differential expression of a drug’s target in the “sensitive” cell type or by differential drug accumulation into or elimination from compartments where greater or lesser toxicity is experienced, respectively. Currently used chemotherapeutic agents have the unfortunate property that their targets are present in both normal and tumor tissues. Therefore, they have relatively narrow therapeutic indices. Figure 103e-2 illustrates steps in cancer drug development. Following demonstration of antitumor activity in animal models, potentially useful anticancer agents are further evaluated to define an optimal schedule of administration and arrive at a drug formulation designed for a given route of administration and schedule. Safety testing in two species on an analogous schedule of administration defines the starting dose for a phase 1 trial in humans, usually but not always in patients with cancer who have exhausted “standard” (already approved) treatments. The initial dose is usually one-sixth to one-tenth of the dose just causing easily reversible toxicity in the more sensitive animal species. Escalating doses of the drug are then given during the human phase 1 trial until reversible toxicity is observed. 
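The conventional starting-dose arithmetic described above (one-sixth to one-tenth of the dose just causing easily reversible toxicity in the more sensitive safety-testing species) can be made concrete; this is an illustrative sketch, and the function name and interface are mine, not from the text:

```python
def phase1_starting_dose(toxic_dose_species_a, toxic_dose_species_b, fraction=0.1):
    """Illustrative phase 1 starting-dose rule: a fraction (conventionally
    one-sixth to one-tenth; default one-tenth here) of the dose just causing
    easily reversible toxicity in the MORE SENSITIVE of the two species
    used for safety testing. Doses in any consistent unit (e.g., mg/m2)."""
    more_sensitive_dose = min(toxic_dose_species_a, toxic_dose_species_b)
    return fraction * more_sensitive_dose
```

Escalation then proceeds stepwise from this dose during the phase 1 trial until reversible toxicity is observed.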
Dose-limiting toxicity (DLT) defines a dose that conveys greater toxicity than would be acceptable in routine practice, allowing definition of a lower maximum-tolerated dose (MTD). The occurrence of toxicity is, if possible, correlated with plasma drug concentrations. The MTD or a dose just lower than the MTD is usually the dose suitable for phase 2 trials, where a fixed dose is administered to a relatively homogeneous set of patients with a particular tumor type in an effort to define whether the drug causes regression of tumors. In a phase 3 trial, evidence of improved overall survival or improvement in the time to progression of disease on the part of the new drug is sought in comparison to an appropriate control population, which is usually receiving an acceptable “standard of care” approach. A favorable outcome of a phase 3 trial is the basis for application to a regulatory agency for approval of the new agent for commercial marketing as safe and possessing a measure of clinical effectiveness. Response, defined as tumor shrinkage, is the most immediate indicator of drug effect. To be clinically valuable, responses must translate into clinical benefit. This is conventionally established by a beneficial effect on overall survival, or at least an increased time to further progression of disease. Karnofsky was among the first to champion the evaluation of a chemotherapeutic agent’s benefit by carefully quantitating its effect on tumor size and using these measurements to objectively decide the basis for further treatment of a particular patient or further clinical evaluation of a drug’s potential. 
A partial response (PR) is defined conventionally as a decrease by at least 50% in a tumor's bidimensional area; a complete response (CR) connotes disappearance of all tumor; progression of disease signifies an increase in size of existing lesions by >25% from baseline or best response or the development of new lesions; and stable disease fits into none of the above categories. Newer evaluation systems, such as Response Evaluation Criteria in Solid Tumors (RECIST), use unidimensional measurement, but the intent is similar in rigorously defining evidence for the activity of the agent in assessing its value to the patient. An active chemotherapy agent conventionally has PR rates of at least 20–25% with reversible non-life-threatening side effects, and it may then be suitable for study in phase 3 trials to assess efficacy in comparison to standard or no therapy. Active efforts are being made to quantitate effects of anticancer agents on quality of life. Cancer drug clinical trials conventionally use a toxicity grading scale where grade 1 toxicities do not require treatment, grade 2 toxicities may require symptomatic treatment but are not life-threatening, grade 3 toxicities are potentially life-threatening if untreated, grade 4 toxicities are actually life-threatening, and grade 5 toxicities are those that result in the patient's death.

Chapter 103e Principles of Cancer Treatment

FIGURE 103e-2 Steps in cancer drug discovery and development. Preclinical activity in animal models of cancers (top; e.g., mouse or rat) may be used as evidence to support the entry of the drug candidate into phase 1 trials in humans to define a correct dose and observe any clinical antitumor effect that may occur. The drug may then be advanced to phase 2 trials directed against specific cancer types, with rigorous quantitation of antitumor effects (middle). Phase 3 trials then may reveal activity superior to standard or no treatment (bottom).
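The response categories quoted in the text (PR, CR, progression, stable disease) are mechanical enough to express as a small rule-based classifier. This sketch uses the classic bidimensional-area definitions given above; the function name and inputs are illustrative, and real trials apply the formal RECIST definitions:

```python
def classify_response(baseline_area, best_area, current_area, new_lesions=False):
    """Classify tumor response per the classic bidimensional-area rules:
    PR = >=50% decrease from baseline; CR = disappearance of all tumor;
    PD = >25% increase over baseline or best response, or new lesions;
    SD = none of the above. Areas are summed products of perpendicular
    lesion diameters (illustrative units)."""
    if new_lesions:
        return "PD"
    if current_area == 0:
        return "CR"
    # progression is judged against the smaller of baseline and best response
    reference = min(baseline_area, best_area)
    if current_area > 1.25 * reference:
        return "PD"
    if current_area <= 0.5 * baseline_area:
        return "PR"
    return "SD"
```

Note that a tumor that shrank to a best response of 4 units and regrew to 6 would be scored PD even though it remains smaller than baseline, since progression is measured from best response.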
Development of targeted agents may proceed quite differently. While phase 1–3 trials are still conducted, molecular analysis of human tumors may allow the precise definition of target expression in a patient's tumor that is necessary for or relevant to the drug's action. This information might then allow selection of patients expressing the drug target for participation in all trial phases. These patients may then have a greater chance of developing a useful response to the drug by virtue of expressing the target in the tumor. Clinical trials may be designed to incorporate an assessment of the behavior of the target in relation to the drug (pharmacodynamic studies). Ideally, the plasma concentration that affects the drug target is known, so escalation to the MTD may not be necessary. Rather, correlating host toxicity with achievement of an "optimal biologic dose" becomes a more relevant endpoint for phase 1 and early phase 2 trials of targeted agents. Useful cancer drug treatment strategies using conventional chemotherapy agents, targeted agents, hormonal treatments, or biologics have one of two valuable outcomes. They can induce cancer cell death, resulting in tumor shrinkage with corresponding improvement in patient survival, or an increase in the time until the disease progresses. Alternatively, they can induce cancer cell differentiation or dormancy, with loss of tumor cell replicative potential and reacquisition of phenotypic properties resembling normal cells. A block in normal cellular differentiation may be a key feature in the pathogenesis of certain leukemias. Cell death is a closely regulated process. Necrosis refers to cell death induced, for example, by physical damage, with the hallmarks of cell swelling and membrane disruption. Apoptosis, or programmed cell death, refers to a highly ordered process whereby cells respond to defined stimuli by dying, and it recapitulates the necessary cell death observed during the ontogeny of the organism.
Cancer chemotherapeutic agents can cause both necrosis and apoptosis. Apoptosis is characterized by chromatin condensation (giving rise to “apoptotic bodies”), cell shrinkage, and, in living animals, phagocytosis by surrounding stromal cells without evidence of inflammation. This process is regulated either by signal transduction systems that promote a cell’s demise after a certain level of insult is achieved or in response to specific cell-surface receptors that mediate physiologic cell death responses, such as occurs in the developing organism or in the normal function of immune cells. Influencing apoptosis by manipulation of signal transduction pathways has emerged as a basis for understanding the actions of drugs and designing new strategies to improve their use. Autophagy is a cellular response to injury where the cell does not initially die but catabolizes itself in a way that can lead to loss of replicative potential. A general view of how cancer treatments work is that the interaction of a chemotherapeutic drug with its target induces a “cascade” of further signaling steps. These signals ultimately lead to cell death by triggering an “execution phase” where proteases, nucleases, and endogenous regulators of the cell death pathway are activated (Fig. 103e-3). Targeted agents differ from chemotherapy agents in that they do not indiscriminately cause macromolecular lesions but regulate the action of particular pathways. For example, the p210bcr-abl fusion protein tyrosine kinase drives chronic myeloid leukemia (CML), and HER2/neu stimulates the proliferation of certain breast cancers. The tumor has been described as “addicted” to the function of these molecules in the sense that without the pathway’s continued action, the tumor cell cannot survive. 
In this way, targeted agents directed at p210bcr-abl or HER2/neu may alter the "threshold" that tumors driven by these molecules have for undergoing apoptosis, without actually creating any molecular lesions such as direct DNA strand breakage or altered membrane function. While apoptotic mechanisms are important in regulating cellular proliferation and the behavior of tumor cells in vitro, in vivo it is unclear whether all of the actions of chemotherapeutic agents to cause cell death can be attributed to apoptotic mechanisms. However, changes in molecules that regulate apoptosis are correlated with clinical outcomes (e.g., bcl2 overexpression in certain lymphomas conveys a poor prognosis; proapoptotic bax expression is associated with a better outcome after chemotherapy for ovarian carcinoma). A better understanding of the relationship of cell death and cell survival mechanisms is needed.

Chemotherapy agents may be used for the treatment of active, clinically apparent cancer. The goal of such treatment in some cases is cure of the cancer, that is, elimination of all clinical and pathologic evidence of cancer and return of the patient to an expected survival no different from that of the general population. Table 103e-3, A lists those tumors considered curable by conventionally available chemotherapeutic agents when used to address disseminated or metastatic cancers. If a tumor is localized to a single site, serious consideration of surgery or primary radiation therapy should be given, because these treatment modalities may be curative as local treatments. Chemotherapy may then be used after the failure of these modalities to eradicate a local tumor or as part of multimodality approaches to offer primary treatment to a clinically localized tumor. In this event, it can allow organ preservation when given with radiation, as in the larynx or other upper airway sites, or sensitize tumors to radiation when given, e.g., to patients concurrently receiving radiation for lung or cervix cancer (Table 103e-3, B). Chemotherapy can be administered as an adjuvant, i.e., in addition to surgery or radiation (Table 103e-3, C), even after all clinically apparent disease has been removed. This use of chemotherapy has curative potential in breast and colorectal neoplasms, as it attempts to eliminate clinically unapparent tumor that may have already disseminated. As noted above, small tumors frequently have high growth fractions and therefore may be intrinsically more susceptible to the action of antiproliferative agents. Neoadjuvant chemotherapy refers to administration of chemotherapy prior to any surgery or radiation to a local tumor in an effort to enhance the effect of the local treatment.

Chemotherapy is routinely used in "conventional" dose regimens. In general, such regimens produce side effects primarily consisting of transient myelosuppression with or without gastrointestinal toxicity (usually nausea), which are readily managed. "High-dose" chemotherapy regimens are predicated on the observation that the dose-response curve for many anticancer agents is rather steep, so that increased doses can produce increased therapeutic effect, although at the cost of potentially life-threatening complications that require intensive support, usually in the form of hematopoietic stem cell support from the patient (autologous) or from donors matched for histocompatibility loci (allogeneic), or pharmacologic "rescue" strategies to repair the effect of the high-dose chemotherapy on normal tissues. High-dose regimens have definite curative potential in defined clinical settings (Table 103e-3, D).

If cure is not possible, chemotherapy may be undertaken with the goal of palliating some aspect of the tumor's effect on the host. In this usage, value is perceived by the demonstration of improved symptom relief, progression-free survival, or overall survival at a certain time from the inception of treatment in the treated population, compared to a relevant control population established as the result of a clinical research protocol or other organized comparative study. Such clinical research protocols are the basis for U.S. Food and Drug Administration (FDA) approval of a particular cancer treatment as safe and effective and are the benchmark for an evidence-based approach to the use of chemotherapeutic agents. Common tumors that may be meaningfully addressed by chemotherapy with palliative intent are listed in Table 103e-3, E.

FIGURE 103e-3 Integration of cell death responses. Cell death through an apoptotic mechanism requires active participation of the cell. In response to interruption of growth factor (GF) or propagation of certain cytokine death signals (e.g., tumor necrosis factor receptor [TNF-R]), there is activation of "upstream" cysteine aspartyl proteases (caspases), which then directly digest cytoplasmic and nuclear proteins, resulting in activation of "downstream" caspases; these cause activation of nucleases, resulting in the characteristic DNA fragmentation that is a hallmark of apoptosis. Chemotherapy agents that create lesions in DNA or alter mitotic spindle function seem to activate aspects of this process by damage ultimately conveyed to the mitochondria, perhaps by activating the transcription of genes whose products can produce or modulate the toxicity of free radicals. In addition, membrane damage with activation of sphingomyelinases results in the production of ceramides that can have a direct action at mitochondria. The antiapoptotic protein bcl2 attenuates mitochondrial toxicity, while proapoptotic gene products such as bax antagonize the action of bcl2. Damaged mitochondria release cytochrome C and apoptosis-activating factor (APAF), which can directly activate caspase 9, resulting in propagation of a direct signal to other downstream caspases through protease activation. Apoptosis-inducing factor (AIF) is also released from the mitochondrion and can then translocate to the nucleus, bind to DNA, and generate free radicals to further damage DNA. An additional proapoptotic stimulus is the bad protein, which can heterodimerize with bcl2 gene family members to antagonize apoptosis. Importantly, though, bad protein function can be retarded by its sequestration as phospho-bad through the 14-3-3 adapter proteins. The phosphorylation of bad is mediated by the action of the AKT kinase in a way that defines how growth factors that activate this kinase can retard apoptosis and promote cell survival.

Usually, tumor-related symptoms manifest as pain, weight loss, or some local symptom related to the tumor's effect on normal structures. Patients treated with palliative intent should be aware of their diagnosis and the limitations of the proposed treatments, have access to supportive care, and have suitable "performance status," according to assessment algorithms such as the one developed by Karnofsky (see Table 99-4) or by the Eastern Cooperative Oncology Group (ECOG) (see Table 99-5). ECOG performance status 0 (PS0) patients are without symptoms; PS1 patients are ambulatory but restricted in strenuous physical activity; PS2 patients are ambulatory but unable to work and are up and about 50% or more of the time; PS3 patients are capable of limited self-care and are up <50% of the time; and PS4 patients are totally confined to bed or chair and incapable of self-care. Only PS0, PS1, and PS2 patients are generally considered suitable for palliative (noncurative) treatment. If there is curative potential, even poor–performance status patients may be treated, but their prognosis is usually inferior to that of good–performance status patients treated with similar regimens.

TABLE 103e-3 Curability of Cancers with Chemotherapy
A. Advanced cancers with possible cure: acute lymphoid and acute myeloid leukemia (pediatric/adult); Hodgkin's disease (pediatric/adult); embryonal carcinoma
B. Advanced cancers possibly cured by chemotherapy and radiation: carcinoma of the uterine cervix (stage III); small-cell lung carcinoma
C. Cancers possibly cured with chemotherapy as adjuvant to surgery: breast carcinoma; colorectal carcinoma (rectum also receives radiation therapy); osteogenic sarcoma; soft tissue sarcoma
D. Cancers possibly cured with "high-dose" chemotherapy with stem cell support: relapsed leukemias, lymphoid and myeloid; relapsed lymphomas, Hodgkin's and non-Hodgkin's
E. Cancers responsive with useful palliation, but not cure, by chemotherapy: islet cell neoplasms
F. Tumors poorly responsive in advanced stages to chemotherapy: biliary tract neoplasms; thyroid carcinoma; carcinoma of the vulva; prostate carcinoma; melanoma (subsets); hepatocellular carcinoma; salivary gland cancer

An important perspective the primary care provider may bring to patients and their families facing incurable cancer is that, given the limited value of chemotherapeutic approaches at some point in the natural history of most metastatic cancers, palliative care or hospice-based approaches, with meticulous and ongoing attention to symptom relief and with family, psychological, and spiritual support, should receive prominent attention as a valuable therapeutic plan (Chaps. 10 and 99). Optimizing the quality of life rather than attempting to extend it becomes a valued intervention.
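The ECOG performance-status scale and the conventional eligibility rule quoted in the text (only PS0–PS2 patients are generally suitable for palliative chemotherapy) can be captured in a few lines; this is an illustrative encoding, and the identifiers are mine:

```python
# ECOG performance status descriptions, paraphrased from the text
ECOG_PS = {
    0: "without symptoms",
    1: "ambulatory but restricted in strenuous physical activity",
    2: "ambulatory, unable to work; up and about >=50% of the time",
    3: "limited self-care; up <50% of the time",
    4: "confined to bed or chair; incapable of self-care",
}

def suitable_for_palliative_chemo(ps):
    """Conventional rule from the text: only PS0, PS1, and PS2 patients
    are generally considered suitable for palliative (noncurative) treatment."""
    if ps not in ECOG_PS:
        raise ValueError("ECOG performance status must be 0-4")
    return ps <= 2
```

As the text notes, curative-intent treatment may still be offered to poor–performance status patients, so a rule like this applies only to the palliative setting.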
Patients facing the impending progression of disease in a life-threatening way frequently choose to undertake toxic treatments of little to no potential value, and support provided by the primary caregiver in accessing palliative and hospice-based options, in contrast to receiving toxic and ineffective regimens, can be critical in providing a basis for patients to make sensible choices.

Cytotoxic Chemotherapy Agents Table 103e-4 lists commonly used cytotoxic cancer chemotherapy agents and pertinent clinical aspects of their use, with particular reference to adverse effects that might be encountered by the generalist in the care of patients. The drugs listed may be usefully grouped into two general categories: those affecting DNA and those affecting microtubules.

DIRECT DNA-INTERACTIVE AGENTS DNA replication occurs during the synthesis or S-phase of the cell cycle, with chromosome segregation of the replicated DNA occurring in the M, or mitosis, phase. The G1 and G2 "gap phases" precede S and M, respectively. Historically, chemotherapeutic agents have been divided into "phase-nonspecific" agents, which can act in any phase of the cell cycle, and "phase-specific" agents, which require the cell to be at a particular cell cycle phase to cause the greatest effect. Once the agent has acted, cells may progress to "checkpoints" in the cell cycle where the drug-related damage may be assessed and either repaired or allowed to initiate apoptosis. An important function of certain tumor-suppressor genes such as p53 may be to modulate checkpoint function. Alkylating agents as a class are cell cycle phase–nonspecific agents. They break down, either spontaneously or after normal organ or tumor cell metabolism, to reactive intermediates that covalently modify bases in DNA. This leads to cross-linkage of DNA strands or the appearance of breaks in DNA as a result of repair efforts.
“Broken” or cross-linked DNA is intrinsically unable to complete normal replication or cell division; in addition, it is a potent activator of cell cycle checkpoints and further activates cell-signaling pathways that can precipitate apoptosis. As a class, alkylating agents share similar toxicities: myelosuppression, alopecia, gonadal dysfunction, mucositis, and pulmonary fibrosis. They differ greatly in a spectrum of normal organ toxicities. As a class, they share the capacity to cause “second” neoplasms, particularly leukemia, many years after use, particularly when used in low doses for protracted periods. Cyclophosphamide is inactive unless metabolized by the liver to 4-hydroxy-cyclophosphamide, which decomposes into an alkylating species, as well as to chloroacetaldehyde and acrolein. The latter causes chemical cystitis; therefore, excellent hydration must be maintained while using cyclophosphamide. If severe, the cystitis may be prevented from progressing or prevented altogether (if expected from the dose of cyclophosphamide to be used) by mesna (2-mercaptoethanesulfonate). Liver disease impairs cyclophosphamide activation. Sporadic interstitial pneumonitis leading to pulmonary fibrosis can accompany the use of cyclophosphamide, and high doses used in conditioning regimens for bone marrow transplant can cause cardiac dysfunction. Ifosfamide is a cyclophosphamide analogue also activated in the liver, but more slowly, and it requires coadministration of mesna to prevent bladder injury. Central nervous system (CNS) effects, including somnolence, confusion, and psychosis, can follow ifosfamide use; the incidence appears related to low body surface area or decreased creatinine clearance. Several alkylating agents are less commonly used. Nitrogen mustard (mechlorethamine) is the prototypic agent of this class, decomposing rapidly in aqueous solution to potentially yield a bifunctional carbonium ion. 
It must be administered shortly after preparation into a rapidly flowing intravenous line. It is a powerful vesicant, and extravasation may be symptomatically ameliorated by infiltration of the affected site with 1/6 M thiosulfate. Even without infiltration, aseptic thrombophlebitis is frequent. It can be used topically as a dilute solution or ointment in cutaneous lymphomas, although with a notable incidence of hypersensitivity reactions. It causes moderate nausea after intravenous administration. Bendamustine is a nitrogen mustard derivative with evidence of activity in chronic lymphocytic leukemia and certain lymphomas. Chlorambucil causes predictable myelosuppression, azoospermia, nausea, and pulmonary side effects. Busulfan can cause profound myelosuppression, alopecia, and pulmonary toxicity but is relatively "lymphocyte sparing." Its routine use in treatment of CML has been curtailed in favor of imatinib (Gleevec) or dasatinib, but it is still used in transplant preparation regimens. Melphalan shows variable oral bioavailability and undergoes extensive binding to albumin and α1-acidic glycoprotein. Mucositis is more prominent with melphalan; however, it has prominent activity in multiple myeloma. Nitrosoureas break down to carbamylating species that not only cause a distinct pattern of DNA base pair–directed toxicity but also can covalently modify proteins. They share the feature of causing relatively delayed bone marrow toxicity, which can be cumulative and long-lasting. Procarbazine is metabolized in the liver and possibly in tumor cells to yield a variety of free radical and alkylating species.
TABLE 103e-4 Commonly Used Cytotoxic Chemotherapy Agents (Drug: Toxicity; Interactions, Issues)
Cyclophosphamide: Liver metabolism required to activate to phosphoramide mustard + acrolein; mesna protects against "high-dose" bladder damage.
Ifosfamide: Analogue of cyclophosphamide; must use mesna; greater activity vs testicular neoplasms and sarcomas.
Procarbazine: Liver and tissue metabolism required; disulfiram-like effect with ethanol; acts as MAOI; HBP after tyrosinase-rich foods.
Cisplatin: Maintain high urine flow; osmotic diuresis; monitor intake/output, K+, Mg2+; emetogenic—prophylaxis needed; full dose if CrCl >60 mL/min and patient tolerates fluid push.
Carboplatin: Reduce dose according to CrCl to an AUC of 5–7 mg/mL per min [AUC = dose/(CrCl + 25)].
Oxaliplatin: Acute reversible neurotoxicity; chronic sensory neurotoxicity cumulative with dose; reversible laryngopharyngeal spasm.
Doxorubicin: Marrow, mucositis, alopecia; cardiovascular toxicity acute/chronic; vesicant; radiation recall; interacts with heparin; acetaminophen and BCNU increase liver toxicity.
Daunorubicin/idarubicin: Marrow; cardiac toxicity (less than doxorubicin); less alopecia and nausea than doxorubicin; radiation recall.
Mitoxantrone: Marrow; cardiac toxicity (less than doxorubicin); vesicant (mild); blue urine, sclerae, nails.
Etoposide: Marrow (WBCs > platelets), alopecia, hypotension, hypersensitivity (rapid IV), nausea, mucositis (high dose); hepatic metabolism—renal 30%; reduce doses with renal failure; schedule-dependent (5-day schedule better than 1-day); late leukemogenic.
Irinotecan: Marrow, mucositis, nausea, mild alopecia; diarrhea "early onset" with cramping, flushing, and vomiting, or "late onset" after several doses; prodrug requires enzymatic clearance to active drug "SN-38"; early diarrhea likely due to biliary excretion; for late diarrhea, use "high-dose" loperamide (2 mg q2–4 h).
Pentostatin: Inhibits adenosine deaminase; reduce dose for renal failure.
6-Mercaptopurine: Variable bioavailability; metabolized by xanthine oxidase—decrease dose with allopurinol; increased toxicity with thiopurine methyltransferase deficiency.
Fludarabine: Metabolized to F-ara, converted to F-ara-ATP in cells by deoxycytidine kinase; dose reduction with renal failure.
Methotrexate: Toxicity lessened by "rescue" with leucovorin; excreted in urine—decrease dose in renal failure; NSAIDs increase renal toxicity.
5-Fluorouracil: Toxicity enhanced by leucovorin through increased "ternary complex" with thymidylate synthase; dihydropyrimidine dehydrogenase deficiency increases toxicity; metabolism in tissue.
Capecitabine: Prodrug of 5FU acting through intratumoral metabolism.
Asparaginase: Inhibits protein synthesis, with indirect inhibition of DNA synthesis by decreased histone synthesis; decreased clotting factors, glucose, and albumin; hypersensitivity; CNS; pancreatitis; hepatic toxicity.
Vincristine: Vesicant; marrow; neurologic; GI: ileus/constipation, bladder hypotonicity; hepatic clearance—dose reduction for bilirubin >1.5 mg/dL; prophylactic bowel regimen.
Taxanes (paclitaxel, docetaxel): Hepatic clearance—dose reduction as with vincas; premedicate with steroids and H1 and H2 blockers.
Common alkylator toxicities: alopecia, pulmonary, infertility, plus teratogenesis.
Abbreviations: ALL, acute lymphocytic leukemia; AUC, area under the curve; CHF, congestive heart failure; CNS, central nervous system; CrCl, creatinine clearance; CV, cardiovascular; GI, gastrointestinal; HBP, high blood pressure; MAOI, monoamine oxidase inhibitor; NSAID, nonsteroidal anti-inflammatory drug; SIADH, syndrome of inappropriate antidiuretic hormone secretion.

In addition to myelosuppression, procarbazine causes hypnotic and other CNS effects, including vivid nightmares. It can cause a disulfiram-like syndrome on ingestion of ethanol. Altretamine (formerly hexamethylmelamine) and thiotepa can chemically give rise to alkylating species, although the nature of the DNA damage has not been well characterized in either case. Dacarbazine (DTIC) is activated in the liver to yield the highly reactive methyl diazonium cation. It causes only modest myelosuppression 21–25 days after a dose but causes prominent nausea on day 1. Temozolomide is structurally related to dacarbazine but was designed to be activated by nonenzymatic hydrolysis in tumors and is bioavailable orally. Cisplatin was discovered fortuitously by observing that bacteria present in electrolysis solutions with platinum electrodes could not divide. Only the cis diamine configuration is active as an antitumor agent. It is hypothesized that in the intracellular environment, a chloride is lost from each position, being replaced by a water molecule. The resulting positively charged species is an efficient bifunctional interactor with DNA, forming Pt-based cross-links. Cisplatin requires administration with adequate hydration, including forced diuresis with mannitol to prevent kidney damage; even with the use of hydration, gradual decrease in kidney function is common, along with noteworthy anemia.
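The carboplatin AUC relation quoted in Table 103e-4, AUC = dose/(CrCl + 25), rearranges to the widely used Calvert dosing formula; a minimal sketch (identifiers are mine):

```python
def carboplatin_dose_mg(target_auc, crcl_ml_min):
    """Calvert formula: dose (mg) = target AUC (mg/mL per min) x (GFR + 25),
    with creatinine clearance commonly used as the GFR estimate.
    Per the table, the target AUC is typically 5-7 mg/mL per min."""
    return target_auc * (crcl_ml_min + 25)
```

For example, a target AUC of 5 mg/mL per min in a patient with a creatinine clearance of 75 mL/min gives a dose of 500 mg; note that, unlike most cytotoxics, the dose is not calculated from body surface area.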
Hypomagnesemia frequently attends cisplatin use and can lead to hypocalcemia and tetany. Other common toxicities include neurotoxicity with stocking-and-glove sensorimotor neuropathy. Hearing loss occurs in 50% of patients treated with conventional doses. Cisplatin is intensely emetogenic, requiring prophylactic antiemetics. Myelosuppression is less evident than with other alkylating agents. Chronic vascular toxicity (Raynaud's phenomenon, coronary artery disease) is a more unusual toxicity. Carboplatin displays less nephro-, oto-, and neurotoxicity. However, myelosuppression is more frequent, and because the drug is exclusively cleared through the kidney, adjustment of dose for creatinine clearance must be accomplished through use of various dosing nomograms. Oxaliplatin is a platinum analogue with noteworthy activity in colon cancers refractory to other treatments. It is prominently neurotoxic.

ANTITUMOR ANTIBIOTICS AND TOPOISOMERASE POISONS Antitumor antibiotics are substances produced by bacteria that in nature appear to provide a chemical defense against other hostile microorganisms. As a class, they bind to DNA directly and can frequently undergo electron transfer reactions to generate free radicals in close proximity to DNA, leading to DNA damage in the form of single-strand breaks or cross-links. Topoisomerase poisons include natural products or semisynthetic species derived ultimately from plants, and they modify enzymes that regulate the capacity of DNA to unwind to allow normal replication or transcription. These include topoisomerase I, which creates single-strand breaks that then rejoin following the passage of the other DNA strand through the break. Topoisomerase II creates double-strand breaks through which another segment of DNA duplex passes before rejoining.
DNA damage from these agents can occur in any cell cycle phase, but cells with p53 and Rb pathway lesions tend to arrest in S-phase or G2 of the cell cycle as a result of the defective checkpoint mechanisms in cancer cells. Owing to the role of topoisomerase I in the progression of the replication fork, topoisomerase I poisons cause lethality if the topoisomerase I–induced lesions are made in S-phase. Doxorubicin can intercalate into DNA, thereby altering DNA structure, replication, and topoisomerase II function. It can also undergo reduction reactions by accepting electrons into its quinone ring system, with the capacity to form reactive oxygen radicals after reoxidation. It causes predictable myelosuppression, alopecia, nausea, and mucositis. In addition, it causes acute cardiotoxicity in the form of atrial and ventricular dysrhythmias, but these are rarely of clinical significance. In contrast, cumulative doses >550 mg/m2 are associated with a 10% incidence of chronic cardiomyopathy. The incidence of cardiomyopathy appears to be related to schedule (peak serum concentration), with low-dose, frequent treatment or continuous infusions better tolerated than intermittent higher-dose exposures. Cardiotoxicity has been related to iron-catalyzed oxidation and reduction of doxorubicin, and not to topoisomerase action. Because cardiotoxicity is related to peak plasma dose, lower doses and continuous infusions are less likely to cause heart damage. Doxorubicin's cardiotoxicity is increased when the drug is given together with trastuzumab (Herceptin), the anti-HER2/neu antibody. Radiation recall, or interaction with concomitantly administered radiation to cause local site complications, is frequent. The drug is a powerful vesicant, with necrosis of tissue apparent 4–7 days after an extravasation; therefore, it should be administered into a rapidly flowing intravenous line. Dexrazoxane is an antidote to doxorubicin-induced extravasation.
Doxorubicin is metabolized by the liver, so doses must be reduced by 50–75% in the presence of liver dysfunction. Daunorubicin is closely related to doxorubicin and was actually introduced first into leukemia treatment, where it remains part of curative regimens and has been shown preferable to doxorubicin owing to less mucositis and colonic damage. Idarubicin is also used in acute myeloid leukemia treatment and may have superior activity to daunorubicin. Encapsulation of daunorubicin into a liposomal formulation has attenuated cardiac toxicity while preserving antitumor activity in Kaposi’s sarcoma, other sarcomas, multiple myeloma, and ovarian cancer. Bleomycin refers to a mixture of glycopeptides that have the unique feature of forming complexes with Fe2+ while also bound to DNA. It remains an important component of curative regimens for Hodgkin’s disease and germ cell neoplasms. Oxidation of Fe2+ gives rise to superoxide and hydroxyl radicals. The drug causes little, if any, myelosuppression. The drug is cleared rapidly, but augmented skin and pulmonary toxicity in the presence of renal failure has led to the recommendation that doses be reduced by 50–75% in the face of a creatinine clearance <25 mL/min. Bleomycin is not a vesicant and can be administered intravenously, intramuscularly, or subcutaneously. Common side effects include fever and chills, facial flush, and Raynaud’s phenomenon. Hypertension can follow rapid intravenous administration, and the incidence of anaphylaxis with early preparations of the drug has led to the practice of administering a test dose of 0.5–1 unit before the rest of the dose. The most feared complication of bleomycin treatment is pulmonary fibrosis, which increases in incidence at >300 cumulative units administered and is minimally responsive to treatment (e.g., glucocorticoids).
The earliest indicator of an adverse effect is usually a decline in the carbon monoxide diffusing capacity (DLco) or coughing, although cessation of drug immediately upon documentation of a decrease in DLco may not prevent further decline in pulmonary function. Bleomycin is inactivated by a bleomycin hydrolase, whose concentration is diminished in skin and lung. Because bleomycin-dependent electron transport is dependent on O2, bleomycin toxicity may become apparent after transient exposure to a very high fraction of inspired oxygen (Fio2). Thus, during surgical procedures, patients with prior exposure to bleomycin should be maintained on the lowest Fio2 consistent with maintaining adequate tissue oxygenation. Mitoxantrone is a synthetic compound that was designed to recapitulate features of doxorubicin but with less cardiotoxicity. It is quantitatively less cardiotoxic (comparing the ratio of cardiotoxic to therapeutically effective doses) but is still associated with a 10% incidence of cardiotoxicity at cumulative doses of >150 mg/m2. It also causes alopecia. Cases of acute promyelocytic leukemia (APL) have arisen shortly after exposure of patients to mitoxantrone, particularly in the adjuvant treatment of breast cancer. Although chemotherapy-associated leukemia is generally of the acute myeloid type, APL arising in the setting of prior mitoxantrone treatment had the typical t(15;17) chromosome translocation associated with APL; the breakpoints of the translocation, however, appeared to be at topoisomerase II sites that would be preferred sites of mitoxantrone action, clearly linking the action of the drug to the generation of the leukemia. Etoposide was synthetically derived from the plant product podophyllotoxin; it binds directly to topoisomerase II and DNA in a reversible ternary complex. It stabilizes the covalent intermediate in the enzyme’s action where the enzyme is covalently linked to DNA.
This “alkali-labile” DNA bond was historically a first hint that an enzyme such as a topoisomerase might exist. The drug therefore causes a prominent G2 arrest, reflecting the action of a DNA damage checkpoint. Prominent clinical effects include myelosuppression, nausea, and transient hypotension related to the speed of administration of the agent. Etoposide is a mild vesicant but is relatively free from other large-organ toxicities. When given at high doses or very frequently, topoisomerase II inhibitors may cause acute leukemia associated with chromosome 11q23 abnormalities in up to 1% of exposed patients. Camptothecin was isolated from extracts of a Chinese tree and had notable antileukemia activity in preclinical mouse models. Early human clinical studies with the sodium salt of the hydrolyzed camptothecin lactone showed evidence of toxicity with little antitumor activity. Identification of topoisomerase I as the target of camptothecins and the need to preserve lactone structure allowed additional efforts to identify active members of this series. Topoisomerase I is responsible for unwinding the DNA strand by introducing single-strand breaks and allowing rotation of one strand about the other. In S-phase, topoisomerase I–induced breaks that are not promptly resealed lead to progression of the replication fork off the end of a DNA strand. The DNA damage is a potent signal for induction of apoptosis. Camptothecins promote the stabilization of the DNA linked to the enzyme in a so-called cleavable complex, analogous to the action of etoposide with topoisomerase II. Topotecan is a camptothecin derivative approved for use in gynecologic tumors and small-cell lung cancer. Toxicity is limited to myelosuppression and mucositis. CPT-11, or irinotecan, is a camptothecin with evidence of activity in colon carcinoma. In addition to myelosuppression, it causes a secretory diarrhea related to the toxicity of a metabolite called SN-38.
Levels of SN-38 are particularly high in the setting of Gilbert’s syndrome, characterized by defective glucuronyl transferase and indirect hyperbilirubinemia, a condition that affects about 10% of the white population in the United States. The diarrhea can be treated effectively with loperamide or octreotide.
INDIRECT MODULATORS OF NUCLEIC ACID FUNCTION: ANTIMETABOLITES
A broad definition of antimetabolites would include compounds with structural similarity to precursors of purines or pyrimidines, or compounds that interfere with purine or pyrimidine synthesis. Some antimetabolites can cause DNA damage indirectly, through misincorporation into DNA, abnormal timing or progression through DNA synthesis, or altered function of pyrimidine and purine biosynthetic enzymes. They tend to convey greatest toxicity to cells in S-phase, and the degree of toxicity increases with duration of exposure. Common toxic manifestations include stomatitis, diarrhea, and myelosuppression. Second malignancies are not associated with their use. Methotrexate inhibits dihydrofolate reductase, which regenerates reduced folates from the oxidized folates produced when thymidine monophosphate is formed from deoxyuridine monophosphate. Without reduced folates, cells die a “thymine-less” death. N5-Tetrahydrofolate or N5-formyltetrahydrofolate (leucovorin) can bypass this block and rescue cells from methotrexate, which is maintained in cells by polyglutamylation. The drug and other reduced folates are transported into cells by the folate carrier, and high concentrations of drug can bypass this carrier and allow diffusion of drug directly into cells. These properties have suggested the design of “high-dose” methotrexate regimens with leucovorin rescue of normal marrow and mucosa as part of curative approaches to osteosarcoma in the adjuvant setting and hematopoietic neoplasms of children and adults.
Methotrexate is cleared by the kidney via both glomerular filtration and tubular secretion, and toxicity is augmented by renal dysfunction and drugs such as salicylates, probenecid, and nonsteroidal anti-inflammatory agents that undergo tubular secretion. With normal renal function, 15 mg/m2 leucovorin will rescue 10−8 to 10−6 M methotrexate in three to four doses. However, with decreased creatinine clearance, doses of 50–100 mg/m2 are continued until methotrexate levels are <5 × 10−8 M. In addition to bone marrow suppression and mucosal irritation, methotrexate can cause renal failure itself at high doses owing to crystallization in renal tubules; therefore, high-dose regimens require alkalinization of urine with increased flow by hydration. Methotrexate can be sequestered in third-space collections and diffuse back into the general circulation, causing prolonged myelosuppression. Less frequent adverse effects include reversible increases in transaminases and hypersensitivity-like pulmonary syndrome. Chronic low-dose methotrexate can cause hepatic fibrosis. When administered to the intrathecal space, methotrexate can cause chemical arachnoiditis and CNS dysfunction. Pemetrexed is a novel folate-directed antimetabolite. It is “multitargeted” in that it inhibits the activity of several enzymes, including thymidylate synthetase, dihydrofolate reductase, and glycinamide ribonucleotide formyltransferase, thereby affecting the synthesis of both purine and pyrimidine nucleic acid precursors. To avoid significant toxicity to the normal tissues, patients receiving pemetrexed should also receive low-dose folate and vitamin B12 supplementation. Pemetrexed has notable activity against certain lung cancers and, in combination with cisplatin, also against mesotheliomas. Pralatrexate is an antifolate approved for use in T cell lymphoma that is very efficiently transported into cancer cells. 
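The leucovorin rescue thresholds above can be restated as a small decision sketch. This is purely an illustrative restatement of the numbers quoted in the text (15 mg/m2 for levels of 10−8 to 10−6 M with normal renal function; 50–100 mg/m2 continued until the level falls below 5 × 10−8 M otherwise), not clinical guidance; the function name and return strings are invented for the example:

```python
def leucovorin_rescue(mtx_level_molar: float, normal_renal_function: bool) -> str:
    """Illustrative restatement of the rescue thresholds from the text."""
    if normal_renal_function and mtx_level_molar <= 1e-6:
        # 15 mg/m2 rescues methotrexate levels of 10^-8 to 10^-6 M
        return "15 mg/m2 x 3-4 doses"
    if mtx_level_molar >= 5e-8:
        # with decreased creatinine clearance, higher doses are continued
        # until the methotrexate level is < 5 x 10^-8 M
        return "50-100 mg/m2, continue until level < 5e-8 M"
    return "below rescue threshold"
```

In practice, actual regimens follow institutional nomograms and serial methotrexate levels; the sketch only shows how the two thresholds in the text relate.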
5-Fluorouracil (5FU) represents an early example of “rational” drug design in that it originated from the observation that tumor cells incorporate radiolabeled uracil into DNA more efficiently than do normal cells, especially those of the gut. 5FU is metabolized in cells to 5′FdUMP, which inhibits thymidylate synthetase (TS). In addition, misincorporation can lead to single-strand breaks, and RNA can aberrantly incorporate FUMP. 5FU is metabolized by dihydropyrimidine dehydrogenase, and deficiency of this enzyme can lead to excessive toxicity from 5FU. Oral bioavailability is unreliably variable, but orally administered analogues of 5FU such as capecitabine have been developed that allow at least equivalent activity to many parenteral 5FU-based approaches. Intravenous administration of 5FU leads to bone marrow suppression after short infusions but to stomatitis after prolonged infusions. Leucovorin augments the activity of 5FU by promoting formation of the ternary covalent complex of 5FU, the reduced folate, and TS. Less frequent toxicities include CNS dysfunction, with prominent cerebellar signs, and endothelial toxicity manifested by thrombosis, including pulmonary embolus and myocardial infarction. Cytosine arabinoside (ara-C) is incorporated into DNA after formation of ara-CTP, resulting in S-phase–related toxicity. Continuous infusion schedules allow maximal efficiency, with uptake maximal at 5–7 μM. Ara-C can be administered intrathecally. Adverse effects include nausea, diarrhea, stomatitis, chemical conjunctivitis, and cerebellar ataxia. Gemcitabine is a cytosine derivative that, like ara-C, is incorporated into DNA after anabolism to the triphosphate, rendering DNA susceptible to breakage and repair synthesis; it differs from ara-C in that gemcitabine-induced lesions are very inefficiently removed. In contrast to ara-C, gemcitabine appears to have useful activity in a variety of solid tumors, with limited nonmyelosuppressive toxicities.
6-Thioguanine and 6-mercaptopurine (6MP) are used in the treatment of acute lymphoid leukemia. Although administered orally, they display variable bioavailability. 6MP is metabolized by xanthine oxidase and therefore requires dose reduction when used with allopurinol. 6MP is also metabolized by thiopurine methyltransferase; genetic deficiency of thiopurine methyltransferase results in excessive toxicity. Fludarabine phosphate is a prodrug of F-adenine arabinoside (F-ara-A), which in turn was designed to diminish the susceptibility of ara-A to adenosine deaminase. F-ara-A is incorporated into DNA and can cause delayed cytotoxicity even in cells with low growth fraction, including chronic lymphocytic leukemia and follicular B cell lymphoma. CNS and peripheral nerve dysfunction and T cell depletion leading to opportunistic infections can occur in addition to myelosuppression. 2-Chlorodeoxyadenosine is a similar compound with activity in hairy cell leukemia. 2-Deoxycoformycin inhibits adenosine deaminase, with resulting increase in dATP levels. This causes inhibition of ribonucleotide reductase as well as augmented susceptibility to apoptosis, particularly in T cells. Renal failure and CNS dysfunction are notable toxicities in addition to immunosuppression. Hydroxyurea inhibits ribonucleotide reductase, resulting in S-phase block. It is orally bioavailable and useful for the acute management of myeloproliferative states. Asparaginase is a bacterial enzyme that causes breakdown of extracellular asparagine required for protein synthesis in certain leukemic cells. This effectively stops tumor cell DNA synthesis, as DNA synthesis requires concurrent protein synthesis. The outcome of asparaginase action is therefore very similar to the result of the small-molecule antimetabolites. Because asparaginase is a foreign protein, hypersensitivity reactions are common, as are effects on organs such as pancreas and liver that normally require continuing protein synthesis. 
This may result in decreased insulin secretion with hyperglycemia, with or without hyperamylasemia, and clotting function abnormalities. Close monitoring of clotting functions should accompany use of asparaginase. Paradoxically, owing to depletion of rapidly turning over anticoagulant factors, thromboses particularly affecting the CNS may also be seen with asparaginase.
MITOTIC SPINDLE INHIBITORS
Microtubules are cellular structures that form the mitotic spindle, and in interphase cells, they are responsible for the cellular “scaffolding” along which various motile and secretory processes occur. Microtubules are composed of repeating noncovalent multimers of a heterodimer of the α and β isoforms of the protein tubulin. Vincristine binds to the tubulin dimer with the result that microtubules are disaggregated. This results in the block of growing cells in M-phase; however, toxic effects in G1 and S-phase are also evident, reflecting effects on normal cellular activities of microtubules. Vincristine is metabolized by the liver, and dose adjustment in the presence of hepatic dysfunction is required. It is a powerful vesicant, and extravasation can be treated by local heat and injection of hyaluronidase. At clinically used intravenous doses, neurotoxicity in the form of glove-and-stocking neuropathy is frequent. Acute neuropathic effects include jaw pain, paralytic ileus, urinary retention, and the syndrome of inappropriate antidiuretic hormone secretion. Myelosuppression is not seen. Vinblastine is similar to vincristine, except that it tends to be more myelotoxic, with more frequent thrombocytopenia and also mucositis and stomatitis. Vinorelbine is a vinca alkaloid that appears to have differences in resistance patterns in comparison to vincristine and vinblastine; it may be administered orally. The taxanes include paclitaxel and docetaxel. These agents differ from the vinca alkaloids in that the taxanes stabilize microtubules against depolymerization.
The “stabilized” microtubules function abnormally and are not able to undergo the normal dynamic changes of microtubule structure and function necessary for cell cycle completion. Taxanes are among the most broadly active antineoplastic agents for use in solid tumors, with evidence of activity in ovarian cancer, breast cancer, Kaposi’s sarcoma, and lung tumors. They are administered intravenously, and paclitaxel requires use of a Cremophor-containing vehicle that can cause hypersensitivity reactions. Premedication with dexamethasone (8–16 mg orally or intravenously 12 and 6 h before treatment) and diphenhydramine (50 mg) and cimetidine (300 mg), both 30 min before treatment, decreases but does not eliminate the risk of hypersensitivity reactions to the paclitaxel vehicle. Docetaxel uses a polysorbate 80 formulation, which can cause fluid retention in addition to hypersensitivity reactions, and dexamethasone premedication with or without antihistamines is frequently used. A protein-bound formulation of paclitaxel (called nab-paclitaxel) has at least equivalent antineoplastic activity and a decreased risk of hypersensitivity reactions. Paclitaxel may also cause myelosuppression and neurotoxicity in the form of glove-and-stocking numbness and paresthesia. Cardiac rhythm disturbances were observed in phase 1 and 2 trials, most commonly asymptomatic bradycardia but also, much more rarely, varying degrees of heart block. These have not emerged as clinically significant in the majority of patients. Docetaxel causes comparable degrees of myelosuppression and neuropathy. Hypersensitivity reactions, including bronchospasm, dyspnea, and hypotension, are less frequent but occur to some degree in up to 25% of patients. Fluid retention appears to result from a vascular leak syndrome that can aggravate preexisting effusions.
Rash can complicate docetaxel administration, appearing prominently as a pruritic maculopapular rash affecting the forearms, but it has also been associated with fingernail ridging, breakdown, and skin discoloration. Stomatitis appears to be somewhat more frequent than with paclitaxel. Cabazitaxel is a taxane with somewhat better activity in prostate cancers than earlier generations of taxanes, perhaps due to superior delivery to sites of disease. Resistance to taxanes has been related to the emergence of efficient efflux of taxanes from tumor cells through the p170 P-glycoprotein (mdr gene product) or the presence of variant or mutant forms of tubulin. Epothilones represent a class of novel microtubule-stabilizing agents that have been deliberately optimized for activity in taxane-resistant tumors. Ixabepilone has clear evidence of activity in breast cancers resistant to taxanes and anthracyclines such as doxorubicin. Its expected side effects, including myelosuppression, are acceptable; it can also cause peripheral sensory neuropathy. Eribulin is a microtubule-directed agent with activity in patients whose disease has progressed on taxanes; its action is more similar to that of the vinca alkaloids, and its side effects resemble those of both vinca alkaloids and taxanes. Estramustine was originally synthesized as a mustard derivative that might be useful in neoplasms that possessed estrogen receptors. However, no evidence of interaction with DNA was observed. Surprisingly, the drug caused metaphase arrest, and subsequent study revealed that it binds to microtubule-associated proteins, resulting in abnormal microtubule function. Estramustine binds to estramustine-binding proteins (EMBPs), which are notably present in prostate tumor tissue, where the drug is used. Gastrointestinal and cardiovascular adverse effects related to the estrogen moiety occur in up to 10% of patients, including worsened heart failure and thromboembolic phenomena.
Gynecomastia and nipple tenderness can also occur.
TARGETED CHEMOTHERAPY • HORMONE RECEPTOR–DIRECTED THERAPY
Steroid hormone receptor–related molecules have emerged as prominent targets for small molecules useful in cancer treatment. When bound to their cognate ligands, these receptors can alter gene transcription and, in certain tissues, induce apoptosis. The pharmacologic effect is a mirror or parody of the normal effects of the agents acting on nontransformed normal tissues, although the effects on tumors are mediated by indirect effects in some cases. While in some cases, such as breast cancer, demonstration of the target hormone receptor is necessary, in other cases, such as prostate cancer (androgen receptor) and lymphoid neoplasms (glucocorticoid receptor), the relevant receptor is always present in the tumor. Glucocorticoids are generally given in “pulsed” high doses in leukemias and lymphomas, where they induce apoptosis in tumor cells. Cushing’s syndrome and inadvertent adrenal suppression on withdrawal from high-dose glucocorticoids can be significant complications, along with infections common in immunosuppressed patients, in particular Pneumocystis pneumonia, which classically appears a few days after completing a course of high-dose glucocorticoids. Tamoxifen is a partial estrogen receptor antagonist; it has a 10-fold greater antitumor activity in breast cancer patients whose tumors express estrogen receptors than in those who have low or no levels of expression. It might be considered the prototypic “molecularly targeted” agent. Owing to its agonistic activities in vascular and uterine tissue, side effects include a somewhat increased risk of cardiovascular complications, such as thromboembolic phenomena, and a small increased incidence of endometrial carcinoma, which appears after chronic use (usually >5 years).
Progestational agents—including medroxyprogesterone acetate, androgens including fluoxymesterone (Halotestin), and, paradoxically, estrogens—have approximately the same degree of activity in primary hormonal treatment of breast cancers that have elevated expression of estrogen receptor protein. Estrogen itself is not used often owing to prominent cardiovascular and uterotropic activity. Aromatase refers to a family of enzymes that catalyze the formation of estrogen in various tissues, including the ovary, peripheral adipose tissue, and some tumor cells. Aromatase inhibitors are of two types, the irreversible steroid analogues such as exemestane and the reversible inhibitors such as anastrozole or letrozole. Anastrozole is superior to tamoxifen in the adjuvant treatment of breast cancer in postmenopausal patients with estrogen receptor–positive tumors. Letrozole treatment affords benefit following tamoxifen treatment. Adverse effects of aromatase inhibitors may include an increased risk of osteoporosis. Prostate cancer is classically treated by androgen deprivation. Diethylstilbestrol (DES), acting as an estrogen at the level of the hypothalamus to downregulate hypothalamic luteinizing hormone (LH) production, results in decreased elaboration of testosterone by the testicle. For this reason, orchiectomy is as effective as moderate-dose DES, inducing responses in 80% of previously untreated patients with prostate cancer, but without the prominent cardiovascular side effects of DES, including thrombosis and exacerbation of coronary artery disease. In the event that orchiectomy is not accepted by the patient, testicular androgen suppression can also be effected by luteinizing hormone–releasing hormone (LHRH) agonists such as leuprolide and goserelin. These agents cause tonic stimulation of the LHRH receptor, with the loss of its normal pulsatile activation resulting in decreased output of LH by the anterior pituitary.
Therefore, as primary hormonal manipulation in prostate cancer, one can choose orchiectomy or leuprolide, but not both. The addition of androgen receptor blockers, including flutamide or bicalutamide, is of uncertain additional benefit in extending overall response duration; the combined use of orchiectomy or leuprolide plus flutamide is referred to as total androgen blockade. Enzalutamide also binds to the androgen receptor and antagonizes androgen action in a mechanistically distinct way. Somewhat analogous to inhibitors of aromatase, agents have been derived that inhibit testosterone and other androgen synthesis in the testis, adrenal gland, and prostate tissue. Abiraterone inhibits 17α-hydroxylase/C17,20-lyase (CYP17A1) and has been shown to be active in prostate cancer patients experiencing progression despite androgen blockade. Tumors that respond to a primary hormonal manipulation may frequently respond to second and third hormonal manipulations. Thus, breast tumors that had previously responded to tamoxifen have, on relapse, notable response rates to withdrawal of tamoxifen itself or to subsequent addition of an aromatase inhibitor or progestin. Likewise, initial treatment of prostate cancers with leuprolide plus flutamide may be followed after disease progression by response to withdrawal of flutamide. These responses may result from the removal of antagonists from mutant steroid hormone receptors that have come to depend on the presence of the antagonist as a growth-promoting influence. Additional strategies to treat refractory breast and prostate cancers that possess steroid hormone receptors may also address adrenal capacity to produce androgens and estrogens, even after orchiectomy or oophorectomy, respectively. Thus, aminoglutethimide or ketoconazole can be used to block adrenal synthesis by interfering with the enzymes of steroid hormone metabolism.
Administration of these agents requires concomitant hydrocortisone replacement and additional glucocorticoid doses administered in the event of physiologic stress. Complications can also arise through humoral mechanisms when the underlying malignancy itself produces a hormone. Adrenocortical carcinomas can cause Cushing’s syndrome as well as syndromes of androgen or estrogen excess. Mitotane can counteract these by decreasing synthesis of steroid hormones. Islet cell neoplasms can cause debilitating diarrhea, treated with the somatostatin analogue octreotide. Prolactin-secreting tumors can be effectively managed by the dopaminergic agonist bromocriptine.
DIAGNOSTICALLY GUIDED THERAPY
The basis for discovery of drugs of this type was the prior knowledge of the importance of the drugs’ molecular target in driving tumors in different contexts. Figure 103e-4 summarizes how FDA-approved targeted agents act. In the case of diagnostically guided targeted chemotherapy, prior demonstration of a specific target is necessary to guide the rational use of the agent, while in the case of targeted agents directed at oncogenic pathways, specific diagnosis of pathway activation is not yet necessary or in some cases feasible, although this is an area of ongoing clinical research. Table 103e-5 lists currently approved targeted chemotherapy agents, with features of their use. In hematologic tumors, the prototypic agent of this type is imatinib, which targets the ATP binding site of the p210bcr-abl protein tyrosine kinase that is formed as the result of the chromosome 9;22 translocation producing the Philadelphia chromosome in CML. Imatinib is superior to interferon plus chemotherapy in the initial treatment of the chronic phase of this disorder. It has lesser activity in the blast phase of CML, where the cells may have acquired additional mutations in p210bcr-abl itself or other genetic lesions.
Its side effects are relatively tolerable in most patients and include hepatic dysfunction, diarrhea, and fluid retention. Rarely, patients receiving imatinib have decreased cardiac function, which may persist after discontinuation of the drug. The quality of response to imatinib enters into the decision about when to refer patients with CML for consideration of transplant approaches. Nilotinib is a tyrosine protein kinase inhibitor with a similar spectrum of activity to imatinib, but with increased potency and perhaps better tolerance by certain patients. Dasatinib, another inhibitor of the p210bcr-abl oncoproteins, is active in certain mutant variants of p210bcr-abl that are refractory to imatinib and arise during therapy with imatinib or are present de novo. Dasatinib also has inhibitory action against kinases belonging to the src tyrosine protein kinase family; this activity may contribute to its effects in hematopoietic tumors and suggest a role in solid tumors where src kinases are active. The T315I mutant of p210bcr-abl is resistant to imatinib, nilotinib, bosutinib, and dasatinib; ponatinib has activity in patients with this p210bcr-abl variant, but ponatinib has noteworthy associated thromboembolic toxicity. Use of this class of targeted agents is thus critically guided not only by the presence of the p210bcr-abl tyrosine kinase, but also by the presence of different mutations in the ATP binding site. All-trans-retinoic acid (ATRA) targets the PML-retinoic acid receptor (RAR) α fusion protein, which is the result of the chromosome 15;17 translocation pathogenic for most forms of APL. Administered orally, it causes differentiation of the neoplastic promyelocytes to mature granulocytes and attenuates the rate of hemorrhagic complications. Adverse effects include headache with or without pseudotumor cerebri and gastrointestinal and cutaneous toxicities. 
In epithelial solid tumors, the small-molecule epidermal growth factor (EGF) antagonists act at the ATP binding site of the EGF receptor tyrosine kinase. In early clinical trials, gefitinib showed evidence of responses in a small fraction of patients with non-small-cell lung cancer (NSCLC). Side effects were generally acceptable, consisting mostly of rash and diarrhea.
FIGURE 103e-4 Targeted chemotherapeutic agents act in most instances by interrupting cell growth factor–mediated signaling pathways. After a growth factor binds to its cognate receptor (1), in many cases there is activation of tyrosine kinase activity, particularly after dimerization of the receptors (2). This leads to autophosphorylation of the receptor and docking of “adaptor” proteins. One important pathway is activated after exchange of GDP for GTP in the RAS family of proto-oncogene products (3). GTP-RAS activates the RAF proto-oncogene kinase (4), leading to a phosphorylation cascade of kinases (5, 6) that ultimately impart signals to regulators of gene function to produce transcripts that activate cell cycle progression and increase protein synthesis. In parallel, tyrosine-phosphorylated receptors can activate phosphatidylinositol-3-kinase to produce the phosphorylated lipid phosphatidylinositol-3-phosphate (7). This leads to activation of the AKT kinase (8), which in turn stimulates the mammalian “target of rapamycin” kinase (mTOR), which directly increases the translation of key mRNAs for gene products regulating cell growth. Erlotinib and afatinib are examples of epidermal growth factor receptor tyrosine kinase inhibitors; imatinib can act on the nonreceptor tyrosine kinase bcr-abl or the c-KIT membrane-bound tyrosine kinase. Vemurafenib and dabrafenib act on the B isoform of RAF uniquely in melanoma, and c-RAF is inhibited by sorafenib. Trametinib acts on MEK. Temsirolimus and everolimus inhibit the mTOR kinase to downregulate translation of oncogenic mRNAs.
Subsequent analysis of responding patients revealed a high frequency of activating mutations in the EGF receptor. Patients with such activating mutations who initially responded to gefitinib but who then had progression of the disease had acquired additional mutations in the enzyme, analogous functionally to the mutational variants responsible for imatinib resistance in CML. Erlotinib is another EGF receptor tyrosine kinase antagonist with a superior outcome in clinical trials in NSCLC; an overall survival advantage was demonstrated in subsets of patients who were treated after demonstrating progression of disease and who also had not been preselected for the presence of activating mutations. Thus, although even patients with wild-type EGF receptors may benefit from erlotinib treatment, the presence of EGF receptor tyrosine kinase mutations has recently been shown to be a basis for recommending erlotinib and afatinib for first-line treatment of advanced NSCLC. Likewise, crizotinib, targeting the alk proto-oncogene fusion protein, has value in the initial treatment of alk-positive NSCLC. Lapatinib is a tyrosine kinase inhibitor with both EGF receptor and HER2/neu antagonist activity, which is important in the treatment of breast cancers expressing the HER2/neu oncoprotein.
TABLE 103e-5 (fragment) Temsirolimus—renal cell carcinoma, second line or poor prognosis; toxicities: stomatitis, thrombocytopenia, nausea, anorexia, fatigue, metabolic (glucose, lipid).
In addition to the p210bcr-abl kinase, imatinib also has activity against the c-kit tyrosine kinase (the receptor for steel factor, also called stem cell factor) and the platelet-derived growth factor receptor (PDGFR), both of which can be expressed in gastrointestinal stromal tumor (GIST). Imatinib has found clinical utility in GIST, a tumor previously notable for its refractoriness to chemotherapeutic approaches. Imatinib’s degree of activity varies with the specific mutational variant of kit or PDGFR present in a particular patient’s tumor. The BRAF V600E mutation has been detected in a notable fraction of melanomas, thyroid tumors, and hairy cell leukemia, and preclinical models supported the concept that BRAF V600E drives oncogenic signaling in these tumors. Vemurafenib and dabrafenib, with selective capacity to inhibit the BRAF V600E serine kinase activity, were each shown to cause noteworthy responses in patients with BRAF V600E–mutated melanomas, although early relapse occurred in many patients treated with the drugs as single agents. Trametinib, acting downstream of BRAF V600E by directly inhibiting the MEK serine kinase through a non-ATP binding site mechanism, also displayed noteworthy responses in BRAF V600E–mutated melanomas, and the combination of trametinib and dabrafenib is even more active, targeting the BRAF V600E–driven pathway at two points on the way to gene activation.

Oncogenically Activated Pathways This group of agents also targets specific regulatory molecules that promote the viability of tumor cells, but these drugs do not currently require the diagnostically verified presence of a particular target or target variant. “Multitargeted” kinase antagonists are small-molecule ATP site–directed antagonists that inhibit more than one protein kinase and have value in the treatment of several solid tumors.
Drugs of this type with prominent activity against the vascular endothelial growth factor receptor (VEGFR) tyrosine kinase have activity in renal cell carcinoma. Sorafenib is a VEGFR antagonist with activity against the raf serine-threonine protein kinase, and regorafenib is a closely related drug with value in relapsed advanced colon cancer. Pazopanib also prominently targets VEGFR and has activity in renal carcinoma and soft tissue sarcomas. Sunitinib has anti-VEGFR, anti-PDGFR, and anti-c-kit activity. It causes prominent responses and stabilization of disease in renal cell cancers and GISTs. Side effects for agents with anti-VEGFR activity prominently include hypertension, proteinuria, and, more rarely, bleeding and clotting disorders and perforation of scarred gastrointestinal lesions. Also encountered are fatigue, diarrhea, and the hand-foot syndrome, with erythema and desquamation of the distal extremities, in some cases requiring dose modification, particularly with sorafenib. Temsirolimus and everolimus are mammalian target of rapamycin (mTOR) inhibitors with activity in renal cancers. They produce stomatitis, fatigue, and some hyperlipidemia (10%), myelosuppression (10%), and rare lung toxicity. Everolimus is also useful in patients with hormone receptor–positive breast cancers displaying resistance to hormonal inhibition and in certain neuroendocrine and brain tumors, the latter arising in patients with sporadic or inherited mutations in the pathway activating mTOR. In hematologic neoplasms, bortezomib is an inhibitor of the proteasome, the multisubunit assembly of protease activities responsible for the selective degradation of proteins important in regulating activation of transcription factors, including nuclear factor-κB (NF-κB) and proteins regulating cell cycle progression. It has activity in multiple myeloma and certain lymphomas. 
Adverse effects include neuropathy, orthostatic hypotension with or without hyponatremia, and reversible thrombocytopenia. Carfilzomib is a proteasome inhibitor chemically unrelated to bortezomib, without prominent neuropathy but with evidence of a cytokine release syndrome, which can be a cardiopulmonary stress. Other agents active in multiple myeloma and certain other hematologic neoplasms include the immunomodulatory agents related to thalidomide, including lenalidomide and pomalidomide. All these agents collectively inhibit aberrant angiogenesis in the bone marrow microenvironment, as well as influence stromal cell immune functions to alter the cytokine milieu supporting the growth of myeloma cells. Thalidomide, although clinically active, has prominent cytopenic, neuropathic, procoagulant, and CNS toxicities that have been somewhat attenuated in the other drugs of the class, although use of these agents frequently entails concomitant anticoagulant prophylaxis. Ibrutinib is representative of a novel class of inhibitors directed at Bruton’s tyrosine kinase, which is important in the function of B cells. Initially approved for use in mantle cell lymphoma, it is potentially applicable to a number of B cell neoplasms that depend on signals through the B cell antigen receptor. Janus kinases likewise function downstream of a variety of cytokine receptors to amplify cytokine signals, and Janus kinase inhibitors, including ruxolitinib, are approved in myelofibrosis to ameliorate splenomegaly and systemic symptoms. Vorinostat is an inhibitor of histone deacetylases, which are responsible for maintaining the proper orientation of histones on DNA, with resulting capacity for transcriptional readiness. Acetylated histones allow access of transcription factors to target genes and therefore increase expression of genes that are selectively repressed in tumors.
The result can be differentiation with the emergence of a more normal cellular phenotype, or cell cycle arrest with expression of endogenous regulators of cell cycle progression. Vorinostat is approved for clinical use in cutaneous T cell lymphoma, with dramatic skin clearing and very few side effects. Romidepsin, of a distinct molecular class of histone deacetylase inhibitors, is also active in cutaneous T cell lymphoma. An active retinoid in cutaneous T cell lymphoma is the synthetic retinoid X receptor ligand bexarotene. DNA methyltransferase inhibitors, including 5-azacytidine and 2′-deoxy-5-azacytidine (decitabine), can also increase transcription of genes “silenced” during the pathogenesis of a tumor by causing demethylation of the methylated cytosines that are acquired as an “epigenetic” (i.e., after the DNA is replicated) modification of DNA. These drugs were originally considered antimetabolites but have clinical value in myelodysplastic syndromes and certain leukemias when administered at low doses.

CANCER BIOLOGIC THERAPY

Principles The goal of biologic therapy is to manipulate the host–tumor interaction in favor of the host, potentially at an optimum biologic dose that may differ from the MTD. As a class, biologic therapies may be distinguished from molecularly targeted agents in that many biologic therapies require an active response (e.g., reexpression of silenced genes or antigen expression) on the part of the tumor cell or of the host (e.g., immunologic effects) to allow a therapeutic effect. This may be contrasted with the more narrowly defined antiproliferative or apoptotic response that is the ultimate goal of the molecularly targeted agents discussed above. However, there is much commonality in the strategies used to evaluate and apply molecularly targeted and biologic therapies.
Immune Cell–Mediated Therapies Tumors have a variety of means of avoiding the immune system: (1) they are often only subtly different from their normal counterparts; (2) they are capable of downregulating their major histocompatibility complex antigens, effectively masking them from recognition by T cells; (3) they are inefficient at presenting antigens to the immune system; (4) they can cloak themselves in a protective shell of fibrin to minimize contact with surveillance mechanisms; and (5) they can produce a range of soluble molecules, including potential immune targets, that can distract the immune system from recognizing the tumor cell or can kill or inactivate the immune effector cells. Some of the cell products initially polarize the immune response away from cellular immunity (shifting from TH1 to TH2 responses; Chap. 372e) and ultimately lead to defects in T cells that prevent their activation and cytotoxic activity. Cancer treatment further suppresses host immunity. A variety of strategies are being tested to overcome these barriers. Cell-Mediated Immunity The strongest evidence that the immune system can exert clinically meaningful antitumor effects comes from allogeneic bone marrow transplantation. Adoptively transferred T cells from the donor expand in the tumor-bearing host, recognize the tumor as being foreign, and can mediate impressive antitumor effects (graft-versus-tumor effects). Three types of experimental interventions are being developed to take advantage of the ability of T cells to kill tumor cells. 1. Transfer of allogeneic T cells. This occurs in three major settings: in allogeneic bone marrow transplantation; as purified lymphocyte transfusions following bone marrow recovery after allogeneic bone marrow transplantation; and as pure lymphocyte transfusions following immunosuppressive (nonmyeloablative) therapy (also called minitransplants). 
In each of these settings, the effector cells are donor T cells that recognize the tumor as being foreign, probably through minor histocompatibility differences. The main risk of such therapy is the development of graft-versus-host disease, because only minimal differences distinguish the cancer from the normal host cells. This approach has been highly effective in certain hematologic cancers. 2. Transfer of autologous T cells. In this approach, the patient’s own T cells are removed from the tumor-bearing host, manipulated in several ways in vitro, and given back to the patient. There are three major classes of autologous T-cell manipulation. First, tumor antigen–specific T cells can be developed and expanded to large numbers over many weeks ex vivo before administration. Second, the patient’s T cells can be activated by exposure to polyclonal stimulators such as anti-CD3 and anti-CD28 after a short period ex vivo and then amplified in the host after transfer by stimulation with IL-2, for example. Short periods outside the patient permit the cells to overcome tumor-induced T cell defects, and such cells traffic and home to sites of disease better than cells that have been in culture for many weeks. In a third approach, genes that encode a T cell receptor specific for an antigen expressed by the tumor, along with genes that facilitate T cell activation, can be introduced into subsets of a patient’s T cells; after transfer back into the patient, these allow homing of cytotoxic T cells to tumor cells expressing the antigen. 3. Tumor vaccines aimed at boosting T cell immunity. The finding that mutant oncogenes expressed only intracellularly can be recognized as targets of T cell killing greatly expanded the possibilities for tumor vaccine development. No longer is it difficult to find something different about tumor cells. However, major difficulties remain in getting tumor-specific peptides presented in a fashion that primes the T cells.
Tumors themselves are very poor at presenting their own antigens to T cells at the first antigen exposure (priming). Priming is best accomplished by professional antigen-presenting cells (dendritic cells). Thus, a number of experimental strategies are aimed at priming host T cells against tumor-associated peptides. Vaccine adjuvants such as granulocyte-macrophage colony-stimulating factor (GM-CSF) appear capable of attracting antigen-presenting cells to a skin site containing a tumor antigen. Such an approach has been documented to eradicate microscopic residual disease in follicular lymphoma and give rise to tumor-specific T cells. Purified antigen-presenting cells can be pulsed with tumor, its membranes, or particular tumor antigens and delivered as a vaccine. One such vaccine, sipuleucel-T, is approved for use in patients with hormone-independent prostate cancer. In this approach, the patient undergoes leukapheresis, wherein mononuclear cells (which include antigen-presenting cells) are removed from the patient’s blood. The cells are pulsed in a laboratory with an antigenic fusion protein comprising a protein frequently expressed by prostate cancer cells, prostatic acid phosphatase, fused to GM-CSF, and are matured to increase their capacity to present the antigen to immune effector cells. The cells are then returned to the patient in a well-tolerated treatment. Although no objective tumor response was documented in clinical trials, median survival was increased by about 4 months. Tumor cells can also be transfected with genes that attract antigen-presenting cells. Another important vaccine strategy is directed at infectious agents whose action ultimately is tied to the development of human cancer. Hepatitis B vaccine in an epidemiologic sense prevents hepatocellular carcinoma, and a tetravalent human papillomavirus vaccine prevents infection by virus types currently accounting for 70% of cervical cancers.
Unfortunately, these vaccines are ineffective at treating patients who have developed a virus-induced cancer.

Antibody-Mediated Therapeutic Approaches In general, antibodies are not very effective at killing cancer cells. Because the tumor seems to influence the host toward making antibodies rather than generating cellular immunity, it is inferred that antibodies are easier for the tumor to fend off. Many patients can be shown to have serum antibodies directed at their tumors, but these do not appear to influence disease progression. However, the ability to grow very large quantities of high-affinity antibody directed at a tumor by the hybridoma technique has led to the application of antibodies in the treatment of cancer. In this approach, antibodies are derived in which the antigen-combining regions are grafted onto human immunoglobulin gene products (chimerized or humanized) or are derived de novo from mice bearing human immunoglobulin gene loci. Three general strategies have emerged using antibodies. Tumor-regulatory antibodies target tumor cells directly or indirectly to modulate intracellular functions or attract immune or stromal cells. Immunoregulatory antibodies target antigens expressed on the tumor cells or host immune cells to modulate primarily the host’s immune responsiveness to the tumor. Finally, antibody conjugates can be made with the antibody linked to drugs, toxins, or radioisotopes to target these “warheads” for delivery to the tumor. Table 103e-6 lists features of currently used or promising antibodies for cancer treatment.

TABLE 103e-6
Drug: Rituximab. Target: CD20. Indications and features of use: B cell neoplasms (also emerging role in autoimmune disease); chimeric antibody with frequent mouse-derived sequences; frequent infusion reactions, particularly on initial doses; reactivation of infections.
Drug: Ofatumumab. Target: CD20. Indications and features of use: active in CLL; fully human antibody with distinct binding site compared to rituximab; decreased-intensity infusion reactions.
Drug: Trastuzumab. Target: HER2/neu. Indications and features of use: active in breast cancer and GI cancers expressing HER2/neu; cardiotoxicity, particularly in the setting of prior anthracyclines, requires monitoring; infusion reactions.
Drug: Pertuzumab. Target: HER2/neu. Indications and features of use: breast cancer; targets a binding site distinct from trastuzumab’s, inhibiting dimerization of HER2 family members; infusion reactions.
Drug: Cetuximab. Target: EGFR. Indications and features of use: colorectal cancers with wild-type Ki-ras oncoprotein; head and neck cancers with radiation; rash, diarrhea, infusion reactions.
Drug: Panitumumab. Target: EGFR. Indications and features of use: colorectal cancers with wild-type Ki-ras oncoprotein; fully humanized; decreased infusion reactions; different IgG subtype.
Drug: Bevacizumab. Target: VEGF. Indications and features of use: metastatic colorectal cancer and non-small-cell lung cancer (nonsquamous) with chemotherapy; renal cancer and glioblastoma as single agents; prominent adverse events include HBP, proteinuria, GI perforations, hemorrhage, and thrombosis (venous and arterial).
Abbreviations: CLL, chronic lymphocytic leukemia; EGFR, epidermal growth factor receptor; GI, gastrointestinal; HBP, high blood pressure; VEGF, vascular endothelial growth factor.

Tumor-Regulatory Antibodies Antibodies against the CD20 molecule expressed on B cell lymphomas (rituximab and ofatumumab) are exemplary of antibodies that affect signaling events driving lymphomagenesis and also activate immune responses against B cell neoplasms. They are used as single agents and in combination with chemotherapy and radiation in the treatment of B cell neoplasms. Obinutuzumab is an antibody with an altered glycosylation that enhances its ability to fix complement; it is also directed against CD20 and is of value in chronic lymphocytic leukemia, in which it appears to be more effective than rituximab. The HER2/neu receptor overexpressed on epithelial cancers, especially breast cancer, was initially targeted by trastuzumab, with noteworthy activity in potentiating the action of chemotherapy in breast cancer as well as some evidence of single-agent activity.
Trastuzumab also appears to interrupt intracellular signals derived from HER2/neu and to stimulate immune mechanisms. The anti-HER2 antibody pertuzumab, specifically targeting the domain of HER2/neu responsible for dimerization with other HER2 family members, is more specifically directed against HER2 signaling function and augments the action of trastuzumab. EGF receptor (EGFR)–directed antibodies (such as cetuximab and panitumumab) have activity in colorectal cancer refractory to chemotherapy, particularly when used to augment the activity of an additional chemotherapy program, and in the primary treatment of head and neck cancers treated with radiation therapy. The mechanism of action is unclear. Direct effects on the tumor may mediate an antiproliferative effect as well as stimulate the participation of host mechanisms involving immune cell– or complement-mediated responses to tumor cell–bound antibody. Alternatively, the antibody may alter the release of paracrine factors promoting tumor cell survival. The anti-VEGF antibody bevacizumab shows little evidence of antitumor effect when used alone, but when combined with chemotherapeutic agents, it improves the magnitude of tumor shrinkage and the time to disease progression in colorectal and nonsquamous lung cancers. The mechanism for the effect is unclear and may relate to the capacity of the antibody to alter delivery and tumor uptake of the active chemotherapeutic agent. Ziv-aflibercept is not an antibody but a solubilized VEGF-binding domain of the VEGF receptor, and therefore may have a distinct mechanism of action with comparable side effects. Unintended side effects of any antibody use include infusion-related hypersensitivity reactions, usually limited to the first infusion, which can be managed with glucocorticoid and/or antihistamine prophylaxis. In addition, distinct syndromes have emerged with different antibodies.
Anti-EGFR antibodies produce an acneiform rash that responds poorly to glucocorticoid cream treatment. Trastuzumab (anti-HER2) can inhibit cardiac function, particularly in patients with prior exposure to anthracyclines. Bevacizumab has a number of side effects of medical significance, including hypertension, thrombosis, proteinuria, hemorrhage, and gastrointestinal perforations with or without prior surgeries; these adverse events also occur with small-molecule drugs modulating VEGFR function.

Immunoregulatory Antibodies Purely immunoregulatory antibodies stimulate immune responses to mediate tumor-directed cytotoxicity. First-generation approaches sought to activate complement and are exemplified by antibodies to CD52, which are active in chronic lymphoid leukemia and T cell malignancies. A more refined understanding of the tumor–host interface has established that cytotoxic tumor-directed T cells are frequently inhibited by ligands upregulated in the tumor cells. The programmed death ligand 1 (PD-L1; also known as B7 homolog 1) was initially recognized as an entity that induced T cell death through a receptor present on T cells, termed the PD receptor (Fig. 103e-5), which physiologically exists to regulate the intensity of the immune response. The PD family of ligands and receptors also regulates the function of macrophages present in tumor stroma. These actions raised the hypothesis that antibodies directed against the PD signaling axis (both anti-PD-L1 and anti-PD) might be useful in cancer treatment by allowing reactivation of the immune response against tumors. Indeed, nivolumab and lambrolizumab, both anti-PD antibodies, have shown evidence of important immune-mediated actions against certain solid tumors, including melanoma and lung cancers.
FIGURE 103e-5 Tumors possess a microenvironment (tumor stroma) containing immune cells, including helper T cells and suppressor T cells (both “regulatory” of other immune cell function), macrophages, and cytotoxic T cells. Cytokines found in the stroma and deriving from macrophages and regulatory T cells modulate the activities of cytotoxic T cells, which have the potential to kill tumor cells. Antigens released by tumor cells are taken up by antigen-presenting cells (APCs), also in the stroma. Antigens are processed by the APCs to peptides presented by the major histocompatibility complex to T cell antigen receptors, thus providing a positive (+) activation signal for the cytotoxic T cells to kill tumor cells bearing that antigen. Negative (–) signals inhibiting cytotoxic T cell action include the CTLA4 receptor (on T cells), interacting with the B7 family of negative regulatory signals from APCs, and the PD receptor (on T cells), interacting with the (–) signal coming from tumor cells expressing PD ligand 1 (PD-L1). As both CTLA4 and PD1 signals attenuate the antitumor T cell response, strategies that inhibit CTLA4 and PD1 function are a means of stimulating cytotoxic T cell activity to kill tumor cells. Cytokines from other immune cells and macrophages can provide both (+) and (–) signals for T cell action and are under investigation as novel immunoregulatory therapeutics.

Already approved for clinical use in melanoma is ipilimumab, an antibody directed against CTLA4 (cytotoxic T lymphocyte antigen 4), which is expressed on T cells (not tumor cells), responds to signals from antigen-presenting cells (Fig. 103e-5), and downregulates the intensity of the T cell proliferative response to antigens derived from tumor cells.
Indeed, manipulation of the CTLA4 axis was the first demonstration that purely immunoregulatory antibody strategies directed at T cell physiology could be safe and effective in the treatment of cancer, although CTLA4 blockade acts at a very early stage in T cell activation and can be considered somewhat nonspecific in its basis for T cell stimulation. Pembrolizumab, an antibody blocking the PD receptor, was also approved for melanoma, with a similar spectrum of potential adverse events but acting in the tumor microenvironment. Indeed, prominent activation of autoimmune hepatic, endocrine, cutaneous, neurologic, and gastrointestinal responses is a basis for adverse events with the use of ipilimumab; emergent use of glucocorticoids may be required to attenuate severe toxicities, which unfortunately can also attenuate the antitumor effect. Importantly for the general internist, these events may occur late after exposure to ipilimumab, while the patient may otherwise be enjoying sustained control of tumor growth owing to the drug’s beneficial actions. Another class of immunoregulatory antibody is represented by the “bispecific” antibody blinatumomab, which was constructed to have an anti-CD19 antigen-combining site as one valency and an anti-CD3 binding site as the other. This antibody thus can bring T cells (through its anti-CD3 activity) close to B cells bearing the CD19 determinant. Blinatumomab is active in B cell neoplasms such as acute lymphocytic leukemia, which may not have prominent expression of the CD20 targeted by rituximab.

Antibody Conjugates Conjugates of antibodies with drugs and isotopes have also been shown to be effective in the treatment of cancer; they have the intent of increasing the therapeutic index of the drug or isotope by delivering the toxic “warhead” directly to the tumor cell or tumor microenvironment.
Ado-trastuzumab emtansine is a conjugate of the HER2/neu-directed trastuzumab and a highly toxic microtubule-targeted drug (emtansine) that by itself is too toxic for human use; the antibody-drug conjugate shows valuable activity in patients with breast cancer who have developed resistance to the “naked” antibody. Brentuximab vedotin is an anti-CD30 antibody-drug conjugate bearing a distinct microtubule poison, with activity in neoplasms such as Hodgkin’s lymphoma, in which the tumor cells frequently express CD30. Radioconjugates targeting CD20 on lymphomas have been approved for use (ibritumomab tiuxetan [Zevalin], using yttrium-90, and 131I-tositumomab), although toxicity concerns have limited their use.

Cytokines There are >70 separate proteins and glycoproteins with biologic effects in humans: interferon (IFN) α, β, and γ; interleukins (IL) 1 through 29 (so far); the tumor necrosis factor (TNF) family (including lymphotoxin, TNF-related apoptosis-inducing ligand [TRAIL], CD40 ligand, and others); and the chemokine family. Only a fraction of these have been tested against cancer; only IFN-α and IL-2 are in routine clinical use. About 20 different genes encode IFN-α, and their biologic effects are indistinguishable. IFN induces the expression of many genes, inhibits protein synthesis, and exerts a number of different effects on diverse cellular processes. The two recombinant forms that are commercially available are IFN-α2a and IFN-α2b. Interferon is not curative for any tumor but can induce partial responses in follicular lymphoma, hairy cell leukemia, CML, melanoma, and Kaposi’s sarcoma. It has been used in the adjuvant setting in stage II melanoma, multiple myeloma, and follicular lymphoma, with uncertain effects on survival. It produces fever, fatigue, a flulike syndrome, malaise, myelosuppression, and depression and can induce clinically significant autoimmune disease. IFN-α is not generally the treatment of choice for any cancer.
IL-2 exerts its antitumor effects indirectly through augmentation of immune function. Its biologic activity is to promote the growth and activity of T cells and natural killer (NK) cells. High doses of IL-2 can produce tumor regression in certain patients with metastatic melanoma and renal cell cancer. About 2–5% of patients may experience complete remissions that are durable, unlike with any other treatment for these tumors. IL-2 is associated with myriad clinical side effects, including intravascular volume depletion, capillary leak syndrome, adult respiratory distress syndrome, hypotension, fever, chills, skin rash, and impaired renal and liver function. Patients may require blood pressure support and intensive care to manage the toxicity. However, once the agent is stopped, most of the toxicities reverse completely within 3–6 days.

Ligand Receptor–Directed Constructs The existence of high-affinity receptors for cytokines has led to the design of cytokine-toxin recombinant fusion proteins, such as IL-2 expressed in frame with a fragment of diphtheria toxin. A commercially available construct has activity against certain T cell lymphomas. Likewise, the high-affinity folate receptor is the target for folate conjugated to chemotherapeutic agents. In both cases, the drug’s utility derives from internalization of the targeted receptor and cleavage of the active drug or toxin moiety. Although total-body irradiation has a role in preparing a patient to receive allogeneic stem cells, and antibodies as described above can specifically target radioisotopes, systemically administered isotopes of iodide salts have an important role in the treatment of thyroid neoplasms, owing to the selective upregulation of the iodide transporter in the tumor cell compartment.
Likewise, isotopes of samarium and radium have been found useful in the palliation of symptoms from advanced bony metastases of prostate cancer owing to their selective deposition at the tumor–bone matrix interface, thereby potentially affecting the function of both tumor and stromal cells in the progressive growth of the metastatic deposit.

Resistance mechanisms to the conventional cytotoxic agents were initially characterized in the late twentieth century as defects in drug uptake, metabolism, or export by tumor cells. The multidrug resistance (mdr) gene, defined in vitro in cell lines exposed to increasing concentrations of drugs, led to the definition of a family of transport proteins that, when overexpressed, result in the facile transport of a variety of hydrophobic drugs out of the cancer cell. Although efforts to manipulate this transporter to promote drug residence in tumor cells have been pursued, none is clinically useful at this time. Drug-metabolizing enzymes such as cytidine deaminase are upregulated in resistant tumor cells, and this is the basis for so-called “high-dose cytarabine” regimens in the treatment of leukemia. Another resistance mechanism defined during this era involved increased expression of a drug’s target, exemplified by amplification of the dihydrofolate reductase gene in patients who had lost responsiveness to methotrexate, or mutation of topoisomerase II in tumors that relapsed after topoisomerase II modulator treatment. A second class of resistance mechanisms involves loss of the cellular apoptotic mechanism activated after engagement of a drug’s target by the drug. This occurs in a way that is heavily influenced by the biology of the particular tumor type. For example, decreased alkylguanine alkyltransferase defines a subset of glioblastoma patients with the prospect of greatest benefit from treatment with temozolomide but has no predictive value for benefit from temozolomide in epithelial neoplasms.
Likewise, ovarian cancers resistant to platinating agents have decreased expression of the proapoptotic gene bax. These types of findings have prompted the idea that tumors responsive to chemotherapeutic agents are populated by cells expressing genes that control drug-related cell death, creating in effect a state of “synthetic lethality” of the drug (Chap. 102e) with the genes expressed in responsive tumors, analogous to the existence in yeast of mutations that are well tolerated in the absence of a physiologic stressor but become lethal in its presence. In the case of tumors, the chemotherapy inducing the cell death response is the analogous physiologic stressor. A third class of resistance mechanisms emerged from sequencing of the targets of agents directed at oncogenic kinases. Thus, patients with CML resistant to imatinib have in some cases acquired mutations in the ATP binding domain of p210bcr-abl, leading to the screening and design of agents with activity against the mutant proteins. Entirely analogous resistance mechanisms have emerged in patients with lung cancer treated with the EGFR antagonists gefitinib and erlotinib. A final category of tumor resistance mechanisms to targeted agents includes the upregulation of alternate means of activating the pathway targeted by the agent. Thus, melanomas initially responsive to BRAF V600E antagonists such as vemurafenib may reactivate raf signaling by upregulating isoforms that can bypass the variant blocked by the drug. Likewise, inhibition of HER2/neu signaling in breast cancer cells can lead to the emergence of variants driven by distinct oncogenic signaling pathways such as PI3 kinase. Analogously, in NSCLC, EGFR inhibitor treatment leads to the emergence of cells with a predominance of c-met proto-oncogene–dependent signaling in the resistant tumors.
The susceptibility of a tumor to different treatments as a function of its expression of potential drug targets or their mutational profile has led to efforts to define the dominant pathways driving a patient’s tumor by genomic techniques, including whole exome sequencing. The difficulty with applying such data to patient treatment is that these pathways may change during the natural history of a tumor and that different sites in a single patient may have tumors with different patterns of gene mutation. The common cytotoxic chemotherapeutic agents almost invariably affect bone marrow function. Titration of this effect determines the maximum-tolerated dose (MTD) of the agent on a given schedule. The normal kinetics of blood cell turnover influence the sequence and sensitivity of each of the formed elements. Polymorphonuclear leukocytes (PMNs; t1/2 = 6–8 h), platelets (t1/2 = 5–7 days), and red blood cells (RBCs; t1/2 = 120 days) show the greatest, intermediate, and least susceptibility, respectively, to commonly administered cytotoxic agents. The nadir count of each cell type in response to classes of agents is characteristic. Maximal neutropenia occurs 6–14 days after conventional doses of anthracyclines, antifolates, and antimetabolites. Alkylating agents differ from each other in the timing of cytopenias. Nitrosoureas, DTIC, and procarbazine can display delayed marrow toxicity, first appearing 6 weeks after dosing. Complications of myelosuppression result from the predictable sequelae of the missing cells’ function. Febrile neutropenia refers to the clinical presentation of fever (one temperature ≥38.5°C or three readings ≥38°C but ≤38.5°C per 24 h) in a neutropenic patient with an uncontrolled neoplasm involving the bone marrow or, more usually, in a patient undergoing treatment with cytotoxic agents. Mortality from uncontrolled infection varies inversely with the neutrophil count. If the nadir neutrophil count is >1000/μL, there is little risk; if <500/μL, risk of death is markedly increased. 
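For readers building clinical-informatics tooling, the fever definition and the relation between nadir neutrophil count and infection risk described above can be sketched in Python. Function names and structure are hypothetical illustrations of the text's thresholds, not a clinical decision aid:

```python
# Hypothetical sketch of the febrile-neutropenia definitions above.
# Thresholds are transcribed from the text; this is not a clinical tool.

def is_febrile(temps_c_24h):
    """Fever: one reading >= 38.5 C, or three readings >= 38.0 C per 24 h."""
    high = sum(1 for t in temps_c_24h if t >= 38.5)
    moderate = sum(1 for t in temps_c_24h if 38.0 <= t < 38.5)
    return high >= 1 or (high + moderate) >= 3

def infection_mortality_risk(nadir_anc_per_ul):
    """Mortality from uncontrolled infection varies inversely with the
    nadir neutrophil count (>1000/uL little risk, <500/uL high risk)."""
    if nadir_anc_per_ul > 1000:
        return "little risk"
    if nadir_anc_per_ul < 500:
        return "markedly increased"
    return "intermediate"
```

A single reading of 38.6°C, or three readings of 38.0–38.4°C within 24 h, would satisfy the fever criterion under this encoding.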
Management of febrile neutropenia has conventionally included empirical coverage with antibiotics for the duration of neutropenia (Chap. 104). Selection of antibiotics is governed by the expected association of infections with certain underlying neoplasms; careful physical examination (with scrutiny of catheter sites, dentition, mucosal surfaces, and perirectal and genital orifices by gentle palpation); chest x-ray; and Gram stain and culture of blood, urine, and sputum (if any) to define a putative site of infection. In the absence of any originating site, a broadly acting β-lactam with anti-Pseudomonas activity, such as ceftazidime, is begun empirically. The addition of vancomycin to cover potential cutaneous sites of origin (until these are ruled out or shown to originate from methicillin-sensitive organisms) or of metronidazole or imipenem for abdominal or other sites favoring anaerobes reflects modifications tailored to individual patient presentations. The coexistence of pulmonary compromise raises a distinct set of potential pathogens, including Legionella, Pneumocystis, and fungal agents, that may require further diagnostic evaluations, such as bronchoscopy with bronchoalveolar lavage. Febrile neutropenic patients can be stratified broadly into two prognostic groups. The first, with expected short duration of neutropenia and no evidence of hypotension or abdominal or other localizing symptoms, may be expected to do well even with oral regimens, e.g., ciprofloxacin or moxifloxacin, or amoxicillin plus clavulanic acid. A less favorable prognostic group comprises patients with expected prolonged neutropenia, evidence of sepsis, and end-organ compromise, particularly pneumonia. These patients require tailoring of their antibiotic regimen to their underlying presentation, with frequent empirical addition of antifungal agents if fever and neutropenia persist for 7 days without identification of an adequately treated organism or site. 
Transfusion of granulocytes has no role in the management of febrile neutropenia, owing to their exceedingly short half-life, mechanical fragility, and clinical syndromes of pulmonary compromise with leukostasis after their use. Instead, colony-stimulating factors (CSFs) are used to augment bone marrow production of PMNs. Early-acting factors such as IL-1, IL-3, and stem cell factor have not been as useful clinically as late-acting, lineage-specific factors such as granulocyte colony-stimulating factor (G-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), erythropoietin (EPO), thrombopoietin, IL-6, and IL-11. CSFs may easily become overused in oncology practice. The settings in which their use has been proved effective are limited. G-CSF, GM-CSF, EPO, and IL-11 are currently approved for use. The American Society of Clinical Oncology has developed practice guidelines for the use of G-CSF and GM-CSF (Table 103e-7). 

Table 103e-7 
When should G-CSF or GM-CSF be used? 
- With the first cycle of chemotherapy (so-called primary CSF administration): use if the probability of febrile neutropenia is ≥20%; in patients >65 years treated for lymphoma with curative intent or another tumor; and with dose-dense regimens in a clinical trial or with strong evidence of benefit. 
- With subsequent cycles if febrile neutropenia has previously occurred (so-called secondary CSF administration). 
- Afebrile neutropenic patients: no evidence of benefit. 
- Febrile neutropenic patients: no evidence of benefit; one may feel compelled to use CSFs in the face of clinical deterioration from sepsis, pneumonia, or fungal infection, but benefit is unclear. 
- In bone marrow or peripheral blood stem cell transplantation: use to mobilize stem cells from marrow and to hasten myeloid recovery. 
- In acute myeloid leukemia: G-CSF of minor or no benefit; GM-CSF of no benefit and possibly harmful. 
- In myelodysplastic syndromes: use intermittently in the subset with neutropenia and recurrent infection. 
What dose and schedule should be used? G-CSF: 5 μg/kg per day subcutaneously. GM-CSF: 250 μg/m2 per day subcutaneously. Pegfilgrastim: one dose of 6 mg 24 h after chemotherapy. 
When should therapy begin and end? When indicated, start 24–72 h after chemotherapy; continue until the absolute neutrophil count is 10,000/μL; do not use concurrently with chemotherapy or radiation therapy. 
Abbreviations: CSF, colony-stimulating factor; G-CSF, granulocyte colony-stimulating factor; GM-CSF, granulocyte-macrophage colony-stimulating factor. Source: From the American Society of Clinical Oncology: J Clin Oncol 24:3187, 2006. 

Primary prophylaxis (i.e., G-CSF given shortly after completing chemotherapy to reduce the nadir) administers G-CSF to patients receiving cytotoxic regimens associated with a 20% incidence of febrile neutropenia. “Dose-dense” regimens, in which cycling of chemotherapy is intended to be completed without delay of administered doses, may also benefit, but such patients should be on a clinical trial. Administration of G-CSF in these circumstances has reduced the incidence of febrile neutropenia in several studies by about 50%. Most patients, however, receive regimens that do not carry such a high risk of febrile neutropenia, and therefore most patients initially should not receive G-CSF or GM-CSF. Special circumstances—such as a documented history of febrile neutropenia with the regimen in a particular patient or categories of patients at increased risk, such as patients older than age 65 years with aggressive lymphoma treated with curative chemotherapy regimens; extensive compromise of marrow by prior radiation or chemotherapy; or active, open wounds or deep-seated infection—may support primary treatment with G-CSF or GM-CSF. Administration of G-CSF or GM-CSF to afebrile neutropenic patients or to patients with low-risk febrile neutropenia is not recommended, and these agents likewise are not generally recommended for patients receiving concomitant chemoradiation treatment, particularly those with thoracic neoplasms. 
In contrast, administration of G-CSF to high-risk patients with febrile neutropenia and evidence of organ compromise, including sepsis syndrome, invasive fungal infection, concurrent hospitalization at the time fever develops, pneumonia, profound neutropenia (<0.1 × 10⁹/L), or age >65 years, is reasonable. Secondary prophylaxis refers to the administration of CSFs in patients who have experienced a neutropenic complication from a prior cycle of chemotherapy; dose reduction or delay may be a reasonable alternative. G-CSF or GM-CSF is conventionally started 24–72 h after completion of chemotherapy and continued until a PMN count of 10,000/μL is achieved, unless a “depot” preparation of G-CSF such as pegfilgrastim is used, in which case one dose is administered at least 14 days before the next scheduled administration of chemotherapy. Patients with myeloid leukemias undergoing induction therapy may have a slight reduction in the duration of neutropenia if G-CSF is commenced after completion of therapy; this approach may be of particular value in elderly patients, but its influence on long-term outcome has not been defined. GM-CSF probably has a more restricted utility than G-CSF, with its use currently limited to patients after autologous bone marrow transplants, although proper head-to-head comparisons with G-CSF have not been conducted in most instances. GM-CSF may be associated with more systemic side effects. Dangerous degrees of thrombocytopenia do not frequently complicate the management of patients with solid tumors receiving cytotoxic chemotherapy (with the possible exception of certain carboplatin-containing regimens), but they are frequent in patients with certain hematologic neoplasms in which marrow is infiltrated with tumor. Severe bleeding related to thrombocytopenia occurs with increased frequency at platelet counts <20,000/μL and is very prevalent at counts <5000/μL. 
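The high-risk features that make G-CSF reasonable in established febrile neutropenia, as listed above, amount to an "any one feature present" rule. A minimal Python sketch (the feature labels and function name are illustrative, not a clinical tool):

```python
# Illustrative checklist from the text: G-CSF in established febrile
# neutropenia is reasonable when any high-risk feature is present.
# Labels are paraphrases of the text; not clinical guidance.

HIGH_RISK_FEATURES = {
    "sepsis syndrome",
    "invasive fungal infection",
    "hospitalized when fever developed",
    "pneumonia",
    "profound neutropenia (<0.1 x 10^9/L)",
    "age >65 years",
}

def gcsf_reasonable(patient_features):
    """True if the patient has at least one listed high-risk feature."""
    return bool(HIGH_RISK_FEATURES & set(patient_features))
```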
The precise “trigger” point at which to transfuse patients has been defined as a platelet count of 10,000/μL or less in patients without medical comorbidities that may increase the risk of bleeding. This issue is important not only because of the costs of frequent transfusion but also because unnecessary platelet transfusions expose the patient to the risks of allosensitization and loss of value from subsequent transfusion owing to rapid platelet clearance, as well as the infectious and hypersensitivity risks inherent in any transfusion. Prophylactic transfusions to keep platelets >20,000/μL are reasonable in patients with leukemia who are stressed by fever or concomitant medical conditions (the threshold for transfusion is 10,000/μL in patients with solid tumors and no other bleeding diathesis or physiologic stressors such as fever or hypotension, a level that might also reasonably be considered for leukemia patients who are thrombocytopenic but not stressed or bleeding). In contrast, patients with myeloproliferative states may have functionally altered platelets despite normal platelet counts, and transfusion with normal donor platelets should be considered for evidence of bleeding in these patients. Careful review of medication lists to prevent exposure to nonsteroidal anti-inflammatory agents and maintenance of clotting factor levels adequate to support near-normal prothrombin and partial thromboplastin time tests are important in minimizing the risk of bleeding in the thrombocytopenic patient. Certain cytokines in clinical investigation have shown an ability to increase platelets (e.g., IL-6, IL-1, thrombopoietin), but clinical benefit and safety are not yet proven. IL-11 (oprelvekin) is approved for use in the setting of expected thrombocytopenia, but its effects on platelet counts are small, and it is associated with side effects such as headache, fever, malaise, syncope, cardiac arrhythmias, and fluid retention. 
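The two prophylactic transfusion thresholds above (10,000/μL for unstressed patients, 20,000/μL for patients stressed by fever, hypotension, or a bleeding diathesis) can be encoded as a small helper. This is a hypothetical sketch of the text's numbers, not a transfusion protocol:

```python
# Illustrative encoding of the prophylactic platelet-transfusion triggers
# in the text. Function name and parameters are hypothetical.

def prophylactic_platelet_transfusion(count_per_ul, stressed=False):
    """stressed: fever, hypotension, or other physiologic stressor or
    bleeding diathesis. Trigger is 20,000/uL when stressed, else 10,000/uL."""
    threshold = 20_000 if stressed else 10_000
    return count_per_ul <= threshold
```

Under this encoding, a count of 15,000/μL triggers prophylactic transfusion only in a stressed patient.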
Eltrombopag and romiplostim are thrombopoietin agonists with demonstrated efficacy in certain thrombocytopenic states, but they have not been systematically studied in chemotherapy-induced thrombocytopenia. Anemia associated with chemotherapy can be managed by transfusion of packed RBCs. Transfusion is not undertaken until the hemoglobin falls to <80 g/L (8 g/dL), compromise of end-organ function occurs, or an underlying condition (e.g., coronary artery disease) calls for maintenance of hemoglobin >90 g/L (9 g/dL). Patients who are to receive therapy for >2 months on a “stable” regimen and who are likely to require continuing transfusions are also candidates for EPO. Randomized trials in certain tumors have raised the possibility that EPO use may promote tumor-related adverse events. This information should be considered in the care of individual patients. In the event EPO treatment is undertaken, maintenance of hemoglobin at 90–100 g/L (9–10 g/dL) should be the target. In the setting of adequate iron stores and serum EPO levels <100 mU/mL, EPO, 150 U/kg three times a week, can produce a slow increase in hemoglobin over about 2 months of administration. Depot formulations can be administered less frequently. It is unclear whether higher hemoglobin levels, up to 110–120 g/L (11–12 g/dL), are associated with improved quality of life to a degree that justifies the more intensive EPO use. Efforts to achieve levels at or above 120 g/L (12 g/dL) have been associated with increased thromboses and mortality rates. EPO may rescue hypoxemic cells from death and contribute to tumor radioresistance. The most common side effect of chemotherapy administration is nausea, with or without vomiting. Nausea may be acute (within 24 h of chemotherapy), delayed (>24 h), or anticipatory of the receipt of chemotherapy. 
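The transfusion and EPO hemoglobin thresholds above form a small set of numeric rules. A hedged Python sketch of those numbers (hypothetical names; not clinical guidance):

```python
# Illustrative thresholds for chemotherapy-associated anemia, transcribed
# from the text. Function name and parameters are hypothetical.

def transfuse_rbc(hgb_g_per_l, coronary_disease=False,
                  end_organ_compromise=False):
    """Transfuse when hemoglobin <80 g/L, when end-organ compromise
    occurs, or when coronary disease calls for maintaining >90 g/L."""
    if end_organ_compromise:
        return True
    threshold = 90 if coronary_disease else 80
    return hgb_g_per_l < threshold

EPO_TARGET_G_PER_L = (90, 100)   # maintenance target during EPO therapy
EPO_HARM_AT_G_PER_L = 120        # levels at/above this linked to thromboses
```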
Patients may likewise be stratified for their risk of susceptibility to nausea and vomiting, with increased risk in young, female, heavily pretreated patients without a history of alcohol or drug use but with a history of motion or morning sickness. Antineoplastic agents vary in their capacity to cause nausea and vomiting. Highly emetogenic drugs (>90% risk) include mechlorethamine, streptozotocin, DTIC, cyclophosphamide at >1500 mg/m2, and cisplatin; moderately emetogenic drugs (30–90% risk) include carboplatin, cytosine arabinoside (>1 g/m2), ifosfamide, conventional-dose cyclophosphamide, and anthracyclines; low-risk (10–30%) agents include 5FU, taxanes, etoposide, and bortezomib; and minimal risk (<10%) is afforded by treatment with antibodies, bleomycin, busulfan, fludarabine, and vinca alkaloids. Emesis is a reflex caused by stimulation of the vomiting center in the medulla. Input to the vomiting center comes from the chemoreceptor trigger zone (CTZ) and afferents from the peripheral gastrointestinal tract, cerebral cortex, and heart. The different emesis “syndromes” require distinct management approaches. In addition, a conditioned reflex may contribute to anticipatory nausea arising after repeated cycles of chemotherapy. Accordingly, antiemetic agents differ in their locus and timing of action. Combining agents from different classes, or the sequential use of different classes of agent, is the cornerstone of successful management of chemotherapy-induced nausea and vomiting. Of great importance are the prophylactic administration of agents and such psychological techniques as the maintenance of a supportive milieu, counseling, and relaxation to augment the action of antiemetic agents. Serotonin (5-HT3) receptor antagonists and neurokinin 1 (NK1) receptor antagonists are useful in “high-risk” chemotherapy regimens. The combination acts at both the peripheral gastrointestinal and CNS sites that control nausea and vomiting. 
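The four emetogenic-risk tiers above are, in effect, a lookup table. A sketch of that table in Python (drug groupings copied from the text; the dictionary structure and function name are illustrative):

```python
# Hedged sketch: emetogenic-risk tiers from the text as a lookup table.
# Groupings transcribed from the text; structure is illustrative only.

EMETOGENIC_RISK = {
    "high (>90%)": ["mechlorethamine", "streptozotocin", "DTIC",
                    "high-dose cyclophosphamide", "cisplatin"],
    "moderate (30-90%)": ["carboplatin", "high-dose cytarabine",
                          "ifosfamide", "conventional-dose cyclophosphamide",
                          "anthracyclines"],
    "low (10-30%)": ["5FU", "taxanes", "etoposide", "bortezomib"],
    "minimal (<10%)": ["antibodies", "bleomycin", "busulfan",
                       "fludarabine", "vinca alkaloids"],
}

def risk_category(drug):
    """Return the risk tier for a drug named as in the table above."""
    for category, drugs in EMETOGENIC_RISK.items():
        if drug in drugs:
            return category
    return "unknown"
```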
For example, the 5-HT3 blocker dolasetron, 100 mg intravenously or orally; dexamethasone, 12 mg; and the NK1 antagonist aprepitant, 125 mg orally, are combined on the day of administration of severely emetogenic regimens, with repetition of dexamethasone (8 mg) and aprepitant (80 mg) on days 2 and 3 for delayed nausea. Alternate 5-HT3 antagonists include ondansetron, given as 0.15 mg/kg intravenously for three doses just before and at 4 and 8 h after chemotherapy; palonosetron, at 0.25 mg over 30 s, 30 min before chemotherapy; and granisetron, given as a single dose of 0.01 mg/kg just before chemotherapy. Emesis from moderately emetic chemotherapy regimens may be prevented with a 5-HT3 antagonist and dexamethasone alone for patients not receiving doxorubicin and cyclophosphamide combinations; the latter combination requires 5-HT3/dexamethasone/aprepitant on day 1 but aprepitant alone on days 2 and 3. Emesis from low-emetic-risk regimens may be prevented with 8 mg of dexamethasone alone or with non-5-HT3, non-NK1 antagonist approaches, including the following. Antidopaminergic phenothiazines act directly at the CTZ and include prochlorperazine (Compazine), 10 mg intramuscularly or intravenously, 10–25 mg orally, or 25 mg per rectum every 4–6 h for up to four doses; and thiethylperazine, 10 mg by potentially all of the above routes every 6 h. Haloperidol is a butyrophenone dopamine antagonist given at 1 mg intramuscularly or orally every 8 h. Antihistamines such as diphenhydramine have little intrinsic antiemetic capacity but are frequently given to prevent or treat dystonic reactions that can complicate use of the antidopaminergic agents. Lorazepam is a short-acting benzodiazepine that provides an anxiolytic effect to augment the effectiveness of a variety of agents when used at 1–2 mg intramuscularly, intravenously, or orally every 4–6 h. 
Metoclopramide acts on peripheral dopamine receptors to augment gastric emptying and is used in high doses for highly emetogenic regimens (1–2 mg/kg intravenously 30 min before chemotherapy and every 2 h for up to three additional doses as needed); intravenous doses of 10–20 mg every 4–6 h as needed, or 50 mg orally 4 h before and 8 and 12 h after chemotherapy, are used for moderately emetogenic regimens. Δ-9-Tetrahydrocannabinol (Marinol) is a rather weak antiemetic compared with other available agents, but it may be useful for persisting nausea and is used orally at 10 mg every 3–4 h as needed. Regimens that include 5FU infusions and/or irinotecan may produce severe diarrhea. Similar to the vomiting syndromes, chemotherapy-induced diarrhea may be immediate or can occur in a delayed fashion up to 48–72 h after drug administration. Careful attention to maintaining hydration and electrolyte repletion, intravenously if necessary, is required, along with antimotility treatment such as “high-dose” loperamide, commenced with 4 mg at the first occurrence of diarrhea, with 2 mg repeated every 2 h until 12 h have passed without loose stools, not to exceed a total daily dose of 16 mg. Octreotide (100–150 μg), a somatostatin analogue, or opiate-based preparations may be considered for patients not responding to loperamide. Irritation and inflammation of the mucous membranes, particularly afflicting the oral and anal mucosa but potentially involving the gastrointestinal tract, may accompany cytotoxic chemotherapy. Mucositis is due to damage to the proliferating cells at the base of the mucosal squamous epithelia or in the intestinal crypts. Topical therapies, including anesthetics and barrier-creating preparations, may provide symptomatic relief in mild cases. Palifermin, or keratinocyte growth factor, a member of the fibroblast growth factor family, is effective in preventing severe mucositis in the setting of high-dose chemotherapy with stem cell transplantation for hematologic malignancies. 
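The "high-dose" loperamide schedule above carries a 16-mg daily cap, so the cap arithmetic determines how many 2-mg follow-up doses remain after the 4-mg loading dose. A small illustrative helper (names hypothetical; not dosing guidance):

```python
# Hedged sketch of the daily-cap arithmetic for the "high-dose" loperamide
# schedule in the text: 4 mg at onset, then 2 mg q2h, max 16 mg/day.

DAILY_CAP_MG = 16

def doses_remaining_today(total_given_mg, next_dose_mg=2):
    """Number of further 2-mg doses that fit under the 16-mg daily cap."""
    return max(0, (DAILY_CAP_MG - total_given_mg) // next_dose_mg)
```

After the 4-mg loading dose, at most six further 2-mg doses fit under the cap in a 24-h period.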
It may also prevent or ameliorate mucositis from radiation. Chemotherapeutic agents vary widely in causing alopecia, with anthracyclines, alkylating agents, and topoisomerase inhibitors reliably causing near-total alopecia when given at therapeutic doses. Antimetabolites are more variably associated with alopecia. Psychological support and the use of cosmetic resources are to be encouraged, and “chemo caps” that reduce scalp temperature to decrease the degree of alopecia should be discouraged, particularly during treatment with curative intent of neoplasms, such as leukemia or lymphoma, or in adjuvant breast cancer therapy, because the richly vascularized scalp can certainly harbor micrometastatic or disseminated disease. Cessation of ovulation and azoospermia reliably result from alkylating agent– and topoisomerase poison–containing regimens. The duration of these effects varies with age and sex. Males treated for Hodgkin’s disease with mechlorethamine- and procarbazine-containing regimens are effectively sterile, whereas fertility usually returns after regimens that include cisplatin, vinblastine, or etoposide and after bleomycin for testicular cancer. Sperm banking before treatment may be considered to support patients likely to be sterilized by treatment. Females experience amenorrhea with anovulation after alkylating agent therapy; they are likely to recover normal menses if treatment is completed before age 30 but are unlikely to recover menses after age 35. Even those who regain menses usually experience premature menopause. Because the magnitude and extent of decreased fertility can be difficult to predict, patients should be counseled to maintain effective contraception, preferably by barrier means, during and after therapy. Resumption of efforts to conceive should be considered in the context of the patient’s likely prognosis. 
Hormone replacement therapy should be undertaken in women who do not have a hormonally responsive tumor. For patients who have had a hormone-sensitive tumor primarily treated by a local modality, conventional practice would counsel against hormone replacement, but this issue is under investigation. Chemotherapy agents have variable effects on the success of pregnancy. All agents tend to carry an increased risk of adverse outcomes when administered during the first trimester, and strategies to delay chemotherapy, if possible, until after this milestone should be considered if the pregnancy is to continue to term. Patients in their second or third trimester can be treated with most regimens for the common neoplasms afflicting women in their childbearing years, with the exception of antimetabolites, particularly antifolates, which have notable teratogenic or fetotoxic effects throughout pregnancy. The need for anticancer chemotherapy per se is infrequently a clear basis to recommend termination of a concurrent pregnancy, although each treatment strategy in this circumstance must be tailored to the individual needs of the patient. Treatment with EGFR-directed small molecules (e.g., erlotinib, afatinib, lapatinib), antibodies (e.g., cetuximab, panitumumab), and mTOR antagonists (e.g., everolimus, temsirolimus) reliably produces an acneiform rash that can be a source of distress to patients; it can be ameliorated with topically applied clindamycin gels and low-potency corticosteroid creams. Diarrhea frequently accompanies tyrosine kinase inhibitor administration and may respond to antimotility agents such as loperamide or to stool-bulking agents. 
Anti-VEGFR-directed treatments, including the specific antibody bevacizumab and the “multikinase” inhibitors with anti-VEGFR activity, such as sorafenib, sunitinib, and pazopanib, reliably produce hypertension in a significant fraction of patients; this typically can be treated with lisinopril, amlodipine, or clonidine alone or in combination. More difficult to treat is proteinuria with resultant azotemia; this can be a basis for discontinuing treatment, depending on the clinical context. Thyroid function is prominently affected by chronic exposure to this group of multikinase inhibitors, including sorafenib and pazopanib, and periodic surveillance of thyroid-stimulating hormone and thyroxine (T4) levels during treatment is reasonable. Gastrointestinal perforations, arterial thromboses, and hemorrhage likewise have no specific treatments and may be a basis to avoid this class of agents. Palmar-plantar dysesthesia (“hand-foot syndrome”) can be seen after administration of these agents (as well as some cytotoxic agents, including gemcitabine and liposomal preparations of doxorubicin) and is a basis for considering dose reduction if not responsive to topical emollients and analgesics. Protein kinase antagonists as a class have been associated with poorly predicted hepatic and cardiac toxicities (imatinib, dasatinib, sorafenib, pazopanib) or cardiac conduction deficits, including prolonged QT interval (pazopanib). The occurrence of new cardiac or liver abnormalities in a patient receiving treatment with a protein kinase antagonist should prompt consideration of the risk versus benefit and the possible relation of the agent to the new adverse event. The existence of prior cardiac dysfunction is a relative contraindication to the use of certain targeted therapies (e.g., trastuzumab), although each patient’s needs should be individualized. Chronic effects of cancer treatment are reviewed in Chap. 125. 
Chapter 104 Infections in Patients with Cancer Robert W. Finberg 

Infections are a common cause of death and an even more common cause of morbidity in patients with a wide variety of neoplasms. Autopsy studies show that most deaths from acute leukemia and half of deaths from lymphoma are caused directly by infection. With more intensive chemotherapy, patients with solid tumors have also become more likely to die of infection. Fortunately, an evolving approach to prevention and treatment of infectious complications of cancer has decreased infection-associated mortality rates and will probably continue to do so. This accomplishment has resulted from three major steps: 
1. The practice of using “early empirical” antibiotics reduced mortality rates among patients with leukemia and bacteremia from 84% in 1965 to 44% in 1972. Recent studies suggest that the mortality rate due to infection in febrile neutropenic patients dropped to <10% by 2013. This dramatic improvement is attributed to early intervention with appropriate antimicrobial therapy. 
2. “Empirical” antifungal therapy has also lowered the incidence of disseminated fungal infection, with dramatic decreases in mortality rates. An antifungal agent is administered—on the basis of likely fungal infection—to neutropenic patients who, after 4–7 days of antibiotic therapy, remain febrile but have no positive cultures. 
3. Use of antibiotics for afebrile neutropenic patients as broad-spectrum prophylaxis against infections has decreased both mortality and morbidity even further. 
The current approach to treatment of severely neutropenic patients (e.g., those receiving high-dose chemotherapy for leukemia or high-grade lymphoma) is based on initial prophylactic therapy at the onset of neutropenia, subsequent “empirical” antibacterial therapy targeting the organisms whose involvement is likely in light of physical findings (most often fever alone), and finally “empirical” antifungal therapy based on the known likelihood that fungal infection will become a serious issue after 4–7 days of broad-spectrum antibacterial therapy. A physical predisposition to infection in patients with cancer (Table 104-1) can be a result of the neoplasm’s production of a break in the skin. For example, a squamous cell carcinoma may cause local invasion of the epidermis, which allows bacteria to gain access to subcutaneous tissue and permits the development of cellulitis. The artificial closing of a normally patent orifice can also predispose to infection; for example, obstruction of a ureter by a tumor can cause urinary tract infection, and obstruction of the bile duct can cause cholangitis. Part of the host’s normal defense against infection depends on the continuous emptying of a viscus; without emptying, a few bacteria that are present as a result of bacteremia or local transit can multiply and cause disease. A similar problem can affect patients whose lymph node integrity has been disrupted by radical surgery, particularly patients who have had radical node dissections. A common clinical problem following radical mastectomy is the development of cellulitis (usually caused by streptococci or staphylococci) because of lymphedema and/or inadequate lymph drainage. In most cases, this problem can be addressed by local measures designed to prevent fluid accumulation and breaks in the skin, but antibiotic prophylaxis has been necessary in refractory cases. A life-threatening problem common to many cancer patients is the loss of the reticuloendothelial capacity to clear microorganisms after splenectomy, which may be performed as part of the management of hairy cell leukemia, chronic lymphocytic leukemia (CLL), and chronic myelogenous leukemia (CML) and in Hodgkin’s disease. 
Even after curative therapy for the underlying disease, the lack of a spleen predisposes such patients to rapidly fatal infections. The loss of the spleen through trauma similarly predisposes the normal host to overwhelming infection throughout life. The splenectomized patient should be counseled about the risks of infection with certain organisms, such as the protozoan Babesia (Chap. 249) and Capnocytophaga canimorsus, a bacterium carried in the mouths of animals (Chaps. 167e and 183e). Because encapsulated bacteria (Streptococcus pneumoniae, Haemophilus influenzae, and Neisseria meningitidis) are the organisms most commonly associated with postsplenectomy sepsis, splenectomized persons should be vaccinated (and revaccinated; Table 104-2 and Chap. 148) against the capsular polysaccharides of these organisms. Many clinicians recommend giving splenectomized patients a small supply of antibiotics effective against S. pneumoniae, N. meningitidis, and H. influenzae to avert rapid, overwhelming sepsis in the event that they cannot present for medical attention immediately after the onset of fever or other signs or symptoms of bacterial infection. A few tablets of amoxicillin/clavulanic acid (or levofloxacin if resistant strains of S. pneumoniae are prevalent locally) are a reasonable choice for this purpose.

TABLE 104-1 Disruption of Normal Barriers That May Predispose to Infections in Patients with Cancer

| Type of Defense | Specific Lesion | Cells Involved | Organism | Cancer Association | Disease |
|---|---|---|---|---|---|
| Physical barrier | Breaks in skin | Skin epithelial cells | Staphylococci, streptococci | Head and neck, squamous cell carcinoma | Cellulitis, extensive skin infection |
| Emptying of fluid collections | Occlusion of orifices: ureters, bile duct, colon | Luminal epithelial cells | | Renal, ovarian, biliary tree, metastatic diseases of many cancers | Rapid, overwhelming bacteremia; urinary tract infection |
| Lymphatic function | Node dissection | | Staphylococci, streptococci | | |
| Splenic clearance of microorganisms | Splenectomy | | Streptococcus pneumoniae, Haemophilus influenzae, Neisseria meningitidis, Babesia, Capnocytophaga canimorsus | Hodgkin’s disease, leukemia | Rapid, overwhelming sepsis |
| Phagocytosis | Lack of granulocytes | | Staphylococci, streptococci, enteric organisms, fungi | Acute myeloid and acute lymphocytic leukemias, hairy cell leukemia | |
| Humoral immunity | Lack of antibody | | S. pneumoniae, H. influenzae, N. meningitidis | Chronic lymphocytic leukemia, multiple myeloma | Infections with encapsulated organisms, sinusitis, pneumonia |
| Cellular immunity | Lack of T cells | T cells and macrophages | Mycobacterium tuberculosis, Listeria, herpesviruses, fungi, intracellular parasites | Hodgkin’s disease, leukemia, T cell lymphoma | Infections with intracellular bacteria, fungi, parasites; virus reactivation |

Footnotes to Table 104-2:
a. The latest recommendations by the Advisory Committee on Immunization Practices and the CDC guidelines can be found at http://www.cdc.gov/vaccines.
b. A single dose of Tdap (tetanus–diphtheria–acellular pertussis), followed by a booster dose of Td (tetanus–diphtheria) every 10 years, is recommended for adults.
c. Live-virus vaccine is contraindicated; inactivated vaccine should be used.
d. Two types of vaccine are used to prevent pneumococcal disease. A conjugate vaccine active against 13 serotypes (13-valent pneumococcal conjugate vaccine, or PCV13) is currently administered in three separate doses to all children. A polysaccharide vaccine active against 23 serotypes (23-valent pneumococcal polysaccharide vaccine, or PPSV23) elicits titers of antibody lower than those achieved with the conjugate vaccine, and immunity may wane more rapidly. Because the ablative chemotherapy given to recipients of hematopoietic stem cell transplants (HSCTs) eradicates immunologic memory, revaccination is recommended for all such patients. Vaccination is much more effective once immunologic reconstitution has occurred; however, because of the need to prevent serious disease, pneumococcal vaccine should be administered 6–12 months after transplantation in most cases. Because PPSV23 includes serotypes not present in PCV13, HSCT recipients should receive a dose of PPSV23 at least 8 weeks after the last dose of PCV13. Although antibody titers from PPSV23 clearly decay, experience with multiple doses of PPSV23 is limited, as are data on the safety, toxicity, and efficacy of such a regimen. For this reason, the CDC currently recommends the administration of one additional dose of PPSV23 at least 5 years after the last dose to immunocompromised patients, including transplant recipients as well as patients with Hodgkin’s disease, multiple myeloma, lymphoma, or generalized malignancies. Beyond this single additional dose, further doses are not recommended at this time.
e. Meningococcal conjugate vaccine (MenACWY) is recommended for adults ≤55 years old, and meningococcal polysaccharide vaccine (MPSV4) is recommended for those ≥56 years old.
f. Includes both varicella vaccine for children and zoster vaccine for adults.
g. Contact the manufacturer for more information on use in children with acute lymphocytic leukemia.

Infections in Patients with Cancer

The level of suspicion of infections with certain organisms should depend on the type of cancer diagnosed (Table 104-3). Diagnosis of multiple myeloma or CLL should alert the clinician to the possibility of hypogammaglobulinemia. While immunoglobulin replacement therapy can be effective, in most cases prophylactic antibiotics are a cheaper, more convenient method of preventing bacterial infections in CLL patients with hypogammaglobulinemia. Patients with acute lymphocytic leukemia (ALL), patients with non-Hodgkin’s lymphoma, and all cancer patients treated with high-dose glucocorticoids (or glucocorticoid-containing chemotherapy regimens) should receive antibiotic prophylaxis for Pneumocystis infection (Table 104-3) for the duration of their chemotherapy.
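The revaccination intervals described above in the footnotes to Table 104-2 (a first PPSV23 dose at least 8 weeks after the last PCV13 dose, and a single additional PPSV23 dose at least 5 years later in immunocompromised patients) reduce to simple date arithmetic. The sketch below is illustrative only: the function names are invented, the 5-year interval is approximated as 1825 days, and nothing here substitutes for the ACIP/CDC guidance cited above.

```python
from datetime import date, timedelta

# Minimum intervals taken from the text above (illustrative, not clinical guidance).
MIN_PCV13_TO_PPSV23 = timedelta(weeks=8)       # PPSV23 at least 8 weeks after last PCV13
MIN_PPSV23_REPEAT = timedelta(days=5 * 365)    # additional PPSV23 at least 5 years later

def ppsv23_due(last_pcv13: date, on: date) -> bool:
    """First PPSV23 dose: at least 8 weeks after the last PCV13 dose."""
    return on - last_pcv13 >= MIN_PCV13_TO_PPSV23

def ppsv23_booster_due(last_ppsv23: date, on: date) -> bool:
    """Single additional PPSV23 dose: at least 5 years after the last dose
    (immunocompromised patients, including transplant recipients)."""
    return on - last_ppsv23 >= MIN_PPSV23_REPEAT
```

For example, a patient whose last PCV13 dose was given on January 1 would not yet be eligible for PPSV23 on February 1 (about 4 weeks), but would be by April 1 (about 13 weeks).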
In addition to exhibiting susceptibility to certain infectious organisms, patients with cancer are likely to manifest their infections in characteristic ways. For example, fever—generally a sign of infection in normal hosts—continues to be a reliable indicator in neutropenic patients. In contrast, patients receiving glucocorticoids and agents that impair T cell function and cytokine secretion may have serious infections in the absence of fever. Similarly, neutropenic patients commonly present with cellulitis without purulence and with pneumonia without sputum or even x-ray findings (see below). The use of monoclonal antibodies that target B and T cells, as well as of drugs that interfere with lymphocyte signal transduction, is associated with reactivation of latent infections. The use of rituximab, the antibody to CD20 (a B cell surface protein), is associated with reactivation of latent tuberculosis as well as of latent viral infections, including hepatitis B and cytomegalovirus (CMV) infection. Like organ transplant recipients (Chap. 169), patients with latent bacterial disease (such as tuberculosis) and latent viral disease (such as herpes simplex or zoster) should be carefully monitored for reactivation disease. Skin lesions are common in cancer patients, and the appearance of these lesions may permit the diagnosis of systemic bacterial or fungal infection. While cellulitis caused by skin organisms such as Streptococcus or Staphylococcus is common, neutropenic patients—i.e., those with <500 functional polymorphonuclear leukocytes (PMNs)/μL—and patients with impaired blood or lymphatic drainage may develop infections with unusual organisms. Innocent-looking macules or papules may be the first sign of bacterial or fungal sepsis in immunocompromised patients (Fig. 104-1). In the neutropenic host, a macule progresses rapidly to ecthyma gangrenosum (see Fig.
25e-35), a usually painless, round, necrotic lesion consisting of a central black or gray-black eschar with surrounding erythema. Ecthyma gangrenosum, which is located in nonpressure areas (as distinguished from necrotic lesions associated with lack of circulation), is often associated with Pseudomonas aeruginosa bacteremia (Chap. 189) but may be caused by other bacteria. Candidemia (Chap. 240) is also associated with a variety of skin conditions (see Fig. 25e-38) and commonly presents as a maculopapular rash. Punch biopsy of the skin may be the best method for diagnosis.

TABLE 104-4 Organisms Likely to Cause Infections in Granulocytopenic Patients
Gram-negative bacilli: Escherichia coli, Serratia spp., Klebsiella spp., Acinetobacter spp.,a Pseudomonas aeruginosa, Stenotrophomonas spp., Enterobacter spp., Citrobacter spp., non-aeruginosa Pseudomonas spp.a
Fungi: Candida spp., Mucor/Rhizopus, Aspergillus spp.
aOften associated with intravenous catheters.

Cellulitis, an acute spreading inflammation of the skin, is most often caused by infection with group A Streptococcus or Staphylococcus aureus, virulent organisms normally found on the skin (Chap. 156). Although cellulitis tends to be circumscribed in normal hosts, it may spread rapidly in neutropenic patients. A tiny break in the skin may lead to spreading cellulitis, which is characterized by pain and erythema; in affected patients, signs of infection (e.g., purulence) are often lacking. What might be a furuncle in a normal host may require amputation because of uncontrolled infection in a patient with leukemia, and a dramatic response to an infection that would be trivial in a normal host can be the first sign of leukemia. Fortunately, granulocytopenic patients are likely to be infected with certain types of organisms (Table 104-4); thus the selection of an antibiotic regimen is somewhat easier than it might otherwise be (see “Antibacterial Therapy,” below). It is essential to recognize cellulitis early and to treat it aggressively.
Patients who are neutropenic or who have previously received antibiotics for other reasons may develop cellulitis with unusual organisms (e.g., Escherichia coli, Pseudomonas, or fungi). Early treatment, even of innocent-looking lesions, is essential to prevent necrosis and loss of tissue. Debridement to prevent spread may sometimes be necessary early in the course of disease, but it can often be performed after chemotherapy, when the PMN count increases.

FIGURE 104-1 A. Papules related to Escherichia coli bacteremia in a patient with acute lymphocytic leukemia. B. The same lesions on the following day.

Sweet syndrome, or febrile neutrophilic dermatosis, was originally described in women with elevated white blood cell (WBC) counts. The disease is characterized by the presence of leukocytes in the lower dermis, with edema of the papillary body. Ironically, this disease is now usually seen in neutropenic patients with cancer, most often in association with acute myeloid leukemia (AML) but also in association with a variety of other malignancies. Sweet syndrome usually presents as red or bluish-red papules or nodules that may coalesce and form sharply bordered plaques (see Fig. 25e-41). The edema may suggest vesicles, but on palpation the lesions are solid, and vesicles probably never arise in this disease. The lesions are most common on the face, neck, and arms. On the legs, they may be confused with erythema nodosum (see Fig. 25e-40). The development of lesions is often accompanied by high fevers and an elevated erythrocyte sedimentation rate. Both the lesions and the temperature elevation respond dramatically to glucocorticoid administration. Treatment begins with high doses of glucocorticoids (prednisone, 60 mg/d) followed by tapered doses over the next 2–3 weeks. Data indicate that erythema multiforme (see Fig.
25e-25) with mucous membrane involvement is often associated with herpes simplex virus (HSV) infection and is distinct from Stevens-Johnson syndrome, which is associated with drugs and tends to have a more widespread distribution. Because cancer patients are both immunosuppressed (and therefore susceptible to herpes infections) and heavily treated with drugs (and therefore subject to Stevens-Johnson syndrome; see Fig. 46e-4), both of these conditions are common in this population. Cytokines, which are used as adjuvants or primary treatments for cancer, can themselves cause characteristic rashes, further complicating the differential diagnosis. This phenomenon is a particular problem in bone marrow transplant recipients (Chap. 169), who, in addition to having the usual chemotherapy-, antibiotic-, and cytokine-induced rashes, are plagued by graft-versus-host disease. Because IV catheters are commonly used in cancer chemotherapy and are prone to cause infection (Chap. 168), they pose a major problem in the care of patients with cancer. Some catheter-associated infections can be treated with antibiotics, whereas in others the catheter must be removed (Table 104-5). If the patient has a “tunneled” catheter (which consists of an entrance site, a subcutaneous tunnel, and an exit site), a red streak over the subcutaneous part of the line (the tunnel) is grounds for immediate device removal. Failure to remove catheters under these circumstances may result in extensive cellulitis and tissue necrosis. More common than tunnel infections are exit-site infections, often with erythema around the area where the line penetrates the skin. Most authorities (Chap. 172) recommend treatment (usually with vancomycin) for an exit-site infection caused by coagulase-negative Staphylococcus. Treatment of coagulase-positive staphylococcal infection is associated with a poorer outcome, and it is advisable to remove the catheter if possible.
Similarly, most clinicians remove catheters associated with infections due to P. aeruginosa and Candida species, because such infections are difficult to treat and bloodstream infections with these organisms are likely to be deadly. Catheter infections caused by Burkholderia cepacia, Stenotrophomonas species, Agrobacterium species, Acinetobacter baumannii, Pseudomonas species other than aeruginosa, and carbapenem-resistant Enterobacteriaceae are likely to be very difficult to eradicate with antibiotics alone. Similarly, isolation of Bacillus, Corynebacterium, and Mycobacterium species should prompt removal of the catheter. Infections of the Mouth The oral cavity is rich in aerobic and anaerobic bacteria (Chap. 201) that normally live in a commensal relationship with the host. The antimetabolic effects of chemotherapy cause a breakdown of mucosal host defenses, leading to ulceration of the mouth and the potential for invasion by resident bacteria. Mouth ulcerations afflict most patients receiving cytotoxic chemotherapy and have been associated with viridans streptococcal bacteremia. Candida infections of the mouth are very common. Fluconazole is clearly effective in the treatment of both local infections (thrush) and systemic infections (esophagitis) due to Candida albicans. Other azoles (e.g., voriconazole) as well as echinocandins offer similar efficacy as well as activity against the fluconazole-resistant organisms that are associated with chronic fluconazole treatment (Chap. 240). Noma (cancrum oris), commonly seen in malnourished children, is a penetrating disease of the soft and hard tissues of the mouth and adjacent sites, with resulting necrosis and gangrene. It has a counterpart in immunocompromised patients and is thought to be due to invasion of the tissues by Bacteroides, Fusobacterium, and other normal inhabitants of the mouth. Noma is associated with debility, poor oral hygiene, and immunosuppression.
Viruses, particularly HSV, are a prominent cause of morbidity in immunocompromised patients, in whom they are associated with severe mucositis. The use of acyclovir, either prophylactically or therapeutically, is of value. Esophageal Infections The differential diagnosis of esophagitis (usually presenting as substernal chest pain upon swallowing) includes herpes simplex and candidiasis, both of which are readily treatable. Lower Gastrointestinal Tract Disease Hepatic candidiasis (Chap. 240) results from seeding of the liver (usually from a gastrointestinal source) in neutropenic patients. It is most common among patients being treated for AML and usually presents symptomatically around the time the neutropenia resolves. The characteristic picture is that of persistent fever unresponsive to antibiotics, abdominal pain and tenderness or nausea, and elevated serum levels of alkaline phosphatase in a patient with hematologic malignancy who has recently recovered from neutropenia. The diagnosis of this disease (which may present in an indolent manner and persist for several months) is based on the finding of yeasts or pseudohyphae in granulomatous lesions. Hepatic ultrasound or CT may reveal bull’s-eye lesions. MRI scans reveal small lesions not visible by other imaging modalities. The pathology (a granulomatous response) and the timing (with resolution of neutropenia and an elevation in granulocyte count) suggest that the host response to Candida is an important component of the manifestations of disease. In many cases, although organisms are visible, cultures of biopsied material may be negative. The designation hepatosplenic candidiasis or hepatic candidiasis is a misnomer because the disease often involves the kidneys and other tissues; the term chronic disseminated candidiasis may be more appropriate.
Because of the risk of bleeding with liver biopsy, diagnosis is often based on imaging studies (MRI, CT). Treatment should be directed to the causative agent (usually C. albicans but sometimes Candida tropicalis or other less common Candida species). Typhlitis Typhlitis (also referred to as necrotizing colitis, neutropenic colitis, necrotizing enteropathy, ileocecal syndrome, and cecitis) is a clinical syndrome of fever and right-lower-quadrant (or generalized abdominal) tenderness in an immunosuppressed host. This syndrome is classically seen in neutropenic patients after chemotherapy with cytotoxic drugs. It may be more common among children than among adults and appears to be much more common among patients with AML or ALL than among those with other types of cancer. Physical examination reveals right-lower-quadrant tenderness, with or without rebound tenderness. Associated diarrhea (often bloody) is common, and the diagnosis can be confirmed by the finding of a thickened cecal wall on CT, MRI, or ultrasonography. Plain films may reveal a right-lower-quadrant mass, but CT with contrast or MRI is a much more sensitive means of diagnosis. Although surgery is sometimes attempted to avoid perforation from ischemia, most cases resolve with medical therapy alone. The disease is sometimes associated with positive blood cultures (which usually yield aerobic gram-negative bacilli), and therapy is recommended for a broad spectrum of bacteria (particularly gram-negative bacilli, which are likely to be found in the bowel flora). Surgery is indicated in the case of perforation. Clostridium difficile–Induced Diarrhea Patients with cancer are predisposed to the development of C. difficile diarrhea (Chap. 161) as a consequence of chemotherapy alone. Thus, they may test positive for C. difficile even without receiving antibiotics. Obviously, such patients are also subject to C. difficile–induced diarrhea as a result of antibiotic pressure. C. 
difficile should always be considered as a possible cause of diarrhea in cancer patients who have received either chemotherapy or antibiotics. CENTRAL NERVOUS SYSTEM–SPECIFIC SYNDROMES Meningitis The presentation of meningitis in patients with lymphoma or CLL and in patients receiving chemotherapy (particularly with glucocorticoids) for solid tumors suggests a diagnosis of cryptococcal or listerial infection. As noted previously, splenectomized patients are susceptible to rapid, overwhelming infection with encapsulated bacteria (including S. pneumoniae, H. influenzae, and N. meningitidis). Similarly, patients who are antibody-deficient (e.g., those with CLL, those who have received intensive chemotherapy, or those who have undergone bone marrow transplantation) are likely to have infections caused by these bacteria. Other cancer patients, however, because of their defective cellular immunity, are likely to be infected with other pathogens (Table 104-3). Central nervous system (CNS) tuberculosis should be considered, especially in patients from countries where tuberculosis is highly prevalent in the population. Encephalitis The spectrum of disease resulting from viral encephalitis is expanded in immunocompromised patients. A predisposition to infections with intracellular organisms similar to those encountered in patients with AIDS (Chap. 226) is seen in cancer patients receiving (1) high-dose cytotoxic chemotherapy, (2) chemotherapy affecting T cell function (e.g., fludarabine), or (3) antibodies that eliminate T cells (e.g., anti-CD3 or alemtuzumab, an anti-CD52 antibody) or that block cytokine activity (anti–tumor necrosis factor agents or interleukin-1 receptor antagonists). Infection with varicella-zoster virus (VZV) has been associated with encephalitis that may be caused by VZV-related vasculitis. Chronic viral infections may also be associated with dementia and encephalitic presentations. A diagnosis of progressive multifocal leukoencephalopathy (Chap.
164) should be considered when a patient who has received chemotherapy (rituximab in particular) presents with dementia (Table 104-6). Other abnormalities of the CNS that may be confused with infection include normal-pressure hydrocephalus and vasculitis resulting from CNS irradiation. It may be possible to differentiate these conditions by MRI. Brain Masses Mass lesions of the brain most often present as headache with or without fever or neurologic abnormalities. Infections associated with mass lesions may be caused by bacteria (particularly Nocardia), fungi (particularly Cryptococcus or Aspergillus), or parasites (Toxoplasma). Epstein-Barr virus (EBV)–associated lymphoma may also present as single—or sometimes multiple—mass lesions of the brain. A biopsy may be required for a definitive diagnosis. Pneumonia (Chap. 153) in immunocompromised patients may be difficult to diagnose because conventional methods of diagnosis depend on the presence of neutrophils. Bacterial pneumonia in neutropenic patients may present without purulent sputum—or, in fact, without any sputum at all—and may not produce physical findings suggestive of chest consolidation (rales or egophony). In granulocytopenic patients with persistent or recurrent fever, the chest x-ray pattern may help to localize an infection and thus to determine which investigative tests and procedures should be undertaken and which therapeutic options should be considered (Table 104-7). In this setting, a simple chest x-ray is a screening tool; because the impaired host response results in less evidence of consolidation or infiltration, high-resolution CT is recommended for the diagnosis of pulmonary infections. The difficulties encountered in the management of pulmonary infiltrates relate in part to the difficulties of performing diagnostic procedures on the patients involved. When platelet counts can be increased to adequate levels by transfusion,
microscopic and microbiologic evaluation of the fluid obtained by endoscopic bronchial lavage is often diagnostic. Lavage fluid should be cultured for Mycoplasma, Chlamydia, Legionella, Nocardia, more common bacterial pathogens, fungi, and viruses. In addition, the possibility of Pneumocystis pneumonia should be considered, especially in patients with ALL or lymphoma who have not received prophylactic trimethoprim-sulfamethoxazole (TMP-SMX). The characteristics of the infiltrate may be helpful in decisions about further diagnostic and therapeutic maneuvers. Nodular infiltrates suggest fungal pneumonia (e.g., that caused by Aspergillus or Mucor). Such lesions may best be approached by visualized biopsy procedures. It is worth noting that while bacterial pneumonias classically present as lobar infiltrates in normal hosts, bacterial pneumonias in granulocytopenic hosts present with a paucity of signs, symptoms, or radiographic abnormalities; thus, the diagnosis is difficult. Aspergillus species (Chap. 241) can colonize the skin and respiratory tract or cause fatal systemic illness. Although this fungus may cause aspergillomas in a previously existing cavity or may produce allergic bronchopulmonary disease in some patients, the major problem posed by this genus in neutropenic patients is invasive disease, primarily due to Aspergillus fumigatus or Aspergillus flavus. The organisms enter the host following colonization of the respiratory tract, with subsequent invasion of blood vessels. Because of this ability of the fungi to invade blood vessels, the disease is likely to present as a thrombotic or embolic event. The risk of infection with Aspergillus correlates directly with the duration of neutropenia. In prolonged neutropenia, positive surveillance cultures for nasopharyngeal colonization with Aspergillus may predict the development of disease.
Patients with Aspergillus infection often present with pleuritic chest pain and fever, which are sometimes accompanied by cough. Hemoptysis may be an ominous sign. Chest x-rays may reveal new focal infiltrates or nodules. Chest CT may reveal a characteristic halo consisting of a mass-like infiltrate surrounded by an area of low attenuation. The presence of a “crescent sign” on chest x-ray or chest CT, in which the mass progresses to central cavitation, is characteristic of invasive Aspergillus infection but may develop as the lesions are resolving. In addition to causing pulmonary disease, Aspergillus may invade through the nose or palate, with deep sinus penetration. The appearance of a discolored area in the nasal passages or on the hard palate should prompt a search for invasive Aspergillus. This situation is likely to require surgical debridement. Catheter infections with Aspergillus usually require both removal of the catheter and antifungal therapy. Diffuse interstitial infiltrates suggest viral, parasitic, or Pneumocystis pneumonia. If the patient has a diffuse interstitial pattern on chest x-ray, it may be reasonable, while considering invasive diagnostic procedures, to institute empirical treatment for Pneumocystis with TMP-SMX and for Chlamydia, Mycoplasma, and Legionella with a quinolone or azithromycin. Noninvasive procedures, such as staining of induced sputum smears for Pneumocystis, serum cryptococcal antigen tests, and urine testing for Legionella antigen, may be helpful. Serum galactomannan and β-d-glucan tests may be of value in diagnosing Aspergillus infection, but their utility is limited by their lack of sensitivity and specificity. The presence of an elevated level of β-d-glucan in the serum of a patient being treated for cancer who is not receiving prophylaxis against Pneumocystis suggests the diagnosis of Pneumocystis pneumonia. 
Infections with viruses that cause only upper respiratory symptoms in immunocompetent hosts, such as respiratory syncytial virus (RSV), influenza viruses, and parainfluenza viruses, may be associated with fatal pneumonitis in immunocompromised hosts. CMV reactivation occurs in cancer patients receiving chemotherapy, but CMV pneumonia is most common among HSCT recipients (Chap. 169). Polymerase chain reaction testing now allows rapid diagnosis of viral pneumonia, which can lead to treatment in some cases (e.g., influenza). Multiplex studies that can detect a wide array of viruses in the lung and upper respiratory tract are now available and will lead to specific diagnoses of viral pneumonias. Bleomycin is the most common cause of chemotherapy-induced lung disease. Other causes include alkylating agents (such as cyclophosphamide, chlorambucil, and melphalan), nitrosoureas (carmustine [BCNU], lomustine [CCNU], and methyl-CCNU), busulfan, procarbazine, methotrexate, and hydroxyurea. Both infectious and noninfectious (drug- and/or radiation-induced) pneumonitis can cause fever and abnormalities on chest x-ray; thus, the differential diagnosis of an infiltrate in a patient receiving chemotherapy encompasses a broad range of conditions (Table 104-7). The treatment of radiation pneumonitis (which may respond dramatically to glucocorticoids) or drug-induced pneumonitis is different from that of infectious pneumonia, and a biopsy may be important in the diagnosis. Unfortunately, no definitive diagnosis can be made in ∼30% of cases, even after bronchoscopy. Open-lung biopsy is the gold standard of diagnostic techniques. Biopsy via a visualized thoracostomy can replace an open procedure in many cases. When a biopsy cannot be performed, empirical treatment can be undertaken; a quinolone or an erythromycin derivative (azithromycin) and TMP-SMX are used in the case of diffuse infiltrates, and an antifungal agent is administered in the case of nodular infiltrates.
The risks should be weighed carefully in these cases. If inappropriate drugs are administered, empirical treatment may prove toxic or ineffective; either of these outcomes may be riskier than biopsy. Patients with Hodgkin’s disease are prone to persistent infections by Salmonella, sometimes (and particularly often in elderly patients) affecting a vascular site. The use of IV catheters deliberately lodged in the right atrium is associated with a high incidence of bacterial endocarditis, presumably related to valve damage followed by bacteremia. Nonbacterial thrombotic endocarditis (marantic endocarditis) has been described in association with a variety of malignancies (most often solid tumors) and may follow bone marrow transplantation as well. The presentation of an embolic event with a new cardiac murmur suggests this diagnosis. Blood cultures are negative in this disease of unknown pathogenesis. Infections of the endocrine system have been described in immunocompromised patients. Candida infection of the thyroid may be difficult to diagnose during the neutropenic period. It can be defined by indium-labeled WBC scans or gallium scans after neutrophil counts increase. CMV infection can cause adrenalitis with or without resulting adrenal insufficiency. The presentation of a sudden endocrine anomaly in an immunocompromised patient can be a sign of infection in the involved end organ. Infection that is a consequence of vascular compromise, resulting in gangrene, can occur when a tumor restricts the blood supply to muscles, bones, or joints. The process of diagnosis and treatment of such infection is similar to that in normal hosts, with the following caveats: 1. In terms of diagnosis, a lack of physical findings resulting from a lack of granulocytes in the granulocytopenic patient should make the clinician more aggressive in obtaining tissue rather than more willing to rely on physical signs. 2. 
In terms of therapy, aggressive debridement of infected tissues may be required. However, it is usually difficult to operate on patients who have recently received chemotherapy, both because of a lack of platelets (which results in bleeding complications) and because of a lack of WBCs (which may lead to secondary infection). A blood culture positive for Clostridium perfringens—an organism commonly associated with gas gangrene—can have a number of meanings (Chap. 179). Clostridium septicum bacteremia is associated with the presence of an underlying malignancy. Bloodstream infections with intestinal organisms such as Streptococcus bovis biotype 1 and C. perfringens may arise spontaneously from lower gastrointestinal lesions (tumor or polyps); alternatively, these lesions may be harbingers of invasive disease. The clinical setting must be considered in order to define the appropriate treatment for each case. Infections of the urinary tract are common among patients whose ureteral excretion is compromised (Table 104-1). Candida, which has a predilection for the kidney, can invade either from the bloodstream or in a retrograde manner (via the ureters or bladder) in immunocompromised patients. The presence of “fungus balls” or persistent candiduria suggests invasive disease. Persistent funguria (with Aspergillus as well as Candida) should prompt a search for a nidus of infection in the kidney. Certain viruses are typically seen only in immunosuppressed patients. BK virus (polyomavirus hominis 1) has been documented in the urine of bone marrow transplant recipients and, like adenovirus, may be associated with hemorrhagic cystitis. It is beyond the scope of this chapter to detail how all the immunologic abnormalities that result from cancer or from chemotherapy for cancer lead to infections. Disorders of the immune system are discussed in other sections of this book.
As has been noted, patients with antibody deficiency are predisposed to overwhelming infection with encapsulated bacteria (including S. pneumoniae, H. influenzae, and N. meningitidis). Infections that result from the lack of a functional cellular immune system are described in Chap. 226. It is worth mentioning, however, that patients undergoing intensive chemotherapy for any form of cancer will have not only defects due to granulocytopenia but also lymphocyte dysfunction, which may be profound. Thus, these patients—especially those receiving glucocorticoid-containing regimens or drugs that inhibit either T cell activation (calcineurin inhibitors or drugs like fludarabine, which affect lymphocyte function) or cytokine induction—should be given prophylaxis for Pneumocystis pneumonia. Patients receiving treatment that eliminates B cells (e.g., with the anti-CD20 antibody rituximab) are especially vulnerable to intercurrent viral infections. The incidence of progressive multifocal leukoencephalopathy (caused by JC virus) is elevated in these patients. Initial studies in the 1960s revealed a dramatic increase in the incidence of infections (fatal and nonfatal) among cancer patients with a granulocyte count of <500/μL. The use of prophylactic antibacterial agents has reduced the number of bacterial infections, but 35–78% of febrile neutropenic patients being treated for hematologic malignancies develop infections at some time during chemotherapy. Aerobic pathogens (both gram-positive and gram-negative) predominate in all series, but the exact organisms isolated vary from center to center. Infections with anaerobic organisms are uncommon. Geographic patterns affect the types of fungi isolated. Tuberculosis and malaria are common causes of fever in the developing world and may present in this setting as well.
Neutropenic patients are unusually susceptible to infection with a wide variety of bacteria; thus, antibiotic therapy should be initiated promptly to cover likely pathogens if infection is suspected. Indeed, early initiation of antibacterial agents is mandatory to prevent deaths. Like most immunocompromised patients, neutropenic patients are threatened by their own microbial flora, including gram-positive and gram-negative organisms found commonly on the skin and mucous membranes and in the bowel (Table 104-4). Because treatment with narrow-spectrum agents leads to infection with organisms not covered by the antibiotics used, the initial regimen should target all pathogens likely to be the initial causes of bacterial infection in neutropenic hosts. As noted in the algorithm shown in Fig. 104-2, administration of antimicrobial agents is routinely continued until neutropenia resolves—i.e., the granulocyte count is sustained above 500/μL for at least 2 days. In some cases, patients remain febrile after resolution of neutropenia.

FIGURE 104-2 Algorithm for the diagnosis and treatment of fever and neutropenia. Initial evaluation includes physical examination (skin lesions, mucous membranes, IV catheter sites, perirectal area); the absolute granulocyte count (<500/μL) and expected duration of neutropenia; and blood cultures, a chest radiograph, and other appropriate studies based on the history (sputum, urine, skin biopsy). Initial therapy: treat with antibiotic(s) effective against both gram-negative and gram-positive aerobes. Follow-up: if an obvious infectious site is found, treat the infection with the best available antibiotics and do not narrow the spectrum unnecessarily; if no obvious site is found and the patient remains febrile, continue to treat for both gram-positive and gram-negative aerobes and add a broad-spectrum antifungal agent; if afebrile, continue the regimen. Continue treatment until neutropenia resolves (granulocyte count >500/μL).
In these instances, the risk of sudden death from overwhelming bacteremia is greatly reduced, and the following diagnoses should be seriously considered: (1) fungal infection, (2) bacterial abscesses or undrained foci of infection, and (3) drug fever (including reactions to antimicrobial agents as well as to chemotherapy or cytokines). In the proper setting, viral infection or graft-versus-host disease should be considered. In clinical practice, antibacterial therapy is usually discontinued when the patient is no longer neutropenic and all evidence of bacterial disease has been eliminated. Antifungal agents are then discontinued if there is no evidence of fungal disease. If the patient remains febrile, a search for viral diseases or unusual pathogens is conducted while unnecessary cytokines and other drugs are systematically eliminated from the regimen.

Hundreds of antibacterial regimens have been tested for use in patients with cancer. The major risk of infection is related to the degree of neutropenia seen as a consequence of either the disease or the therapy. Many of the relevant studies have involved small populations in which the outcomes have generally been good, and most have lacked the statistical power to detect differences among the regimens studied. Each febrile neutropenic patient should be approached as a unique problem, with particular attention given to previous infections and recent antibiotic exposures. Several general guidelines are useful in the initial treatment of neutropenic patients with fever (Fig. 104-2):

1. In the initial regimen, it is necessary to use antibiotics active against both gram-negative and gram-positive bacteria (Table 104-4).
2. Monotherapy with an aminoglycoside or with an antibiotic lacking good activity against gram-positive organisms (e.g., ciprofloxacin or aztreonam) is not adequate in this setting.
3. The agents used should reflect both the epidemiology and the antibiotic resistance pattern of the hospital.
4. If the pattern of resistance justifies its use, a single third-generation cephalosporin constitutes an appropriate initial regimen in many hospitals.
5. Most standard regimens are designed for patients who have not previously received prophylactic antibiotics. The development of fever in a patient who has received antibiotics affects the choice of subsequent therapy, which should target resistant organisms and organisms known to cause infections in patients being treated with the antibiotics already administered.
6. Randomized trials have indicated the safety of oral antibiotic regimens in the treatment of “low-risk” patients with fever and neutropenia. Outpatients who are expected to remain neutropenic for <10 days and who have no concurrent medical problems (such as hypotension, pulmonary compromise, or abdominal pain) can be classified as low risk and treated with a broad-spectrum oral regimen.
7. Several large-scale studies indicate that prophylaxis with a fluoroquinolone (ciprofloxacin or levofloxacin) decreases morbidity and mortality rates among afebrile patients who are anticipated to have neutropenia of long duration.

Commonly used antibiotic regimens for the treatment of febrile patients in whom prolonged neutropenia (>7 days) is anticipated include (1) ceftazidime or cefepime, (2) piperacillin/tazobactam, or (3) imipenem/cilastatin or meropenem. All three regimens have shown equal efficacy in large trials. All three are active against P. aeruginosa and a broad spectrum of aerobic gram-positive and gram-negative organisms. Imipenem/cilastatin has been associated with an elevated rate of C. difficile diarrhea, and many centers reserve carbapenem antibiotics for treatment of gram-negative bacteria that produce extended-spectrum β-lactamases; these limitations make carbapenems less attractive as an initial regimen.
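The fever-and-neutropenia pathway of Fig. 104-2 and the general guidelines above amount to a simple decision procedure. The following Python sketch restates that logic for illustration only: the function names and data shapes are invented here, the thresholds and drug names are taken from the text, and nothing in it substitutes for clinical judgment or knowledge of local resistance patterns.

```python
# Illustrative sketch of the empirical-therapy logic described in the text
# (Fig. 104-2 and guidelines 1-7). All function and parameter names are
# hypothetical; this is a teaching aid, not clinical software.

def is_low_risk(expected_neutropenia_days, has_concurrent_problems):
    """'Low-risk' per the text: expected neutropenia <10 days and no
    concurrent medical problems (e.g., hypotension, pulmonary
    compromise, abdominal pain)."""
    return expected_neutropenia_days < 10 and not has_concurrent_problems

def empirical_regimen(expected_neutropenia_days, has_concurrent_problems):
    """Return an initial empirical choice following the text's guidelines."""
    if is_low_risk(expected_neutropenia_days, has_concurrent_problems):
        # Randomized trials support broad-spectrum ORAL therapy for
        # low-risk outpatients with fever and neutropenia.
        return "broad-spectrum oral regimen"
    # For anticipated prolonged neutropenia (>7 days), the text lists three
    # equally efficacious options, all active against P. aeruginosa and a
    # broad spectrum of aerobic gram-positive and gram-negative organisms.
    return ("ceftazidime or cefepime; OR piperacillin/tazobactam; "
            "OR imipenem/cilastatin or meropenem")

def continue_antibiotics(granulocytes_per_uL, days_above_500):
    # Per the algorithm, antimicrobials are continued until the granulocyte
    # count is sustained above 500/uL for at least 2 days.
    return not (granulocytes_per_uL > 500 and days_above_500 >= 2)

print(empirical_regimen(5, False))   # low-risk outpatient: oral regimen
print(continue_antibiotics(600, 2))  # False: stopping criteria are met
```

The low-risk cutoff (<10 days of expected neutropenia with no concurrent problems) and the stopping rule (granulocytes sustained above 500/μL for at least 2 days) mirror the criteria stated in the text; the regimen strings simply reproduce the drug options listed above.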
Despite the frequent involvement of coagulase-negative staphylococci, the initial use of vancomycin or its automatic addition to the initial regimen has not resulted in improved outcomes, and the antibiotic does exert toxic effects. For these reasons, only judicious use of vancomycin is recommended—for example, when there is good reason to suspect the involvement of coagulase-negative staphylococci (e.g., the appearance of erythema at the exit site of a catheter or a positive culture for methicillin-resistant S. aureus or coagulase-negative staphylococci). Because the sensitivities of bacteria vary from hospital to hospital, clinicians are advised to check their local sensitivities and to be aware that resistance patterns can change quickly, necessitating a change in approach to patients with fever and neutropenia. Similarly, infection control services should monitor for basic antibiotic resistance and for fungal infections. The appearance of a large number of Aspergillus infections, in particular, suggests the possibility of an environmental source that requires further investigation and remediation.

The initial antibacterial regimen should be refined on the basis of culture results (Fig. 104-2). Blood cultures are the most relevant basis for selection of therapy; surface cultures of skin and mucous membranes may be misleading. In the case of gram-positive bacteremia or another gram-positive infection, it is important that the antibiotic be optimal for the organism isolated. Once treatment with broad-spectrum antibiotics has begun, it is not desirable to discontinue all antibiotics because of the risk of failing to treat a potentially fatal bacterial infection; the addition of more and more antibacterial agents to the regimen is not appropriate unless there is a clinical or microbiologic reason to do so.
Planned progressive therapy (the serial, empirical addition of one drug after another without culture data) is not efficacious in most settings and may have unfortunate consequences. Simply adding another antibiotic for fear that a gram-negative infection is present is a dubious practice. The synergy exhibited by β-lactams and aminoglycosides against certain gram-negative organisms (especially P. aeruginosa) provides the rationale for using two antibiotics in this setting, but recent analyses suggest that efficacy is not enhanced by the addition of aminoglycosides, while toxicity may be increased. Mere “double coverage,” with the addition of a quinolone or another antibiotic that is not likely to exhibit synergy, has not been shown to be of benefit and may cause additional toxicities and side effects. Cephalosporins can cause bone marrow suppression, and vancomycin is associated with neutropenia in some healthy individuals. Furthermore, the addition of multiple cephalosporins may induce β-lactamase production by some organisms; cephalosporins and double β-lactam combinations should probably be avoided altogether in Enterobacter infections.

Fungal infections in cancer patients are most often associated with neutropenia. Neutropenic patients are predisposed to the development of invasive fungal infections, most commonly those due to Candida and Aspergillus species and occasionally those caused by Mucor, Rhizopus, Fusarium, Trichosporon, Bipolaris, and others. Cryptococcal infection, which is common among patients taking immunosuppressive agents, is uncommon among neutropenic patients receiving chemotherapy for AML. Invasive candidal disease is usually caused by C. albicans or C. tropicalis but can be caused by C. krusei, C. parapsilosis, and C. glabrata. For decades, it has been common clinical practice to add amphotericin B to antibacterial regimens if a neutropenic patient remains febrile despite 4–7 days of treatment with antibacterial agents.
The rationale for this empirical addition is that it is difficult to culture fungi before they cause disseminated disease and that mortality rates from disseminated fungal infections in granulocytopenic patients are high. Before the introduction of newer azoles into clinical practice, amphotericin B was the mainstay of antifungal therapy. The insolubility of amphotericin B has resulted in the marketing of several lipid formulations that are less toxic than the amphotericin B deoxycholate complex. Echinocandins (e.g., caspofungin) are useful in the treatment of infections caused by azole-resistant Candida strains as well as in therapy for aspergillosis and have been shown to be equivalent to liposomal amphotericin B for the empirical treatment of patients with prolonged fever and neutropenia. Newer azoles have also been demonstrated to be effective in this setting. Although fluconazole is efficacious in the treatment of infections due to many Candida species, its use against serious fungal infections in immunocompromised patients is limited by its narrow spectrum: it has no activity against Aspergillus or against several non-albicans Candida species. The broad-spectrum azoles (e.g., voriconazole and posaconazole) provide another option for the treatment of Aspergillus infections (Chap. 241), including CNS infection. Clinicians should be aware that the spectrum of each azole is somewhat different and that no drug can be assumed to be efficacious against all fungi. Aspergillus terreus is resistant to amphotericin B. Although voriconazole is active against Pseudallescheria boydii, amphotericin B is not; however, voriconazole has no activity against Mucor. Posaconazole, which is administered orally, is useful as a prophylactic agent in patients with prolonged neutropenia. Studies in progress are assessing the use of these agents in combinations. For a full discussion of antifungal therapy, see Chap. 235. 
The availability of a variety of agents active against herpes-group viruses, including some new agents with a broader spectrum of activity, has heightened focus on the treatment of viral infections, which pose a major problem in cancer patients. Viral diseases caused by the herpes group are prominent. Serious (and sometimes fatal) infections due to HSV and VZV are well documented in patients receiving chemotherapy. CMV may also cause serious disease, but fatalities from CMV infection are more common in HSCT recipients. The roles of human herpesvirus (HHV)-6, HHV-7, and HHV-8 (Kaposi’s sarcoma–associated herpesvirus) in cancer patients are still being defined (Chap. 219). EBV lymphoproliferative disease (LPD) can occur in patients receiving chemotherapy but is much more common among transplant recipients (Chap. 169). While clinical experience is most extensive with acyclovir, which can be used therapeutically or prophylactically, a number of derivative drugs offer advantages over this agent (Chap. 215e). In addition to the herpes group, several respiratory viruses (especially RSV) may cause serious disease in cancer patients. Although influenza vaccination is recommended (see below), it may be ineffective in this patient population. The availability of antiviral drugs with activity against influenza viruses gives the clinician additional options for the prophylaxis and treatment of these patients (Chaps. 215e and 224).

Another way to address the problems posed by the febrile neutropenic patient is to replenish the neutrophil population. Although granulocyte transfusions may be effective in the treatment of refractory gram-negative bacteremia, they do not have a documented role in prophylaxis.
Because of the expense, the risk of leukoagglutinin reactions (which has probably been decreased by improved cell-separation procedures), and the risk of transmission of CMV from unscreened donors (which has been reduced by the use of filters), granulocyte transfusion is reserved for patients whose condition is unresponsive to antibiotics. This modality is efficacious for documented gram-negative bacteremia refractory to antibiotics, particularly in situations where granulocyte numbers will be depressed for only a short period. The demonstrated usefulness of granulocyte colony-stimulating factor in mobilizing neutrophils and advances in preservation techniques may make this option more useful than in the past. A variety of cytokines, including granulocyte colony-stimulating factor and granulocyte-macrophage colony-stimulating factor, enhance granulocyte recovery after chemotherapy and consequently shorten the period of maximal vulnerability to fatal infections. The role of these cytokines in routine practice is still a matter of some debate. Most authorities recommend their use only when neutropenia is both severe and prolonged. The cytokines themselves may have adverse effects, including fever, hypoxemia, and pleural effusions or serositis in other areas (Chap. 372e). Once neutropenia has resolved, the risk of infection decreases dramatically. However, depending on what drugs they receive, patients who continue on chemotherapeutic protocols remain at high risk for certain diseases. Any patient receiving more than a maintenance dose of glucocorticoids (e.g., in many treatment regimens for diffuse lymphoma) should also receive prophylactic TMP-SMX because of the risk of Pneumocystis infection; those with ALL should receive such prophylaxis for the duration of chemotherapy.

Outbreaks of fatal Aspergillus infection have been associated with construction projects and materials in several hospitals.
The association between spore counts and risk of infection suggests the need for a high-efficiency air-handling system in hospitals that care for large numbers of neutropenic patients. The use of laminar-flow rooms and prophylactic antibiotics has decreased the number of infectious episodes in severely neutropenic patients. However, because of the expense of such a program and the failure to show that it dramatically affects mortality rates, most centers do not routinely use laminar flow to care for neutropenic patients. Some centers use “reverse isolation,” in which health care providers and visitors to a patient who is neutropenic wear gowns and gloves. Since most of the infections these patients develop are due to organisms that colonize the patients’ own skin and bowel, the validity of such schemes is dubious, and limited clinical data do not support their use. Hand washing by all staff caring for neutropenic patients should be required to prevent the spread of resistant organisms. The presence of large numbers of bacteria (particularly P. aeruginosa) in certain foods, especially fresh vegetables, has led some authorities to recommend a special “low-bacteria” diet. A diet consisting of cooked and canned food is satisfactory to most neutropenic patients and does not involve elaborate disinfection or sterilization protocols. However, there are no studies to support even this type of dietary restriction. Counseling of patients to avoid leftovers, deli foods, undercooked meat, and unpasteurized dairy products is recommended. Although few studies address this issue, patients with cancer are predisposed to infections resulting from anatomic compromise (e.g., lymphedema resulting from node dissections after radical mastectomy). Surgeons who specialize in cancer surgery can provide specific guidelines for the care of such patients, and patients benefit from common-sense advice about how to prevent infections in vulnerable areas. 
Many patients with multiple myeloma or CLL have immunoglobulin deficiencies as a result of their disease, and all allogeneic bone marrow transplant recipients are hypogammaglobulinemic for a period after transplantation. However, current recommendations reserve intravenous immunoglobulin replacement therapy for those patients with severe (<400 mg of total IgG/dL), prolonged hypogammaglobulinemia and a history of repeated infections. Antibiotic prophylaxis has been shown to be cheaper and is efficacious in preventing infections in most CLL patients with hypogammaglobulinemia. Routine use of immunoglobulin replacement is not recommended. The use of condoms is recommended for severely immunocompromised patients. Any sexual practice that results in oral exposure to feces is not recommended. Neutropenic patients should be advised to avoid any practice that results in trauma, as even microscopic cuts may result in bacterial invasion and fatal sepsis. Several studies indicate that the use of oral fluoroquinolones prevents infection and decreases mortality rates among severely neutropenic patients. Prophylaxis for Pneumocystis is mandatory for patients with ALL and for all cancer patients receiving glucocorticoid-containing chemotherapy regimens. In general, patients undergoing chemotherapy respond less well to vaccines than do normal hosts. Their greater need for vaccines thus leads to a dilemma in their management. Purified proteins and inactivated vaccines are almost never contraindicated and should be given to patients even during chemotherapy. For example, all adults should receive diphtheria–tetanus toxoid boosters at the indicated times as well as seasonal influenza vaccine. However, if possible, vaccination should not be undertaken concurrent with cytotoxic chemotherapy. 
If patients are expected to be receiving chemotherapy for several months and vaccination is indicated (e.g., influenza vaccination in the fall), the vaccine should be given midcycle—as far apart in time as possible from the antimetabolic agents that will prevent an immune response. The meningococcal and pneumococcal polysaccharide vaccines should be given to patients before splenectomy, if possible. The H. influenzae type b conjugate vaccine should be administered to all splenectomized patients. In general, live virus (or live bacterial) vaccines should not be given to patients during intensive chemotherapy because of the risk of disseminated infection. Recommendations on vaccination are summarized in Table 104-2 (see www.cdc.gov/vaccine for updated recommendations).

Chapter 105 Cancer of the Skin
Walter J. Urba, Brendan D. Curti

MELANOMA
Pigmented lesions are among the most common findings on skin examination. The challenge is to distinguish cutaneous melanomas, which account for the overwhelming majority of deaths resulting from skin cancer, from the remainder, which are usually benign. Cutaneous melanoma can occur in adults of all ages, even young individuals, and people of all colors; its location on the skin and its distinct clinical features make it detectable at a time when complete surgical excision is possible. Examples of malignant and benign pigmented lesions are shown in Fig. 105-1.

FIGURE 105-1 Atypical and malignant pigmented lesions. The most common melanoma is superficial spreading melanoma (not pictured). A. Acral lentiginous melanoma is the most common melanoma in blacks, Asians, and Hispanics and occurs as an enlarging hyperpigmented macule or plaque on the palms and soles. Lateral pigment diffusion is present. B. Nodular melanoma most commonly manifests as a rapidly growing, often ulcerated or crusted black nodule. C. Lentigo maligna melanoma occurs on sun-exposed skin as a large, hyperpigmented macule or plaque with irregular borders and variable pigmentation. D. Dysplastic nevi are irregularly pigmented and shaped nevomelanocytic lesions that may be associated with familial melanoma.

Melanoma is an aggressive malignancy of melanocytes, pigment-producing cells that originate from the neural crest and migrate to the skin, meninges, mucous membranes, upper esophagus, and eyes. Melanocytes in each of these locations have the potential for malignant transformation. Cutaneous melanoma is predominantly a malignancy of white-skinned people (98% of cases), and the incidence correlates with latitude of residence, providing strong evidence for the role of sun exposure. Men are affected slightly more often than women (1.3:1), and the median age at diagnosis is the late fifties. Dark-skinned populations (such as those of India and Puerto Rico), blacks, and East Asians also develop melanoma, albeit at rates 10–20 times lower than those in whites. Cutaneous melanomas in these populations are diagnosed more often at a higher stage, and patients tend to have worse outcomes. Furthermore, in nonwhite populations, there is a much higher frequency of acral (subungual, plantar, palmar) and mucosal melanomas. In 2014, more than 76,000 individuals in the United States were expected to develop melanoma, and approximately 9700 were expected to die. Nearly 50,000 melanoma deaths occur worldwide each year. Data from the Connecticut Tumor Registry support an unremitting increase in the incidence and mortality of melanoma. In the past 60 years, there have been 17-fold and 9-fold increases in incidence for men and women, respectively. In the same six decades, mortality rates have tripled for men and doubled for women. Mortality rates begin to rise at age 55, with the greatest increase in men age >65 years. Of particular concern is the increase in rates among women <40 years of age.
Much of this increase is believed to be associated with a greater emphasis on tanned skin as a marker of beauty, the increased availability and use of indoor tanning beds, and exposure to intense ultraviolet (UV) light in childhood. These statistics highlight the need to promote prevention and early detection.

RISK FACTORS
Presence of Nevi  The risk of developing melanoma is related to genetic, environmental, and host factors (Table 105-1). The strongest risk factors for melanoma are the presence of multiple benign or atypical nevi and a family or personal history of melanoma. The presence of melanocytic nevi, common or dysplastic, is a marker for increased risk of melanoma. Nevi have been referred to as precursor lesions because they can transform into melanomas; however, the actual risk for any specific nevus is exceedingly low. About one-quarter of melanomas are histologically associated with nevi, but the majority arise de novo. The number of clinically atypical moles may vary from one to several hundred, and they usually differ from one another in appearance. The borders are often hazy and indistinct, and the pigment pattern is more highly varied than that in benign acquired nevi. Individuals with clinically atypical moles and a strong family history of melanoma have been reported to have a >50% lifetime risk for developing melanoma and warrant close follow-up with a dermatologist. Of the 90% of patients whose disease is sporadic (i.e., who lack a family history of melanoma), ∼40% have clinically atypical moles, compared with an estimated 5–10% of the population at large. Congenital melanocytic nevi, which are classified as small (≤1.5 cm), medium (1.5–20 cm), and giant (>20 cm), can be precursors for melanoma. The risk is highest for the giant melanocytic nevus, also called the bathing trunk nevus, a rare malformation that affects 1 in 30,000–100,000 individuals.
Since the lifetime risk of melanoma development is estimated to be as high as 6%, prophylactic excision early in life is prudent. This usually requires staged removal with coverage by split-thickness skin grafts. Surgery cannot remove all at-risk nevus cells, as some may penetrate into the muscles or central nervous system (CNS) below the nevus. Small- to medium-size congenital melanocytic nevi affect approximately 1% of persons; the risk of melanoma developing in these lesions is not known but appears to be relatively low. The management of small- to medium-size congenital melanocytic nevi remains controversial.

TABLE 105-1 Factors Associated with Increased Risk of Melanoma (entries include CDKN2A, CDK4, and MITF mutations)

Personal and Family History  Once diagnosed, patients with melanoma require a lifetime of surveillance because their risk of developing another melanoma is 10 times that of the general population. First-degree relatives have a higher risk of developing melanoma than do individuals without a family history, but only 5–10% of all melanomas are truly familial. In familial melanoma, patients tend to be younger at first diagnosis, lesions are thinner, survival is improved, and multiple primary melanomas are common.

Genetic Susceptibility  Approximately 20–40% of cases of hereditary melanoma (0.2–2% of all melanomas) are due to germline mutations in the cell cycle regulatory gene cyclin-dependent kinase inhibitor 2A (CDKN2A). In fact, 70% of all cutaneous melanomas have mutations or deletions affecting the CDKN2A locus on chromosome 9p21. This locus encodes two distinct tumor-suppressor proteins from alternate reading frames: p16 and ARF (p14ARF). The p16 protein inhibits CDK4/6-mediated phosphorylation and inactivation of the retinoblastoma (RB) protein, whereas ARF inhibits MDM2-mediated ubiquitination and degradation of p53.
The end result of the loss of CDKN2A is inactivation of two critical tumor-suppressor pathways, RB and p53, which control entry of cells into the cell cycle. Several studies have shown an increased risk of pancreatic cancer among melanoma-prone families with CDKN2A mutations. A second high-risk locus for melanoma susceptibility, CDK4, is located on chromosome 12q13 and encodes the kinase inhibited by p16. CDK4 mutations, which also inactivate the RB pathway, are much rarer than CDKN2A mutations. Germline mutations in the melanoma lineage-specific oncogene microphthalmia-associated transcription factor (MITF) predispose to both familial and sporadic melanomas. The melanocortin-1 receptor (MC1R) gene is a moderate-risk inherited melanoma susceptibility factor. Solar radiation stimulates the production of melanocortin (α-melanocyte-stimulating hormone [α-MSH]), the ligand for MC1R, which is a G-protein-coupled receptor that signals via cyclic AMP and regulates the amount and type of pigment produced. MC1R is highly polymorphic, and among its 80 variants are those that result in partial loss of signaling and lead to the production of red/yellow pheomelanins, which are not sun-protective and produce red hair, rather than brown/black eumelanins that are photoprotective. This red hair color (RHC) phenotype is associated with fair skin, red hair, freckles, increased sun sensitivity, and increased risk of melanoma. In addition to its weak UV shielding capacity relative to eumelanin, increased pheomelanin production in patients with inactivating polymorphisms of MC1R also provides a UV-independent carcinogenic contribution to melanomagenesis via oxidative damage. A number of other more common, low-penetrance polymorphisms that have small effects on melanoma susceptibility involve genes related to pigmentation, nevus count, immune responses, DNA repair, metabolism, and the vitamin D receptor.
Primary prevention of melanoma and nonmelanoma skin cancer (NMSC) is based on protection from the sun. Public health initiatives, such as the SunSmart program that started in Australia and is now operative in Europe and the United States, have demonstrated that behavioral change can decrease the incidence of NMSC and melanoma. Preventive measures should start early in life because damage from UV light begins early despite the fact that cancers develop years later. Biological factors, such as tanning addiction (postulated to involve stimulation of reward centers in the brain via dopamine pathways and cutaneous secretion of β-endorphins after UV exposure), are increasingly being understood and may represent another area for preventive intervention. Regular use of broad-spectrum sunscreens that block UVA and UVB with a sun protection factor (SPF) of at least 30 and protective clothing should be encouraged. Avoidance of tanning beds and midday (10:00 a.m. to 2:00 p.m.) sun exposure is recommended. Secondary prevention comprises education, screening, and early detection. Patients should be educated in the clinical features of melanoma (ABCDEs; see following “Diagnosis” section) and advised to report any growth or other change in a pigmented lesion. Brochures are available from the American Cancer Society, the American Academy of Dermatology, the National Cancer Institute, and the Skin Cancer Foundation. Self-examination at 6- to 8-week intervals may enhance the likelihood of detecting change. Although the U.S. Preventive Services Task Force states that evidence is insufficient to recommend for or against skin cancer screening, a full-body skin exam seems to be a simple, practical way to approach reducing the mortality rate for skin cancer. Depending on the presence or absence of risk factors, strategies for early detection can be individualized.
This is particularly true for patients with clinically atypical moles (dysplastic nevi) and those with a personal history of melanoma. For these individuals, surveillance should be performed by the dermatologist and include total-body photography and dermoscopy where appropriate. Individuals with three or more primary melanomas and families with at least one invasive melanoma and two or more cases of melanoma and/or pancreatic cancer among first- or second-degree relatives on the same side of the family may benefit from genetic testing. Precancerous and in situ lesions should be treated early. Early detection of small tumors allows the use of simpler treatment modalities with higher cure rates and lower morbidity. The main goal is to identify a melanoma before tumor invasion and life-threatening metastases have occurred. Early detection may be facilitated by applying the ABCDEs: asymmetry (benign lesions are usually symmetric); border irregularity (most nevi have clear-cut borders); color variegation (benign lesions usually have uniform light or dark pigment); diameter >6 mm (the size of a pencil eraser); and evolving (any change in size, shape, color, or elevation or new symptoms such as bleeding, itching, and crusting). Benign nevi usually appear on sun-exposed skin above the waist, rarely involving the scalp, breasts, or buttocks; atypical moles usually appear on sun-exposed skin, most often on the back, but can involve the scalp, breasts, or buttocks. Benign nevi are present in 85% of adults, with 10–40 moles scattered over the body; atypical nevi can be present in the hundreds. The entire skin surface, including the scalp and mucous membranes, as well as the nails should be examined in each patient. Bright room illumination is important, and a hand lens is helpful for evaluating variation in pigment pattern. Any suspicious lesions should be biopsied, evaluated by a specialist, or recorded by chart and/or photography for follow-up.
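The ABCDE criteria are effectively a five-item checklist, and the screening logic can be made concrete in a short sketch. The following Python fragment is purely illustrative: the PigmentedLesion fields are hypothetical simplifications of a clinical examination, and only the five criteria and the 6-mm diameter threshold come from the text.

```python
# Illustrative encoding of the ABCDE warning signs described in the text.
# The data structure and function names are hypothetical teaching devices,
# not a validated clinical instrument.

from dataclasses import dataclass

@dataclass
class PigmentedLesion:
    asymmetric: bool          # A: benign lesions are usually symmetric
    irregular_border: bool    # B: most nevi have clear-cut borders
    variegated_color: bool    # C: benign lesions usually have uniform pigment
    diameter_mm: float        # D: concern when >6 mm (pencil-eraser size)
    evolving: bool            # E: any change in size, shape, color, or
                              #    elevation, or new bleeding/itching/crusting

def abcde_flags(lesion):
    """Return which of the ABCDE criteria a lesion meets."""
    return {
        "A": lesion.asymmetric,
        "B": lesion.irregular_border,
        "C": lesion.variegated_color,
        "D": lesion.diameter_mm > 6.0,
        "E": lesion.evolving,
    }

lesion = PigmentedLesion(True, True, False, 7.5, True)
print([k for k, v in abcde_flags(lesion).items() if v])  # ['A', 'B', 'D', 'E']
```

Any flagged criterion is a reason for biopsy, specialist evaluation, or photographic follow-up as described above; the checklist raises suspicion but does not by itself make a diagnosis.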
A focused method for examining individual lesions, dermoscopy, employs low-level magnification of the epidermis and may allow a more precise visualization of patterns of pigmentation than is possible with the naked eye. Complete physical examination with attention to the regional lymph nodes is part of the initial evaluation in a patient with suspected melanoma. The patient should be advised to have other family members screened if either melanoma or clinically atypical moles (dysplastic nevi) are present. Patients who fit into high-risk groups should be instructed to perform monthly self-examinations.

Biopsy  Any pigmented cutaneous lesion that has changed in size or shape or has other features suggestive of malignant melanoma is a candidate for biopsy. An excisional biopsy with 1- to 3-mm margins is suggested. This facilitates pathologic assessment of the lesion, permits accurate measurement of thickness if the lesion is melanoma, and constitutes definitive treatment if the lesion is benign. For lesions that are large or on anatomic sites where excisional biopsy may not be feasible (such as the face, hands, and feet), an incisional biopsy through the most nodular or darkest area of the lesion is acceptable; this should include the vertical growth phase of the primary tumor, if present. Incisional biopsy does not appear to facilitate the spread of melanoma. For suspicious lesions, every attempt should be made to preserve the ability to assess the deep and peripheral margins and to perform immunohistochemistry. Shave biopsies are an acceptable alternative, particularly if the suspicion of malignancy is low, but they should be deep and include underlying fat; cauterization should be avoided. The biopsy should be read by a pathologist experienced in pigmented lesions, and the report should include Breslow thickness, mitoses per square millimeter for lesions ≤1 mm, presence or absence of ulceration, and peripheral and deep margin status.
Breslow thickness is the greatest thickness of a primary cutaneous melanoma measured on the slide from the top of the epidermal granular layer, or from the ulcer base, to the bottom of the tumor. To distinguish melanomas from benign nevi in cases with challenging histology, fluorescence in situ hybridization (FISH) with multiple probes and comparative genome hybridization (CGH) can be helpful. Four major types of cutaneous melanoma have been recognized (Table 105-2). In three of these types—superficial spreading melanoma, lentigo maligna melanoma, and acral lentiginous melanoma—the lesion has a period of superficial (so-called radial) growth during which it increases in size but does not penetrate deeply. It is during this period that the melanoma is most capable of being cured by surgical excision. The fourth type—nodular melanoma—does not have a recognizable radial growth phase and usually presents as a deeply invasive lesion that is capable of early metastasis. When tumors begin to penetrate deeply into the skin, they are in the so-called vertical growth phase. Melanomas with a radial growth phase are characterized by irregular and sometimes notched borders, variation in pigment pattern, and variation in color. An increase in size or change in color is noted by the patient in 70% of early lesions. Bleeding, ulceration, and pain are late signs and are of little help in early recognition. Superficial spreading melanoma is the most common variant observed in the white population. The back is the most common site for melanoma in men. In women, the back and the lower leg (from knee to ankle) are common sites. Nodular melanomas are dark brown-black to blue-black nodules. Lentigo maligna melanoma usually is confined to chronically sun-damaged sites in older individuals. Acral lentiginous melanoma occurs on the palms, soles, nail beds, and mucous membranes. 
Although this type occurs in whites, it occurs most frequently (along with nodular melanoma) in blacks and East Asians. A fifth type of melanoma, desmoplastic melanoma, is associated with a fibrotic response, neural invasion, and a greater tendency for local recurrence. Occasionally, melanomas appear clinically to be amelanotic, in which case the diagnosis is established microscopically after biopsy of a new or a changing skin nodule. Melanomas can also arise in the mucosa of the head and neck (nasal cavity, paranasal sinuses, and oral cavity), the gastrointestinal tract, the CNS, the female genital tract (vulva, vagina), and the uveal tract of the eye. Although cutaneous melanoma subtypes are clinically and histopathologically distinct, this classification does not have independent prognostic value. Histologic subtype is not part of American Joint Committee on Cancer (AJCC) staging, although the College of American Pathologists (CAP) recommends inclusion in the pathology report. Newer classifications will increasingly emphasize molecular features of each melanoma (see below). The molecular analysis of individual melanomas will provide a basis for distinguishing benign nevi from melanomas, and determination of the mutational status of the tumor will help elucidate the molecular mechanisms of tumorigenesis and be used to identify targets that will guide therapy. Considerable evidence from epidemiologic and molecular studies suggests that cutaneous melanomas arise via multiple causal pathways. There are both environmental and genetic components. UV solar radiation causes genetic changes in the skin, impairs cutaneous immune function, increases the production of growth factors, and induces the formation of DNA-damaging reactive oxygen species that affect keratinocytes and melanocytes. 
A comprehensive catalog of somatic mutations from a human melanoma revealed more than 33,000 base mutations with damage to almost 300 protein-coding segments compared with normal cells from the same patient. The dominant mutational signature reflected DNA damage due to UV light exposure. The melanoma also contained previously described driver mutations (i.e., mutations that confer selective clonal growth advantage and are implicated in oncogenesis). These driver mutations affect pathways that promote cell proliferation and inhibit normal pathways of apoptosis in response to DNA damage (see below). The altered melanocytes accumulate DNA damage, and selection occurs for all the attributes that constitute the malignant phenotype: invasion, metastasis, and angiogenesis. An understanding of the molecular changes that occur during the transformation of normal melanocytes into malignant melanoma would not only help classify patients but also would contribute to the understanding of etiology and aid the development of new therapeutic options. A genome-wide assessment of melanomas classified into four groups based on their location and degree of exposure to the sun has confirmed that there are distinct genetic pathways in the development of melanoma. The four groups were cutaneous melanomas on skin without chronic sun-induced damage, cutaneous melanomas with chronic sun-induced damage, mucosal melanomas, and acral melanomas. Distinct patterns of DNA alterations were noted that varied with the site of origin and were independent of the histologic subtype of the tumor. Thus, although the genetic changes are diverse, the overall pattern of mutation, amplification, and loss of cancer genes indicates they have convergent effects on key biochemical pathways involved in proliferation, senescence, and apoptosis. The p16 mutation that affects cell cycle arrest and the ARF mutation that results in defective apoptotic responses to genotoxic damage were described earlier. 
The proliferative pathways affected were the mitogen-activated protein (MAP) kinase and phosphatidylinositol 3′-kinase (PI3K)/AKT pathways (Fig. 105-2). [Table 105-2 summarizes the four major types of cutaneous melanoma by site, average age at diagnosis, duration of known existence, and color; during much of its course, the precursor stage of lentigo maligna melanoma, lentigo maligna, is confined to the epidermis. Source: Adapted from AJ Sober, in NA Soter, HP Baden (eds): Pathophysiology of Dermatologic Diseases. New York, McGraw-Hill, 1984.] [FIGURE 105-2 Major pathways involved in melanoma. The MAP kinase and PI3K/AKT pathways, which promote proliferation and inhibit apoptosis, respectively, are subject to mutations in melanoma, in a subset of tumors. ERK, extracellular signal-regulated kinase; MEK, mitogen-activated protein kinase kinase; NF-1, neurofibromatosis type 1 gene; PTEN, phosphatase and tensin homolog.] RAS and BRAF, members of the MAP kinase pathway, which classically mediates the transcription of genes involved in cell proliferation and survival, undergo somatic mutation in melanoma and thereby generate potential therapeutic targets. N-RAS is mutated in approximately 20% of melanomas, and somatic activating BRAF mutations are found in most benign nevi and 40–60% of melanomas. Neither mutation by itself appears to be sufficient to cause melanoma; thus, they often are accompanied by other mutations. The BRAF mutation is most commonly a point mutation (T→A nucleotide change) that results in a valine-to-glutamate amino acid substitution (V600E). V600E BRAF mutations do not have the standard UV signature mutation (pyrimidine dimer); they are more common in younger patients and are present in most melanomas that arise on sites with intermittent sun exposure and are less common in melanomas from chronically sun-damaged skin. Melanomas also harbor mutations in AKT (primarily in AKT3) and PTEN (phosphatase and tensin homolog). 
AKT can be amplified, and PTEN may be deleted or undergo epigenetic silencing that leads to constitutive activation of the PI3K/AKT pathway and enhanced cell survival by antagonizing the intrinsic pathway of apoptosis. Loss of PTEN, which dysregulates AKT activity, and mutation of AKT3 both prolong cell survival through inactivation of BAD (Bcl-2 antagonist of cell death) and activation of the forkhead transcription factor FOXO1, which leads to synthesis of prosurvival genes. A loss-of-function mutation in NF1, which can affect both MAP kinase and PI3K/AKT pathways, has been described in 10–15% of melanomas. In melanoma, these two signaling pathways (MAP kinase and PI3K/AKT) enhance tumorigenesis, chemoresistance, migration, and cell cycle dysregulation. Targeted agents that inhibit these pathways have been developed, and some are available for clinical use (see below). Optimal treatment of patients with melanoma may require simultaneous inhibition of both MAPK and PI3K pathways as well as promotion of immune eradication of malignancy. The prognostic factors of greatest importance to a newly diagnosed patient are included in the staging classification (Table 105-3). The best predictor of metastatic risk is the lesion’s Breslow thickness. The Clark level, which defines melanomas on the basis of the layer of skin to which a melanoma has invaded, does not add significant prognostic information and has minimal influence on treatment decisions. The anatomic site of the primary is also prognostic; favorable sites are the forearm and leg (excluding the feet), and unfavorable sites include the scalp, hands, feet, and mucous membranes. In general, women with stage I or II disease have better survival than men, perhaps in part because of earlier diagnosis; women frequently have melanomas on the lower leg, where self-recognition is more likely and the prognosis is better. The effect of age is not straightforward. 
Older individuals, especially men over 60, have worse prognoses, a finding that has been explained in part by a tendency toward later diagnosis (and thus thicker tumors) and in part by a higher proportion of acral melanomas in men. However, there is a greater risk of lymph node metastasis in young patients. Other important adverse factors recognized via the staging classification include high mitotic rate, presence of ulceration, microsatellite lesions and/or in-transit metastases, evidence of nodal involvement, elevated serum lactate dehydrogenase (LDH), and presence and site of distant metastases. Once the diagnosis of melanoma has been made, the tumor must be staged to determine prognosis and guide treatment selection. The current melanoma staging criteria and estimated 15-year survival by stage are depicted in Table 105-3. The clinical stage of the patient is determined after the pathologic evaluation of the melanoma skin lesion and clinical/radiologic assessment for metastatic disease. Pathologic staging also includes the microscopic evaluation of the regional lymph nodes obtained at sentinel lymph node biopsy or completion lymphadenectomy as indicated. All patients should have a complete history, with attention to symptoms that may represent metastatic disease such as malaise, weight loss, headaches, visual changes, and pain, and a physical examination directed to the site of the primary melanoma, looking for persistent disease or for dermal or subcutaneous nodules that could represent satellite or in-transit metastases, and to the regional draining lymph nodes, CNS, liver, and lungs. A complete blood count (CBC), complete metabolic panel, and LDH should be performed. 
Although these are low-yield tests for uncovering occult metastatic disease, a microcytic anemia would raise the possibility of bowel metastases, particularly in the small bowel, and an unexplained elevated LDH should prompt a more extensive evaluation, including computed tomography (CT) scan or possibly a positron emission tomography (PET) (or combined CT/PET) scan. If signs or symptoms of metastatic disease are present, appropriate diagnostic imaging should be performed. At initial presentation, more than 80% of patients will have disease confined to the skin and a negative history and physical exam, in which case imaging is not indicated. MANAGEMENT OF CLINICALLY LOCALIZED MELANOMA (STAGE I, II) For a newly diagnosed cutaneous melanoma, wide surgical excision of the lesion with a margin of normal skin is necessary to remove all malignant cells and minimize possible local recurrence. The following margins are recommended for a primary melanoma: in situ, 0.5–1.0 cm; invasive up to 1 mm thick, 1 cm; 1.01–2 mm, 1–2 cm; and >2 mm, 2 cm. For lesions on the face, hands, and feet, strict adherence to these margins must give way to individual considerations about the constraints of surgery and minimization of morbidity. In all instances, however, inclusion of subcutaneous fat in the surgical specimen facilitates adequate thickness measurement and assessment of surgical margins by the pathologist. Topical imiquimod also has been used, particularly for lentigo maligna, in cosmetically sensitive locations. Sentinel lymph node biopsy (SLNB) is a valuable staging tool that has replaced elective regional nodal dissection for the evaluation of regional nodal status. SLNB provides prognostic information and helps identify patients at high risk for relapse who may be candidates for adjuvant therapy. The initial (sentinel) draining node(s) from the primary site is (are) identified by injecting a blue dye and a radioisotope around the primary site. 
The sentinel node(s) then is (are) identified by inspection of the nodal basin for the blue-stained node and/or the node with high uptake of the radioisotope. The identified nodes are removed and subjected to careful histopathologic analysis with serial section using hematoxylin and eosin stains as well as immunohistochemical stains (e.g., S100, HMB45, and MelanA) to identify melanocytes. Not every patient requires a SLNB. Patients whose melanomas are ≤0.75 mm thick have <5% risk of sentinel lymph node (SLN) disease and do not require a SLNB. Patients with tumors >1 mm thick generally undergo SLNB. For melanomas 0.76–1.0 mm thick, SLNB may be considered for lesions with high-risk features such as ulceration, high mitotic index, or lymphovascular invasion, but wide excision alone is the usual definitive therapy. Most other patients with clinically negative lymph nodes should undergo a SLNB. Patients whose SLNB is negative are spared a complete node dissection and its attendant morbidities, and can simply be followed or, based on the features of the primary melanoma, be considered for adjuvant therapy or a clinical trial. The current standard of care for all patients with a positive SLN is to perform a complete lymphadenectomy; however, ongoing clinical studies will determine whether patients with small-volume SLN metastases can be managed safely without additional surgery. Patients with microscopically positive lymph nodes should be considered for adjuvant therapy with interferon or enrollment in a clinical trial. Melanomas may recur at the edge of the scar or graft, as satellite metastases, which are separate from but within 2 cm of the scar; as in-transit metastases, which are recurrences >2 cm from the primary lesion but not beyond the regional nodal basin; or, most commonly, as metastasis to a draining lymph node basin. Each of these presentations is managed surgically, following which there is the possibility of long-term disease-free survival. 
Isolated limb perfusion or infusion with melphalan and hyperthermia are options for patients with extensive cutaneous regional recurrences in an extremity. High complete response rates have been reported and significant palliation of symptoms can be achieved, but there is no change in overall survival. Patients rendered free of disease after surgery may be at high risk for a local or distant recurrence and should be considered for adjuvant therapy. Radiotherapy can reduce the risk of local recurrence after lymphadenectomy, but does not affect overall survival. Patients with large nodes (>3–4 cm), four or more involved lymph nodes, or extranodal spread on microscopic examination should be considered for radiation. Systemic adjuvant therapy is indicated primarily for patients with stage III disease, but high-risk, node-negative patients (>4 mm thick or ulcerated lesions) and patients with completely resected stage IV disease also may benefit. Either interferon α2b (IFN-α2b), which is given at 20 million units/m2 IV 5 days a week for 4 weeks followed by 10 million units/m2 SC three times a week for 11 months (1 year total), or subcutaneous peginterferon α2b (6 μg/kg per week for 8 weeks followed by 3 μg/kg per week for a total of 5 years) is acceptable adjuvant therapy. Treatment is accompanied by significant toxicity, including a flu-like illness, decline in performance status, and the development of depression. Side effects can be managed in most patients by appropriate treatment of symptoms, dose reduction, and treatment interruption. Sometimes IFN must be permanently discontinued before all of the planned doses are administered because of unacceptable toxicity. The high-dose regimen is significantly more toxic than peginterferon, but the latter requires 4 additional years of therapy. Adjuvant treatment with IFN improves disease-free survival, but its impact on overall survival remains controversial. 
Enrollment in a clinical trial is appropriate for these patients, many of whom will otherwise be observed without treatment either because they are poor candidates for IFN or because the patient (or their oncologist) does not believe the beneficial effects of IFN outweigh the toxicity. The recently approved immunotherapy and targeted agents are being evaluated in the adjuvant setting. At diagnosis, most patients with melanoma will have early-stage disease; however, some will present with metastases, and others will develop metastases after initial therapy. Patients with a history of melanoma who develop signs or symptoms suggesting recurrent disease should undergo restaging that includes physical examination, CBC, complete metabolic panel, LDH, and appropriate diagnostic imaging that may include a magnetic resonance image (MRI) of the brain and total-body PET/CT or CT scans of the chest, abdomen, and pelvis. Distant metastases (stage IV), which may involve any organ, commonly involve the skin and lymph nodes as well as viscera, bone, or the brain. Historically, metastatic melanoma was considered incurable; median survival ranges from 6 to 15 months, depending on the organs involved. The prognosis is better for patients with skin and subcutaneous metastases (M1a) than for lung (M1b) and worst for those with metastases to liver, bone, and brain (M1c). An elevated serum LDH is a poor prognostic factor and places the patient in stage M1c regardless of the site of the metastases (Table 105-3). Although historical data suggest that the 15-year survival of patients with M1a, M1b, and M1c disease is less than 10%, there is optimism that newer therapies will increase the number of melanoma patients with long-term survival, especially patients with M1a and M1b disease. The treatment for patients with stage IV melanoma has changed dramatically in the past 2 years. Two new classes of therapeutic agents for melanoma have been approved by the U.S. 
Food and Drug Administration (FDA): the immune T cell checkpoint inhibitor ipilimumab, and three new oral agents that target the MAP kinase pathway, namely the BRAF inhibitors vemurafenib and dabrafenib and the MEK inhibitor trametinib. Patients with stage IV disease therefore now have multiple therapeutic options (Table 105-4). [Table 105-4, treatment options for metastatic melanoma — Surgery: metastasectomy for small number of lesions. Immunotherapy: anti-CTLA-4 (ipilimumab); anti-PD-1 (nivolumab, lambrolizumab). Molecular targeted therapy: BRAF inhibitors (vemurafenib, dabrafenib); MEK inhibitor (trametinib). Chemotherapy: dacarbazine, temozolomide, paclitaxel, albumin-bound paclitaxel (Abraxane), carboplatin.] Patients with oligometastatic disease should be referred to a surgical oncologist for consideration of metastasectomy, because they may experience long-term disease-free survival after surgery. Patients with solitary metastases are the best candidates, but surgery increasingly is being used even for patients with metastases at more than one site. Patients rendered free of disease can be considered for IFN therapy or a clinical trial because their risk of developing additional metastases is very high. Surgery can also be used as an adjunct to immunotherapy when only a few of many metastatic lesions prove resistant to systemic therapy. The cytokine interleukin 2 (IL-2 or aldesleukin) has been approved to treat patients with melanoma since 1995. IL-2 is used to treat stage IV patients who have a good performance status and is administered at centers with experience managing IL-2-related toxicity. Patients require hospitalization in an intensive care unit-like setting to receive high-dose IL-2, 600,000 or 720,000 IU/kg every 8 h for up to 14 doses (one cycle). Patients continue treatment until they achieve maximal benefit, usually 4–6 cycles. Treatment is associated with long-term disease-free survival (probable cures) in 5% of treated patients. 
The mechanism by which IL-2 causes tumor regression has not been identified, but it is presumed that IL-2 induces melanoma-specific T cells that eliminate tumor cells by recognizing specific antigens. Rosenberg and his colleagues at the National Cancer Institute (NCI) have combined adoptive transfer of in vitro–expanded tumor-infiltrating lymphocytes with high-dose IL-2 in patients who were preconditioned with nonmyeloablative chemotherapy (sometimes combined with total-body irradiation). Tumor regression was observed in more than 50% of patients with IL-2-refractory metastatic melanoma. Immune checkpoint blockade with monoclonal antibodies to the inhibitory immune receptors CTLA-4 and PD-1 has shown promising clinical efficacy. An array of inhibitory receptors are upregulated during an immune response. Although such upregulation is an absolute requirement to ensure proper regulation of a normal immune response, the continued expression of inhibitory receptors during chronic infection (hepatitis, HIV) and in cancer patients denotes exhausted T cells with limited potential for proliferation, cytokine production, or cytotoxicity (Fig. 105-3). Checkpoint blockade with a monoclonal antibody results in improved T cell function with eradication of tumor cells in preclinical animal models. Ipilimumab, a fully human IgG antibody that binds CTLA-4 and blocks inhibitory signals, was the first treatment of any kind to improve survival in patients with metastatic melanoma. A full course of therapy is four IV outpatient infusions of ipilimumab 3 mg/kg every 3 weeks. Although response rates were low (∼10%) in randomized clinical trials, survival of both previously treated and untreated patients was improved, and ipilimumab was approved by the FDA in March 2011. [FIGURE 105-3 Inhibitory regulatory pathways that influence T cell function, memory, and lifespan after engagement of the T cell receptor by antigen presented by antigen-presenting cells in the context of MHC I/II. CTLA-4 and PD-1 are part of the CD28 family and have inhibitory effects that can be mitigated by antagonistic antibodies to the receptors or ligand, resulting in enhanced T cell function and antitumor effects. CTLA-4, cytotoxic T lymphocyte antigen-4; MHC, major histocompatibility complex; PD-1, programmed death-1; PD-L1, programmed death ligand-1; PD-L2, programmed death ligand-2; TCR, T cell receptor.] In addition to its antitumor effects, ipilimumab’s interference with normal regulatory mechanisms produced a novel spectrum of side effects that resembled autoimmunity. The most common immune-related adverse events were skin rash and diarrhea (sometimes severe, life-threatening colitis), but toxicity could involve almost any organ (e.g., hypophysitis, hepatitis, nephritis, pneumonitis, myocarditis, neuritis). Vigilance and early treatment with steroids, which do not appear to interfere with the antitumor effects, are required to manage these patients safely. Widespread use of ipilimumab has not been completely embraced by the oncology community because of the low objective response rate, significant toxicity (including death), and high cost (drug cost alone for a course of therapy is approximately $120,000 in 2013). Despite these reservations, ipilimumab’s overall survival benefit (17% of patients alive at 7 years) indicates that treatment should be strongly considered for all eligible patients. Chronic T cell activation also leads to induction of PD-1 on the surface of T cells. Expression of one of its ligands, PD-L1, on tumor cells can protect them from immune destruction (Fig. 105-3). Early trials attempting to block the PD-1:PD-L1 axis by IV administration of anti-PD-1 or anti-PD-L1 have shown substantial clinical activity in patients with advanced melanoma (and lung cancer) with significantly less toxicity than ipilimumab. Anti-PD-1 therapy looks promising, but is not currently available except by participation in clinical trials. 
Intriguingly, preliminary results from a clinical trial indicate that blocking both inhibitory pathways with ipilimumab and anti-PD-1 leads to superior antitumor activity compared with treatment with either agent alone. The main benefit to patients from immune-based therapy (IL-2, ipilimumab, and anti-PD-1) is the durability of the responses achieved. Although the percentage of patients whose tumors regress following immunotherapy is lower than the response rate after targeted therapy (see below), the durability of immunotherapy-induced responses (>10 years in some cases) appears to be superior to responses after targeted therapy and suggests that many of these patients have been cured. RAF and MEK inhibitors of the MAP kinase pathway are a new and exciting approach for patients whose melanomas harbor a BRAF mutation. The high frequency of oncogenic mutations in the RAS-RAF-MEK-ERK pathway, which delivers proliferation and survival signals from the cell surface to the cytoplasm and nucleus, has led to the development of inhibitors to BRAF and MEK. Two BRAF inhibitors, vemurafenib and dabrafenib, have been approved for the treatment of stage IV patients whose melanomas harbor a mutation at position 600 in the gene for BRAF. The oral BRAF inhibitors cause tumor regression in approximately 50% of patients, and overall survival is improved compared to treatment with chemotherapy. Treatment is accompanied by manageable side effects that differ from those following immunotherapy or chemotherapy. A class-specific complication of BRAF inhibition is the development of numerous skin lesions, some of which are well-differentiated squamous cell skin cancers (seen in up to a quarter of patients). Patients should be co-managed with a dermatologist as these skin cancers will need excision. Metastases have not been reported, and treatment can be continued safely following simple excision. 
Long-term results following treatment with BRAF inhibitors are not yet available, but the current concern is that over time the vast majority of patients will relapse and eventually die from drug-resistant disease. There are a number of mechanisms by which resistance develops, usually via maintenance of MAP kinase signaling; however, mutations in the BRAF gene that affect binding of the inhibitor are not among them. The MEK inhibitor trametinib has activity as a single agent, but appears to be less effective than either of the BRAF inhibitors. Combined therapy with the BRAF inhibitor and MEK inhibitor showed improved progression-free survival compared to BRAF inhibitor therapy alone; and, interestingly, the neoplastic skin lesions that were so troubling with BRAF inhibition alone did not occur. Although the durability of responses following combined therapy remains to be determined, its use in metastatic melanoma is FDA approved. Activating mutations in the c-kit receptor tyrosine kinase are found in a minority of cutaneous melanomas with chronic sun damage, but more commonly in mucosal and acral lentiginous subtypes. Overall, the number of patients with c-kit mutations is exceedingly small, but when present, they are largely identical to mutations found in gastrointestinal stromal tumors (GISTs); melanomas with activating c-kit mutations can have clinically meaningful responses to imatinib. No chemotherapy regimen has ever been shown to improve survival in metastatic melanoma, and the advances in immunotherapy and targeted therapy have relegated chemotherapy to the palliation of symptoms. Drugs with antitumor activity include dacarbazine (DTIC) or its orally administered analog temozolomide (TMZ), cisplatin and carboplatin, the taxanes (paclitaxel, alone or albumin-bound, and docetaxel), and carmustine (BCNU), which have reported response rates of 12–20%. 
Upon diagnosis of stage IV disease, whether by biopsy or diagnostic imaging, a sample of the patient’s tumor needs to undergo molecular testing to determine whether a druggable mutation (e.g., BRAF) is present. Analysis of a metastatic lesion is preferred, but any biopsy will suffice because there is little discordance between primary and metastatic lesions. Treatment algorithms start with the tumor’s BRAF status. For BRAF “wild-type” tumors, immunotherapy is recommended. For patients whose tumors harbor a BRAF mutation, initial therapy with either a BRAF inhibitor or immunotherapy is acceptable. Molecular testing may also include N-RAS and c-kit in appropriate tumors. The majority of patients still die from their melanoma, despite improvements in therapy. Therefore, enrollment in a clinical trial is always an important consideration, even for previously untreated patients. Most patients with stage IV disease will eventually progress despite advances in therapy, and many, because of disease burden, poor performance status, or concomitant illness, will be unsuitable for therapy. Therefore, a major focus of care should be the timely integration of palliative care and hospice. Skin examination and surveillance at least once a year are recommended for all patients with melanoma. The National Comprehensive Cancer Network (NCCN) guidelines for patients with stage IA–IIA melanoma recommend a comprehensive history and physical examination every 6–12 months for 5 years, and then annually as clinically indicated. Particular attention should be paid to the draining lymph nodes in stage I–III patients as resection of lymph node recurrences may still be curative. A CBC, LDH, and chest x-ray are recommended at the physician’s discretion, but are ineffective tools for the detection of occult metastases. Routine imaging for metastatic disease is not recommended at this time. 
For patients with higher stage disease (IIB–IV), imaging (chest x-ray, CT, and/or PET/CT scans) every 4–12 months can be considered. Because no discernible survival benefit has been demonstrated for routine surveillance, it is reasonable to perform scans only if clinically indicated. Nonmelanoma skin cancer (NMSC) is the most common cancer in the United States. Although tumor registries do not routinely gather data on the incidence of basal cell and squamous cell skin cancers, it is estimated that the annual incidence is 1.5–2 million cases in the United States. Basal cell carcinomas (BCCs) account for 70–80% of NMSCs. Squamous cell carcinomas (SCCs), which comprise ∼20% of NMSCs, are more significant because of their ability to metastasize and account for 2400 NMSC deaths annually. There has also been an increase in the incidence of nonepithelial skin cancer, especially Merkel cell carcinoma, with nearly 5000 new diagnoses and 3000 deaths annually. The most significant cause of BCC and SCC is UV exposure, whether through direct exposure to sunlight or by artificial UV light sources (tanning beds). Both UVA and UVB can induce DNA damage through free radical formation (UVA) or induction of pyrimidine dimers (UVB). The sun emits energy across the UV spectrum, whereas tanning bed equipment typically emits 97% UVA and 3% UVB. DNA damage induced by UV irradiation can result in cell death or repair of damaged DNA by nucleotide excision repair (NER). Inherited disorders of NER, such as xeroderma pigmentosum, are associated with a greatly increased incidence of skin cancer and help to establish the link between UV-induced DNA damage, inadequate DNA repair, and skin cancer. The genes damaged most commonly by UV in BCC involve the Hedgehog pathway (Hh). In SCC, p53 and N-RAS are commonly affected. There is a dose-response relationship between tanning bed use and the incidence of skin cancer. 
As few as four tanning bed visits per year confer a 15% increase in BCC and an 11% increase in SCC and melanoma. Tanning bed use as a teenager or young adult confers greater risk than comparable exposure in older individuals. Other associations include blond or red hair, blue or green eyes, a tendency to sunburn easily, and an outdoor occupation. The incidence of NMSC increases with decreasing latitude. Most tumors develop on sun-exposed areas of the head and neck. The risk of lip or oral SCC is increased with cigarette smoking. Human papillomaviruses and UV radiation may act as cocarcinogens. Solid organ transplant recipients on chronic immunosuppression have a 65-fold increase in SCC and a 10-fold increase in BCC. The frequency of skin cancer is proportional to the level and duration of immunosuppression and the extent of sun exposure before and after transplantation. SCCs in this population also demonstrate higher rates of local recurrence, metastasis, and mortality. Tumor necrosis factor (TNF) antagonists, increasingly used to treat inflammatory bowel disease and autoimmune disorders such as rheumatoid and psoriatic arthritis, may also confer an increased risk of NMSC. BRAF-targeted therapy can induce SCCs, including keratoacanthoma-type SCCs, in keratinocytes with preexisting H-RAS overexpression, which is present in approximately 60% of patients. Other risk factors include HIV infection, ionizing radiation, thermal burn scars, and chronic ulcerations. Albinism, xeroderma pigmentosum, Muir-Torre syndrome, Rombo’s syndrome, Bazex-Dupré-Christol syndrome, dyskeratosis congenita, and basal cell nevus syndrome (Gorlin syndrome) also increase the incidence of NMSC. Mutations in Hh genes encoding the tumor-suppressor patched homolog 1 (PTCH1) and smoothened homolog (SMO) occur in BCC. Aberrant PTCH1 signaling is propagated by the nuclear transcription factors Gli1 and Gli2, which are salient in the development of BCC and have led to the FDA approval of an oral SMO inhibitor, vismodegib, to treat advanced inoperable or metastatic BCC (Fig. 105-4). Vismodegib also reduces the incidence of BCC in patients with basal cell nevus syndrome who have PTCH1 mutations, affirming the importance of Hh in the onset of BCC.

FIGURE 105-4 Influence of vismodegib on the hedgehog (Hh) pathway. Normally, one of three Hh ligands (sonic [SHh], Indian, or desert) binds to patched homolog 1 (PTCH1), causing its degradation and release of smoothened homolog (SMO). The downstream events of SMO release are the activation of Gli1, Gli2, and Gli3 through the transcriptional regulator known as SUFU. Gli1 and Gli2 translocate to the nucleus and promote gene transcription. Vismodegib is an SMO antagonist that decreases the interaction between SMO and PTCH1, resulting in decreased Hh pathway signaling, gene transcription, and cell division. The downstream Hh pathway events inhibited by vismodegib are indicated in red.

CLINICAL PRESENTATION
Basal Cell Carcinoma BCC arises from epidermal basal cells. The least invasive of BCC subtypes, superficial BCC, consists of often subtle, erythematous scaling plaques that slowly enlarge and are most commonly seen on the trunk and proximal extremities (Fig. 105-5). This BCC subtype may be confused with benign inflammatory dermatoses, especially nummular eczema and psoriasis. BCC also can present as a small, slowly growing pearly nodule, often with tortuous telangiectatic vessels on its surface, rolled borders, and a central crust (nodular BCC). The occasional presence of melanin in this variant of nodular BCC (pigmented BCC) may lead to confusion with melanoma.
Morpheaform (fibrosing), infiltrative, and micronodular BCC, the most invasive and potentially aggressive subtypes, manifest as solitary, flat or slightly depressed, indurated, whitish, yellowish, or pink scar-like plaques. Borders are typically indistinct, and lesions can be subtle; thus, delay in treatment is common, and tumors can be more extensive than expected clinically.

Squamous Cell Carcinoma Primary cutaneous SCC is a malignant neoplasm of keratinizing epidermal cells. SCC has a variable clinical course, ranging from indolent to rapid growth kinetics, with the potential for metastasis to regional and distant sites. Commonly, SCC appears as an ulcerated erythematous nodule or superficial erosion on sun-exposed skin of the head, neck, trunk, and extremities (Fig. 105-5). It may also appear as a banal, firm, dome-shaped papule or rough-textured plaque. It is commonly mistaken for a wart or callus when the inflammatory response to the lesion is minimal. Clinically visible overlying telangiectasias are uncommon, although dotted or coiled vessels are a hallmark of SCC when viewed through a dermatoscope. The margins of this tumor may be ill defined, and fixation to underlying structures may occur (“tethering”). A very rapidly growing but low-grade form of SCC, called keratoacanthoma (KA), typically appears as a large dome-shaped papule with a central keratotic crater. Some KAs regress spontaneously without therapy, but because progression to metastatic SCC has been documented, KAs should be treated in the same manner as other types of cutaneous SCC. KAs are also associated with medications that target BRAF mutations and occur in 15–25% of patients receiving these medications. Actinic keratoses and actinic cheilitis (actinic keratoses occurring on the lip), both premalignant forms of SCC, present as hyperkeratotic papules on sun-exposed areas. The potential for malignant degeneration in untreated lesions ranges from 0.25 to 20%. SCC in situ, also called Bowen’s disease, is the intraepidermal form of SCC and usually presents as a scaling, erythematous plaque. As with invasive SCC, SCC in situ most commonly arises on sun-damaged skin but can occur anywhere on the body. Bowen’s disease forming secondary to infection with human papillomavirus (HPV) can arise on skin with minimal or no prior sun exposure, such as the buttock or posterior thigh. Treatment of premalignant and in situ lesions reduces the subsequent risk of invasive disease.

FIGURE 105-5 Cutaneous neoplasms. A. Non-Hodgkin’s lymphoma involves the skin with typical violaceous, “plum-colored” nodules. B. Squamous cell carcinoma is seen here as a hyperkeratotic, crusted, and somewhat eroded plaque on the lower lip. Sun-exposed skin in areas such as the head, neck, hands, and arms represents other typical sites of involvement. C. Actinic keratoses consist of hyperkeratotic erythematous papules and patches on sun-exposed skin. They arise in middle-aged to older adults and have some potential for malignant transformation. D. Metastatic carcinoma to the skin is characterized by inflammatory, often ulcerated dermal nodules. E. Mycosis fungoides is a cutaneous T cell lymphoma; plaque-stage lesions are seen in this patient. F. Keratoacanthoma is a low-grade squamous cell carcinoma that presents as an exophytic nodule with central keratinous debris. G. This basal cell carcinoma shows central ulceration and a pearly, rolled telangiectatic tumor border.

NATURAL HISTORY
Basal Cell Carcinoma The natural history of BCC is that of a slowly enlarging, locally invasive neoplasm. The degree of local destruction and risk of recurrence vary with the size, duration, location, and histologic subtype of the tumor. Location on the central face, ears, or scalp may portend a higher risk.
Small nodular, pigmented, cystic, or superficial BCCs respond well to most treatments. Large lesions and micronodular, infiltrative, and morpheaform subtypes may be more aggressive. The metastatic potential of BCC is low (0.0028–0.1% in immunocompetent patients), but the risk of recurrence or a new primary NMSC is about 40% over 5 years.

Squamous Cell Carcinoma The natural history of SCC depends on tumor and host characteristics. Tumors arising on sun-damaged skin have a lower metastatic potential than do those on non-sun-exposed areas. Cutaneous SCC metastasizes in 0.3–5.2% of individuals, most frequently to regional lymph nodes. Tumors occurring on the lower lip and ear develop regional metastases in 13% and 11% of patients, respectively, whereas the metastatic potential of SCC arising in scars, chronic ulcerations, and genital or mucosal surfaces is higher. Recurrent SCC has a much higher potential for metastatic disease, approaching 30%. Large, poorly differentiated, deep tumors with perineural or lymphatic invasion, multifocal tumors, and those arising in immunosuppressed patients often behave aggressively.

Treatments used for BCC include electrodesiccation and curettage (ED&C), excision, cryosurgery, radiation therapy (RT), laser therapy, Mohs micrographic surgery (MMS), topical 5-fluorouracil, photodynamic therapy (PDT), and topical immunomodulators such as imiquimod. The therapy chosen depends on tumor characteristics, including depth and location, patient age, medical status, and patient preference. ED&C remains the most commonly employed method for superficial, minimally invasive nodular BCCs and low-risk tumors (e.g., a small tumor of a less aggressive subtype in a favorable location). Wide local excision with standard margins is usually selected for invasive, ill-defined, and more aggressive subtypes of tumors, or for cosmetic reasons.
MMS, a specialized type of surgical excision that provides the best method for tumor removal while preserving uninvolved tissue, is associated with cure rates >98%. It is the preferred modality for lesions that are recurrent, in high-risk or cosmetically sensitive locations (including recurrent tumors in these locations), and in which maximal tissue conservation is critical (e.g., the eyelids, lips, ears, nose, and digits). RT can cure patients not considered surgical candidates and can be used as a surgical adjunct in high-risk tumors. Younger patients may not be good candidates for RT because of the risks of long-term carcinogenesis and radiodermatitis. Imiquimod can be used to treat superficial and smaller nodular BCCs, although it is not FDA-approved for nodular BCC. Topical 5-fluorouracil therapy should be limited to superficial BCC. PDT, which uses selective activation of a photoactive drug by visible light, has been used in patients with numerous tumors. Intralesional chemotherapy (5-fluorouracil and IFN) for NMSC has existed since the mid-twentieth century but is used so infrequently that recent consensus guidelines for the treatment of BCC and SCC do not include it. Like RT, it remains an option for well-selected patients who cannot or will not undergo surgery. Therapy for cutaneous SCC should be based on size, location, histologic differentiation, patient age, and functional status. Surgical excision and MMS are standard treatments. Cryosurgery and ED&C have been used for premalignant lesions and small, superficial, in situ primary tumors. Lymph node metastases are treated with surgical resection, RT, or both. Systemic chemotherapy combinations that include cisplatin can palliate patients with advanced disease. SCC and keratoacanthomas that develop in patients receiving BRAF-targeted therapy should be excised, but their development should not deter the continued use of BRAF therapy.
Retinoid prophylaxis can also be considered for patients receiving BRAF-targeted therapy, although no prospective studies have been completed thus far. The general principles for prevention are those described for melanoma earlier. Unique strategies for NMSC include active surveillance for patients on immunosuppressive medications or BRAF-targeted therapy. Chemoprophylaxis using synthetic retinoids and reduction of immunosuppression when possible may be useful in controlling new lesions and managing patients with multiple tumors. Neoplasms of cutaneous adnexae and sarcomas of fibrous, mesenchymal, fatty, and vascular tissues make up the remaining 1–2% of NMSCs. Merkel cell carcinoma (MCC) is a highly aggressive neural crest–derived malignancy with mortality rates approaching 33% at 3 years. An oncogenic Merkel cell polyomavirus is present in 80% of tumors. Many patients have detectable cellular or humoral immune responses to polyomavirus proteins, although this immune response is insufficient to eradicate the malignancy. Survival depends on extent of disease: at 3 years, 90% of patients survive with local disease, 52% with nodal involvement, and only 10% with distant disease. MCC incidence has tripled over the past 20 years, with an estimated 1600 cases per year in the United States. Immunosuppression can increase the incidence and worsen the prognosis. MCC lesions typically present as asymptomatic, rapidly expanding, bluish-red/violaceous tumors on sun-exposed skin of older white patients. Treatment is surgical excision with sentinel lymph node biopsy for accurate staging in patients with localized disease, often followed by adjuvant RT. Patients with extensive disease can be offered systemic chemotherapy; however, there is no convincing survival benefit. Whenever possible, a clinical trial should be considered for this rare but aggressive NMSC, especially in light of the potential for new treatments directed at the oncogenic virus that causes this malignancy.
Extramammary Paget’s disease is an uncommon apocrine malignancy arising from stem cells of the epidermis and characterized histologically by the presence of Paget cells. These tumors present as moist erythematous patches on anogenital or axillary skin of the elderly. Outcomes are generally good with site-directed surgery, and 5-year disease-specific survival is approximately 95% with localized disease. Advanced age and extensive disease at presentation confer a diminished prognosis. RT or topical imiquimod can be considered for more extensive disease. Local management may be challenging because these tumors often extend far beyond clinical margins; surgical excision with MMS has the highest cure rates. Similarly, MMS is the treatment of choice for other rare cutaneous tumors with extensive subclinical extension, such as dermatofibrosarcoma protuberans. Kaposi’s sarcoma (KS) is a soft tissue sarcoma of vascular origin that is induced by human herpesvirus 8. The incidence of KS increased dramatically during the AIDS epidemic but has now decreased tenfold with the institution of highly active antiretroviral therapy. Carl V. Washington, MD, and Hari Nadiminti, MD, contributed to this chapter in the 18th edition, and material from that chapter is included here. Claudia Taylor, MD, and Steven Kolker, MD, provided valued feedback and suggested many improvements to this chapter.

Everett E. Vokes

Epithelial carcinomas of the head and neck arise from the mucosal surfaces in the head and neck and typically are squamous cell in origin. This category includes tumors of the paranasal sinuses, the oral cavity, and the nasopharynx, oropharynx, hypopharynx, and larynx. Tumors of the salivary glands differ from the more common carcinomas of the head and neck in etiology, histopathology, clinical presentation, and therapy. They are rare and histologically highly heterogeneous. Thyroid malignancies are described in Chap. 405.
The number of new cases of head and neck cancers (oral cavity, pharynx, and larynx) in the United States was 53,640 in 2013, accounting for about 3% of adult malignancies; 11,520 people died from the disease. The worldwide incidence exceeds half a million cases annually. In North America and Europe, the tumors usually arise from the oral cavity, oropharynx, or larynx. The incidence of oropharyngeal cancers has been increasing in recent years. Nasopharyngeal cancer is more commonly seen in the Mediterranean countries and in the Far East, where it is endemic in some areas. Alcohol and tobacco use are the most significant risk factors for head and neck cancer, and when used together, they act synergistically. Smokeless tobacco is an etiologic agent for oral cancers. Other potential carcinogens include marijuana and occupational exposures such as nickel refining, exposure to textile fibers, and woodworking. Some head and neck cancers have a viral etiology. Epstein-Barr virus (EBV) infection is frequently associated with nasopharyngeal cancer, especially in endemic areas of the Mediterranean and Far East. EBV antibody titers can be measured to screen high-risk populations. Nasopharyngeal cancer has also been associated with consumption of salted fish and with indoor pollution. In Western countries, human papillomavirus (HPV) is associated with a rising incidence of tumors arising from the oropharynx, i.e., the tonsillar bed and base of tongue. Over 50% of oropharyngeal tumors are caused by HPV in the United States. HPV 16 is the dominant viral subtype, although HPV 18 and other oncogenic subtypes are seen as well. Alcohol- and tobacco-related cancers, on the other hand, have decreased in incidence. HPV-related oropharyngeal cancer occurs in a younger patient population and is associated with increased numbers of sexual partners and oral sexual practices. It is associated with a better prognosis, especially for nonsmokers. Dietary factors may contribute.
The incidence of head and neck cancer is higher in people with the lowest consumption of fruits and vegetables. Certain vitamins, including carotenoids, may be protective if included in a balanced diet. Supplements of retinoids, such as cis-retinoic acid, have not been shown to prevent head and neck cancers (or lung cancer) and may increase the risk in active smokers. No specific risk factors or environmental carcinogens have been identified for salivary gland tumors.

HISTOPATHOLOGY, CARCINOGENESIS, AND MOLECULAR BIOLOGY
Squamous cell head and neck cancers are divided into well-differentiated, moderately well-differentiated, and poorly differentiated categories. Poorly differentiated tumors have a worse prognosis than well-differentiated tumors. For nasopharyngeal cancers, the less common differentiated squamous cell carcinoma is distinguished from nonkeratinizing and undifferentiated carcinoma (lymphoepithelioma), which contains infiltrating lymphocytes and is commonly associated with EBV. Salivary gland tumors can arise from the major (parotid, submandibular, sublingual) or minor salivary glands (located in the submucosa of the upper aerodigestive tract). Most parotid tumors are benign, but half of submandibular and sublingual gland tumors and most minor salivary gland tumors are malignant. Malignant tumors include mucoepidermoid and adenoid cystic carcinomas and adenocarcinomas. The mucosal surface of the entire pharynx is exposed to alcohol- and tobacco-related carcinogens and is at risk for the development of a premalignant or malignant lesion. Erythroplakia (a red patch) or leukoplakia (a white patch) can be histopathologically classified as hyperplasia, dysplasia, carcinoma in situ, or carcinoma. However, most head and neck cancer patients do not present with a history of premalignant lesions. Multiple synchronous or metachronous cancers can also be observed.
In fact, over time, patients with early-stage head and neck cancer are at greater risk of dying from a second malignancy than from a recurrence of the primary disease. Second head and neck malignancies are usually not therapy-induced; they reflect the exposure of the upper aerodigestive mucosa to the same carcinogens that caused the first cancer. These second primaries develop in the head and neck area, the lung, or the esophagus. Thus, computed tomography (CT) screening for lung cancer should be considered in heavy smokers who have already developed a head and neck cancer. Rarely, patients can develop a radiation therapy–induced sarcoma after having undergone prior radiotherapy for a head and neck cancer. Much progress has been made in describing the molecular features of head and neck cancer, allowing investigators to characterize the genetic and epigenetic alterations and the mutational spectrum of these tumors. Early reports demonstrated frequent overexpression of the epidermal growth factor receptor (EGFR). Overexpression was shown to correlate with poor prognosis. However, it has not proved to be a good predictor of tumor response to EGFR inhibitors, which are successful in only about 10–15% of patients. p53 mutations are also found frequently, along with alterations in other major oncogenic driver pathways, including mitogenic signaling, Notch signaling, and cell cycle regulation. The PI3K pathway is frequently altered, especially in HPV-positive tumors, where it is the only mutated cancer gene identified to date. Overall, these alterations affect mitogenic signaling, genetic stability, cellular proliferation, and differentiation. HPV is known to act through inhibition of the p53 and RB tumor-suppressor genes, thereby initiating the carcinogenic process, and has a mutational spectrum distinct from that of alcohol- and tobacco-related tumors. Most tobacco-related head and neck cancers occur in patients older than age 60 years.
HPV-related malignancies are frequently diagnosed in younger patients, usually in their forties or fifties, whereas EBV-related nasopharyngeal cancer can occur at any age, including in teenagers. The manifestations vary according to the stage and primary site of the tumor. Patients with nonspecific signs and symptoms in the head and neck area should be evaluated with a thorough otolaryngologic examination, particularly if symptoms persist longer than 2–4 weeks. Men are more frequently affected than women by head and neck cancers, including HPV-positive tumors. Cancer of the nasopharynx typically does not cause early symptoms. However, it may cause unilateral serous otitis media due to obstruction of the eustachian tube, unilateral or bilateral nasal obstruction, or epistaxis. Advanced nasopharyngeal carcinoma causes neuropathies of the cranial nerves due to skull base involvement. Carcinomas of the oral cavity present as nonhealing ulcers, changes in the fit of dentures, or painful lesions. Tumors of the tongue base or oropharynx can cause decreased tongue mobility and alterations in speech. Cancers of the oropharynx or hypopharynx rarely cause early symptoms, but they may cause sore throat and/or otalgia. HPV-related tumors frequently present with neck lymphadenopathy as the first sign. Hoarseness may be an early symptom of laryngeal cancer, and persistent hoarseness requires referral to a specialist for indirect laryngoscopy and/or radiographic studies. If a head and neck lesion treated initially with antibiotics does not resolve in a short period, further workup is indicated; to simply continue antibiotic treatment may be to lose the chance of early diagnosis of a malignancy. Advanced head and neck cancers in any location can cause severe pain, otalgia, airway obstruction, cranial neuropathies, trismus, odynophagia, dysphagia, decreased tongue mobility, fistulas, skin involvement, and massive cervical lymphadenopathy, which may be unilateral or bilateral.
Some patients have enlarged lymph nodes even though no primary lesion can be detected by endoscopy or biopsy; these patients are considered to have carcinoma of unknown primary (Fig. 106-1). If the enlarged nodes are located in the upper neck and the tumor cells are of squamous cell histology, the malignancy probably arose from a mucosal surface in the head or neck. Tumor cells in supraclavicular lymph nodes may also arise from a primary site in the chest or abdomen. The physical examination should include inspection of all visible mucosal surfaces and palpation of the floor of the mouth and of the tongue and neck. In addition to tumors themselves, leukoplakia (a white mucosal patch) or erythroplakia (a red mucosal patch) may be observed; these “premalignant” lesions can represent hyperplasia, dysplasia, or carcinoma in situ and require biopsy. Further examination should be performed by a specialist. Additional staging procedures include CT of the head and neck to identify the extent of the disease. Patients with lymph node involvement should have a CT scan of the chest and upper abdomen to screen for distant metastases. In heavy smokers, the CT scan of the chest can also serve as a screening tool to rule out a second lung primary tumor. A positron emission tomography (PET) scan may also be administered and can help to identify or exclude distant metastases. The definitive staging procedure is an endoscopic examination under anesthesia, which may include laryngoscopy, esophagoscopy, and bronchoscopy; during this procedure, multiple biopsy samples are obtained to establish a primary diagnosis, define the extent of primary disease, and identify any additional premalignant lesions or second primaries.

FIGURE 106-1 Evaluation of a patient with cervical adenopathy without a primary mucosal lesion; a diagnostic workup. FNA, fine-needle aspiration. [In the flow diagram, office physical examination is followed by FNA or excision of the lymph node; squamous cell carcinoma prompts panendoscopy with directed biopsies of the tonsils, nasopharynx, base of tongue, and pyriform sinus in search of an occult primary, whereas lymphoma, sarcoma, or salivary gland tumor prompts a tumor-specific workup.]

Head and neck tumors are classified according to the tumor-node-metastasis (TNM) system of the American Joint Committee on Cancer (Fig. 106-2). This classification varies according to the specific anatomic subsite. In general, primary tumors are classified as T1 to T3 by increasing size, whereas T4 usually represents invasion of another structure such as bone, muscle, or root of tongue. Lymph nodes are staged by size, number, and location (ipsilateral vs. contralateral to the primary). Distant metastases are found in <10% of patients at initial diagnosis; microscopic involvement of the lungs, bones, or liver is more common, particularly in patients with advanced neck lymph node disease. Modern imaging techniques may increase the number of patients with clinically detectable distant metastases in the future. In patients with lymph node involvement and no visible primary, the diagnosis should be made by lymph node excision (Fig. 106-1). If the results indicate squamous cell carcinoma, a panendoscopy should be performed, with biopsy of all suspicious-appearing areas and directed biopsies of common primary sites, such as the nasopharynx, tonsil, tongue base, and pyriform sinus. HPV-positive tumors especially can have small primary tumors that spread early to locoregional lymph nodes. Patients with head and neck cancer can be grossly categorized into three clinical groups: those with localized disease, those with locally or regionally advanced disease (lymph node positive), and those with recurrent and/or metastatic disease.
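The three broad management groups just described can be expressed as a simple categorization. This is a hedged illustrative sketch only; the field and function names are hypothetical, and real TNM staging is subsite-specific and far more detailed.

```python
# Hedged sketch of the three broad clinical groups described in the text.
# Names are hypothetical; actual AJCC/TNM staging is subsite-specific
# and far more detailed than this illustration.

from dataclasses import dataclass

@dataclass
class Presentation:
    node_positive: bool
    distant_metastasis: bool
    recurrent: bool = False

def clinical_group(p: Presentation) -> str:
    """Map a presentation onto the three management groups named in the text."""
    if p.recurrent or p.distant_metastasis:
        return "recurrent and/or metastatic disease"
    if p.node_positive:
        return "locally or regionally advanced disease"
    return "localized disease"

print(clinical_group(Presentation(node_positive=False, distant_metastasis=False)))
```

The ordering of the checks mirrors the text: recurrence or distant spread defines the palliative-intent group regardless of nodal status, nodal involvement alone defines locoregionally advanced disease, and everything else is localized.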
Comorbidities associated with tobacco and alcohol abuse can affect treatment outcome and define long-term risks for patients who are cured of their disease. Nearly one-third of patients have localized disease, that is, T1 or T2 (stage I or stage II) lesions without detectable lymph node involvement or distant metastases. These patients are treated with curative intent by either surgery or radiation therapy. The choice of modality differs according to anatomic location and institutional expertise. Radiation therapy is often preferred for laryngeal cancer to preserve voice function, and surgery is preferred for small lesions in the oral cavity to avoid the long-term complications of radiation, such as xerostomia and dental decay. Overall 5-year survival is 60–90%. Most recurrences occur within the first 2 years following diagnosis and are usually local. Locally or regionally advanced disease, i.e., disease with a large primary tumor and/or lymph node metastases, is the stage of presentation for >50% of patients. Such patients can also be treated with curative intent, but not with surgery or radiation therapy alone. Combined-modality therapy including surgery, radiation therapy, and chemotherapy is most successful. It can be administered as induction chemotherapy (chemotherapy before surgery and/or radiotherapy) or as concomitant (simultaneous) chemotherapy and radiation therapy. The latter is currently most commonly used and supported by the best evidence. Five-year survival rates exceed 50% in many trials, but part of this increased survival may be due to an increasing fraction of study populations with HPV-related tumors, which carry a better prognosis. HPV testing of newly diagnosed tumors is now performed for most patients at the time of diagnosis, and clinical trials for HPV-related tumors are focused on exploring reductions in treatment intensity, especially radiation dose, in order to ameliorate long-term toxicities (fibrosis, swallowing dysfunction).
In patients with intermediate-stage tumors (stage III and early stage IV), concomitant chemoradiotherapy can be administered either as a primary treatment for patients with unresectable disease, to pursue an organ-preserving approach, or in the postoperative setting for intermediate-stage resectable tumors. Induction Chemotherapy In this strategy, patients receive chemotherapy (current standard is a three-drug regimen of docetaxel, cisplatin, and fluorouracil [5-FU]) before surgery and radiation therapy. Most patients who receive three cycles show tumor reduction, and the response is clinically “complete” in up to half of patients. This “sequential” multimodality therapy allows for organ preservation (omission of surgery) in patients with laryngeal and hypopharyngeal cancer, and it has been shown to result in higher cure rates compared with radiotherapy alone. Concomitant Chemoradiotherapy With the concomitant strategy, chemotherapy and radiation therapy are given simultaneously rather than in sequence. Tumor recurrences from head and neck cancer develop most commonly locoregionally (in the head and neck area of the primary and draining lymph nodes). The concomitant approach is aimed at enhancing tumor cell killing by radiation therapy in the presence of chemotherapy (radiation enhancement) and is a conceptually attractive approach for bulky tumors. Toxicity (especially mucositis, grade 3 or 4 in 70–80%) is increased with concomitant chemoradiotherapy. However, meta-analyses of randomized trials document an improvement in 5-year survival of 8% with concomitant chemotherapy and radiation therapy. Results seem more favorable in recent trials as more active drugs or more intensive radiotherapy schedules are used. In addition, concomitant chemoradiotherapy produces better laryngectomy-free survival (organ preservation) than radiation therapy alone in patients with advanced larynx cancer.
The use of radiation therapy together with cisplatin has also produced improved survival in patients with advanced nasopharyngeal cancer. The outcome of HPV-related cancers seems to be especially favorable following cisplatin-based chemoradiotherapy. The success of concomitant chemoradiotherapy in patients with unresectable disease has led to the testing of a similar approach in patients with resected intermediate-stage disease as a postoperative therapy. Concomitant chemoradiotherapy produces a significant improvement over postoperative radiation therapy alone for patients whose tumors demonstrate higher risk features, such as extracapsular spread beyond involved lymph nodes, involvement of multiple lymph nodes, or positive margins at the primary site following surgery. A monoclonal antibody to EGFR (cetuximab) increases survival rates when administered during radiotherapy. EGFR blockade results in radiation sensitization and has milder systemic side effects than traditional chemotherapy agents, although an acneiform skin rash is commonly observed. Nevertheless, the integration of cetuximab into current standard chemoradiotherapy regimens has failed to show additional improvement in survival and is not recommended. Five to ten percent of patients present with metastatic disease, and 30–50% of patients with locoregionally advanced disease experience recurrence, frequently outside the head and neck region. Patients with recurrent and/or metastatic disease are, with few exceptions, treated with palliative intent. Some patients may require local or regional radiation therapy for pain control, but most are given chemotherapy. Response rates to chemotherapy average only 30–50%; the durations of response are short, and the median survival time is 8–10 months. Therefore, chemotherapy provides transient symptomatic benefit. Drugs with single-agent activity in this setting include methotrexate, 5-FU, cisplatin, paclitaxel, and docetaxel.
Combinations of cisplatin with 5-FU, carboplatin with 5-FU, and cisplatin or carboplatin with paclitaxel or docetaxel are frequently used. EGFR-directed therapies, including monoclonal antibodies (e.g., cetuximab) and tyrosine kinase inhibitors (TKIs) of the EGFR signaling pathway (e.g., erlotinib or gefitinib), have single-agent activity of approximately 10%. Side effects are usually limited to an acneiform rash and diarrhea (for the TKIs). The addition of cetuximab to standard combination chemotherapy with cisplatin or carboplatin and 5-FU was shown to result in a significant increase in median survival. Drugs targeting specific mutations are under investigation, but no such strategy has yet been shown to be feasible in head and neck cancer. Complications from treatment of head and neck cancer usually correlate with the extent of surgery and the exposure of normal tissue structures to radiation. Increasingly, the extent of surgery has been limited, or surgery has been replaced entirely by chemotherapy and radiation therapy as the primary approach. Acute complications of radiation include mucositis and dysphagia. Long-term complications include xerostomia, loss of taste, decreased tongue mobility, second malignancies, dysphagia, and neck fibrosis. The complications of chemotherapy vary with the regimen used but usually include myelosuppression, mucositis, nausea and vomiting, and nephrotoxicity (with cisplatin). The mucosal side effects of therapy can lead to malnutrition and dehydration. Many centers address issues of dentition before starting treatment, and some place feeding tubes to ensure control of hydration and nutrition intake. About 50% of patients develop hypothyroidism from the treatment; thus, thyroid function should be monitored. Most benign salivary gland tumors are treated with surgical excision, and patients with invasive salivary gland tumors are treated with surgery and radiation therapy. 
These tumors may recur regionally; adenoid cystic carcinoma has a tendency to recur along the nerve tracks. Distant metastases may occur as late as 10–20 years after the initial diagnosis. For metastatic disease, therapy is given with palliative intent, usually chemotherapy with doxorubicin and/or cisplatin. Identification of novel agents with activity in these tumors is a high priority.

Neoplasms of the Lung
Leora Horn, Christine M. Lovly, David H. Johnson

Lung cancer, which was rare prior to 1900, with fewer than 400 cases described in the medical literature, is considered a disease of modern man. By the mid-twentieth century, lung cancer had become epidemic and firmly established as the leading cause of cancer-related death in North America and Europe, killing over three times as many men as prostate cancer and nearly twice as many women as breast cancer. This fact is particularly troubling because lung cancer is one of the most preventable of all the major malignancies. Tobacco consumption is the primary cause of lung cancer, a reality firmly established in the mid-twentieth century and codified with the release of the U.S. Surgeon General's 1964 report on the health effects of tobacco smoking. Following the report, cigarette use started to decline in North America and parts of Europe, and with it, so did the incidence of lung cancer. To date, the decline in lung cancer is seen most clearly in men; only recently has the decline become apparent among women in the United States. Unfortunately, in many parts of the world, especially in countries with developing economies, cigarette use continues to increase, and along with it, the incidence of lung cancer is rising as well. 
Although tobacco smoking remains the primary cause of lung cancer worldwide, approximately 60% of new lung cancers in the United States occur in former smokers (smoked ≥100 cigarettes per lifetime, quit ≥1 year earlier), many of whom quit decades ago, or in never smokers (smoked <100 cigarettes per lifetime). Moreover, one in five women and one in twelve men diagnosed with lung cancer have never smoked. Given the magnitude of the problem, it is incumbent on every internist to have a general knowledge of lung cancer and its management. Lung cancer is the most common cause of cancer death among American men and women. More than 225,000 individuals will be diagnosed with lung cancer in the United States in 2013, and over 150,000 individuals will die from the disease. The incidence of lung cancer peaked among men in the late 1980s and has plateaued in women. Lung cancer is rare below age 40, with rates increasing until age 80, after which the rate tapers off. The projected lifetime probability of developing lung cancer is estimated to be approximately 8% among males and approximately 6% among females. The incidence of lung cancer varies by racial and ethnic group, with the highest age-adjusted incidence rates among African Americans. The excess in age-adjusted rates among African Americans occurs only among men, but examination of age-specific rates shows that below age 50, mortality from lung cancer is more than 25% higher among African American than Caucasian women. Incidence and mortality rates among Hispanics and Native and Asian Americans are approximately 40–50% of those of whites. Cigarette smokers have a 10-fold or greater increased risk of developing lung cancer compared to those who have never smoked. A deep sequencing study suggested that one genetic mutation is induced for every 15 cigarettes smoked. 
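The back-of-the-envelope arithmetic behind these exposure figures can be sketched in a few lines of Python. The helper names and the one-pack-per-day scenario are our own; only the 20-cigarettes-per-pack convention and the one-mutation-per-15-cigarettes estimate come from the text above.

```python
# Illustrative arithmetic only; function names and the worked scenario are
# hypothetical, not part of the chapter.

CIGARETTES_PER_PACK = 20

def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Pack-years = packs smoked per day x years of smoking."""
    return packs_per_day * years_smoked

def estimated_mutations(pack_yrs: float, cigs_per_mutation: int = 15) -> int:
    """Rough mutation count implied by the deep-sequencing estimate
    of one induced mutation per 15 cigarettes smoked."""
    total_cigarettes = pack_yrs * CIGARETTES_PER_PACK * 365
    return round(total_cigarettes / cigs_per_mutation)

# A one-pack-per-day smoker for 30 years has a 30 pack-year history,
# implying on the order of ~14,600 induced mutations by this estimate.
print(pack_years(1.0, 30))        # 30.0
print(estimated_mutations(30.0))  # 14600
```

The point of the sketch is only that cumulative exposure, not daily intensity alone, drives the mutation estimate.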
The risk of lung cancer is lower among persons who quit smoking than among those who continue smoking; former smokers have about a ninefold increased risk of developing lung cancer compared with never smokers, versus the 20-fold excess among those who continue to smoke. The magnitude of the risk reduction increases with the length of time since quitting, although even long-term former smokers generally have higher risks of lung cancer than those who never smoked. Cigarette smoking has been shown to increase the risk of all the major lung cancer cell types. Environmental tobacco smoke (ETS), or second-hand smoke, is also an established cause of lung cancer. The risk from ETS is less than that from active smoking, with about a 20–30% increase in lung cancer observed among never smokers married for many years to smokers, in comparison with the 2000% increase among continuing active smokers. Although cigarette smoking causes the majority of lung cancers, several other risk factors have been identified, including occupational exposures to asbestos, arsenic, bis(chloromethyl) ether, hexavalent chromium, mustard gas, nickel (as in certain nickel-refining processes), and polycyclic aromatic hydrocarbons. Occupational observations have also provided insight into possible mechanisms of lung cancer induction. For example, the risk of lung cancer among asbestos-exposed workers is increased primarily among those with underlying asbestosis, raising the possibility that the scarring and inflammation produced by this fibrotic nonmalignant lung disease may in many cases (although likely not all) be the trigger for asbestos-induced lung cancer. Several other occupational exposures have been associated with increased rates of lung cancer, but the causal nature of the association is less clear. The risk of lung cancer appears to be higher among individuals with low fruit and vegetable intake during adulthood. 
This observation led to hypotheses that specific nutrients, in particular retinoids and carotenoids, might have chemopreventive effects against lung cancer. However, randomized trials failed to validate this hypothesis; in fact, these studies found that the incidence of lung cancer was increased among smokers receiving carotenoid supplementation. Ionizing radiation is also an established lung carcinogen, most convincingly demonstrated by studies showing increased rates of lung cancer among survivors of the atom bombs dropped on Hiroshima and Nagasaki and large excesses among workers exposed to alpha irradiation from radon in underground uranium mining. Prolonged exposure to low-level radon in homes might impart a risk of lung cancer equal to or greater than that of ETS. Prior lung diseases such as chronic bronchitis, emphysema, and tuberculosis have been linked to increased risks of lung cancer as well. Smoking Cessation Given the undeniable link between cigarette smoking and lung cancer (not even addressing other tobacco-related illnesses), physicians must promote tobacco abstinence. Physicians also must help their patients who smoke to stop smoking. Smoking cessation, even well into middle age, can minimize an individual's subsequent risk of lung cancer. Stopping tobacco use before middle age avoids more than 90% of the lung cancer risk attributable to tobacco. However, there is little health benefit derived from just "cutting back." Importantly, smoking cessation can even be beneficial in individuals with an established diagnosis of lung cancer, as it is associated with improved survival, fewer side effects from therapy, and an overall improvement in quality of life. Moreover, smoking can alter the metabolism of many chemotherapy drugs, potentially adversely altering the toxicities and therapeutic benefits of the agents. Consequently, it is important to promote smoking cessation even after the diagnosis of lung cancer is established. 
Physicians need to understand the essential elements of smoking cessation therapy. The individual must want to stop smoking and must be willing to work hard to achieve the goal of smoking abstinence. Self-help strategies alone only marginally affect quit rates, whereas individual and combined pharmacotherapies in combination with counseling can significantly increase rates of cessation. Therapy with an antidepressant (e.g., bupropion), nicotine replacement therapy, and varenicline (an α4β2 nicotinic acetylcholine receptor partial agonist) are approved by the U.S. Food and Drug Administration (FDA) as first-line treatments for nicotine dependence. However, both bupropion and varenicline have been reported to increase suicidal ideation and must be used with caution. In a randomized trial, varenicline was shown to be more efficacious than bupropion or placebo. Prolonged use of varenicline beyond the initial induction phase proved useful in maintaining smoking abstinence. Clonidine and nortriptyline are recommended as second-line treatments. Of note, reducing cigarettes smoked before quit day and quitting abruptly, with no prior reduction, yield comparable quit rates; therefore, patients can be given the choice to quit in either of these ways (Chap. 470). Inherited Predisposition to Lung Cancer Exposure to environmental carcinogens, such as those found in tobacco smoke, induces or facilitates the transformation of bronchial epithelial cells to the malignant phenotype. The contribution of carcinogens to transformation is modulated by polymorphic variations in genes that affect aspects of carcinogen metabolism. Certain genetic polymorphisms of the P450 enzyme system, specifically CYP1A1, and chromosome fragility are associated with the development of lung cancer. These genetic variations occur at relatively high frequency in the population, but their contribution to an individual's lung cancer risk is generally low. 
However, because of their population frequency, the overall impact on lung cancer risk could be high. In addition, environmental factors, as modified by inherited modulators, likely affect specific genes by deregulating important pathways to enable the cancer phenotype. First-degree relatives of lung cancer probands have a two- to threefold excess risk of lung cancer and other cancers, many of which are not smoking-related. These data suggest that specific genes and/or genetic variants may contribute to susceptibility to lung cancer. However, very few such genes have yet been identified. Individuals with inherited mutations in the RB (patients with retinoblastoma living to adulthood) and p53 (Li-Fraumeni syndrome) genes may develop lung cancer. Common gene variants involved in lung cancer have recently been identified through large, collaborative, genome-wide association studies. These studies identified three separate loci that are associated with lung cancer (5p15, 6p21, and 15q25) and include genes that regulate acetylcholine nicotinic receptors and telomerase production. A rare germline mutation (T790M) involving the epidermal growth factor receptor (EGFR) may be linked to lung cancer susceptibility in never smokers. Likewise, a susceptibility locus on chromosome 6q greatly increases lung cancer risk among light and never smokers. Although progress has been made, a significant amount of work remains to be done in identifying heritable risk factors for lung cancer. Currently, no molecular criteria are suitable to select patients for more intense screening programs or for specific chemopreventive strategies. The World Health Organization (WHO) defines lung cancer as tumors arising from the respiratory epithelium (bronchi, bronchioles, and alveoli). 
The WHO classification system divides epithelial lung cancers into four major cell types: small-cell lung cancer (SCLC), adenocarcinoma, squamous cell carcinoma, and large-cell carcinoma; the latter three types are collectively known as non-small-cell carcinomas (NSCLCs) (Fig. 107-1, traditional histologic view of lung cancer). Small-cell carcinomas consist of small cells with scant cytoplasm, ill-defined cell borders, finely granular nuclear chromatin, absent or inconspicuous nucleoli, and a high mitotic count. SCLC may be distinguished from NSCLC by the presence of neuroendocrine markers including CD56, neural cell adhesion molecule (NCAM), synaptophysin, and chromogranin. In North America, adenocarcinoma is the most common histologic type of lung cancer. Adenocarcinomas possess glandular differentiation or mucin production and may show acinar, papillary, lepidic, or solid features or a mixture of these patterns. Squamous cell carcinomas of the lung are morphologically identical to extrapulmonary squamous cell carcinomas and cannot be distinguished by immunohistochemistry alone. Squamous cell tumors, which arise from bronchial epithelium, show keratinization and/or intercellular bridges; these tumors tend to consist of sheets of cells rather than the three-dimensional groups of cells characteristic of adenocarcinomas. Large-cell carcinomas comprise less than 10% of lung carcinomas. These tumors lack the cytologic and architectural features of small-cell carcinoma and glandular or squamous differentiation. Together, these four histologic types account for approximately 90% of all epithelial lung cancers. All histologic types of lung cancer can develop in current and former smokers, although squamous and small-cell carcinomas are most commonly associated with heavy tobacco use. 
Through the first half of the twentieth century, squamous carcinoma was the most common subtype of NSCLC diagnosed in the United States. However, with the decline in cigarette consumption over the past four decades, adenocarcinoma has become the most frequent histologic subtype of lung cancer in the United States, as both squamous carcinoma and small-cell carcinoma are on the decline. In lifetime never smokers, former light smokers (<10 pack-year history), women, and younger adults (<60 years), adenocarcinoma tends to be the most common form of lung cancer. Historically, the major pathologic distinction was simply between SCLC and NSCLC, because these tumors have quite different natural histories and therapeutic approaches (see below). Likewise, until fairly recently, there was no apparent need to distinguish among the various subtypes of NSCLC because there were no clear differences in therapeutic outcome based on histology alone. However, this perspective changed radically in 2004 with the recognition that a small percentage of lung adenocarcinomas harbor mutations in EGFR that render those tumors exquisitely sensitive to inhibitors of the EGFR tyrosine kinase (e.g., gefitinib and erlotinib). This observation, coupled with the subsequent identification of other "actionable" molecular alterations (Table 107-1) and the recognition that some active chemotherapy agents perform quite differently in squamous carcinomas versus adenocarcinomas, firmly established the need for modifications to the then-existing 2004 WHO lung cancer classification system. The revised 2011 classification system, developed jointly by the International Association for the Study of Lung Cancer, the American Thoracic Society, and the European Respiratory Society, provides an integrated approach to the classification of lung adenocarcinomas that includes clinical, molecular, radiographic, and pathologic information. 
It also recognizes that most lung cancers present in an advanced stage and are often diagnosed based on small biopsies or cytologic specimens, rendering clear histologic distinctions difficult if not impossible. Previously, in the 2004 classification system, tumors failing to show definite glandular or squamous morphology in a small biopsy or cytologic specimen were simply classified as non-small-cell carcinoma, not otherwise specified. However, because the distinction between adenocarcinoma and squamous carcinoma is now viewed as critical to optimal therapeutic decision making, the modified classification approach recommends these lesions be further characterized using a limited special stain workup. This distinction can be achieved using a single marker for adenocarcinoma (thyroid transcription factor-1 or napsin-A) plus a squamous marker (p40 or p63) and/or mucin stains. The modified classification system also recommends preservation of sufficient specimen material for appropriate molecular testing necessary to help guide therapeutic decision making (see below). Another significant modification to the WHO classification system is the discontinuation of the terms bronchioloalveolar carcinoma and mixed-subtype adenocarcinoma. The term bronchioloalveolar carcinoma was dropped due to its inconsistent use and because it caused confusion in routine clinical care and research. As formerly used, the term encompassed at least five different entities with diverse clinical and molecular properties. The terms adenocarcinoma in situ and minimally invasive adenocarcinoma are now recommended for small solitary adenocarcinomas (≤3 cm) with either pure lepidic growth (term used to describe single-layered growth of atypical cuboidal cells coating the alveolar walls) or predominant lepidic growth with ≤5 mm invasion. Individuals with these entities experience 100% or near 100% 5-year disease-free survival with complete tumor resection. 
Invasive adenocarcinomas, representing 70–90% of surgically resected lung adenocarcinomas, are now classified by their predominant pattern: lepidic, acinar, papillary, or solid. The lepidic-predominant subtype has a favorable prognosis, the acinar- and papillary-predominant subtypes have an intermediate prognosis, and the solid-predominant subtype has a poor prognosis. The terms signet ring and clear cell adenocarcinoma have been eliminated from the variants of invasive lung adenocarcinoma, whereas the term micropapillary, a subtype with a particularly poor prognosis, has been added. Although EGFR mutations are encountered most frequently in nonmucinous adenocarcinomas with a lepidic- or papillary-predominant pattern, most adenocarcinoma subtypes can harbor EGFR or KRAS mutations. The same is true of ALK, RET, and ROS1 rearrangements. What was previously termed mucinous bronchioloalveolar carcinoma is now called invasive mucinous adenocarcinoma. These tumors generally lack EGFR mutations and show a strong correlation with KRAS mutations. Overall, the revised WHO reclassification of lung cancer addresses important advances in diagnosis and treatment, most importantly the critical advances in understanding the specific genes and molecular pathways that initiate and sustain lung tumorigenesis, which have resulted in new "targeted" therapies with improved specificity and better antitumor efficacy. The diagnosis of lung cancer most often rests on morphologic or cytologic features correlated with clinical and radiographic findings. Immunohistochemistry may be used to verify neuroendocrine differentiation within a tumor, with markers such as neuron-specific enolase (NSE), CD56 or NCAM, synaptophysin, chromogranin, and Leu7. 
Immunohistochemistry is also helpful in differentiating primary from metastatic adenocarcinomas; thyroid transcription factor-1 (TTF-1), identified in tumors of thyroid and pulmonary origin, is positive in over 70% of pulmonary adenocarcinomas and is a reliable indicator of primary lung cancer, provided a thyroid primary has been excluded. A negative TTF-1, however, does not exclude the possibility of a lung primary. TTF-1 is also positive in neuroendocrine tumors of pulmonary and extrapulmonary origin. Napsin-A (Nap-A) is an aspartic protease that plays an important role in the maturation of surfactant protein B and is expressed in the cytoplasm of type II pneumocytes. In several studies, Nap-A has been reported in >90% of primary lung adenocarcinomas. Notably, a combination of Nap-A and TTF-1 is useful in distinguishing primary lung adenocarcinoma (Nap-A positive, TTF-1 positive) from primary lung squamous cell carcinoma (Nap-A negative, TTF-1 negative) and primary SCLC (Nap-A negative, TTF-1 positive). Cytokeratins 7 and 20 used in combination can help narrow the differential diagnosis; nonsquamous NSCLC, SCLC, and mesothelioma may stain positive for CK7 and negative for CK20, whereas squamous cell lung cancer often will be both CK7 and CK20 negative. p63 is a useful marker for the detection of NSCLCs with squamous differentiation when used in cytologic pulmonary samples. Mesothelioma can be easily identified ultrastructurally, but it has historically been difficult to differentiate from adenocarcinoma through morphology and immunohistochemical staining. In the last few years, several markers have proven more helpful, including CK5/6, calretinin, and Wilms tumor gene-1 (WT-1), all of which show positivity in mesothelioma. Cancer is a disease involving dynamic changes in the genome. 
As proposed by Hanahan and Weinberg, virtually all cancer cells acquire six hallmark capabilities: self-sufficiency in growth signals, insensitivity to antigrowth signals, evasion of apoptosis, limitless replicative potential, sustained angiogenesis, and tissue invasion and metastasis. The order in which these hallmark capabilities are acquired appears quite variable and can differ from tumor to tumor. The events leading to acquisition of these hallmarks can vary widely, although, broadly, cancers arise as a result of the accumulation of gain-of-function mutations in oncogenes and loss-of-function mutations in tumor-suppressor genes. Further complicating the study of lung cancer, the sequence of events that leads to disease is clearly different for the various histopathologic entities. The exact cell of origin for lung cancers is not clearly defined, and whether one cell of origin leads to all histologic forms of lung cancer is unclear. However, for lung adenocarcinoma, evidence suggests that type II epithelial cells (or alveolar epithelial cells) have the capacity to give rise to tumors. For SCLC, cells of neuroendocrine origin have been implicated as precursors. For cancers in general, one theory holds that a small subset of the cells within a tumor (i.e., "stem cells") is responsible for the full malignant behavior of the tumor. As part of this concept, the large bulk of the cells in a cancer are "offspring" of these cancer stem cells: while clonally related to the cancer stem cell subpopulation, most cells by themselves cannot regenerate the full malignant phenotype. The stem cell concept may explain the failure of standard medical therapies to eradicate lung cancers, even when there is a clinical complete response: disease recurs because therapies do not eliminate the stem cell component, which may be more resistant to chemotherapy. The precise identity of human lung cancer stem cells, however, has yet to be established. 
Lung cancer cells harbor multiple chromosomal abnormalities, including mutations, amplifications, insertions, deletions, and translocations. One of the earliest sets of oncogenes found to be aberrant was the MYC family of transcription factors (MYC, MYCN, and MYCL). MYC is most frequently activated via gene amplification or transcriptional dysregulation in both SCLC and NSCLC. Currently, there are no MYC-specific drugs. Among lung cancer histologies, adenocarcinomas have been the most extensively catalogued for recurrent genomic gains and losses as well as for somatic mutations (Fig. 107-2, driver mutations in adenocarcinomas). While multiple different kinds of aberrations have been found, a major class involves "driver mutations": mutations that occur in genes encoding signaling proteins that, when aberrant, drive the initiation and maintenance of tumor cells. Importantly, driver mutations can serve as potential Achilles' heels for tumors if their gene products can be targeted appropriately. For example, one set of mutations involves EGFR, which belongs to the ERBB (HER) family of proto-oncogenes, including EGFR (ERBB1), HER2/neu (ERBB2), HER3 (ERBB3), and HER4 (ERBB4). These genes encode cell-surface receptors consisting of an extracellular ligand-binding domain, a transmembrane structure, and an intracellular tyrosine kinase (TK) domain. The binding of ligand to receptor induces receptor dimerization and TK autophosphorylation, initiating a cascade of intracellular events leading to increased cell proliferation, angiogenesis, and metastasis and a decrease in apoptosis. Lung adenocarcinomas can arise when tumors express mutant EGFR; these same tumors display high sensitivity to small-molecule EGFR TK inhibitors (TKIs). Additional examples of driver mutations in lung adenocarcinoma include the GTPase KRAS, the serine-threonine kinase BRAF, and the lipid kinase PIK3CA. 
More recently, additional subsets of lung adenocarcinoma have been identified, defined by the presence of specific chromosomal rearrangements resulting in the aberrant activation of the TKs ALK, ROS1, and RET. Notably, most driver mutations in lung cancer appear to be mutually exclusive, suggesting that acquisition of one of these mutations is sufficient to drive tumorigenesis. Although driver mutations have mostly been found in adenocarcinomas, three potential molecular targets have recently been identified in squamous cell lung carcinomas: FGFR1 amplification, DDR2 mutations, and PIK3CA mutations/PTEN loss (Table 107-1). Together, these potentially "actionable" defects occur in up to 50% of squamous carcinomas. A large number of tumor-suppressor genes have also been identified that are inactivated during the pathogenesis of lung cancer. These include TP53, RB1, RASSF1A, CDKN2A/B, LKB1 (STK11), and FHIT. Nearly 90% of SCLCs harbor mutations in TP53 and RB1. Several tumor-suppressor genes on chromosome 3p appear to be involved in nearly all lung cancers. Allelic loss for this region occurs very early in lung cancer pathogenesis, including in histologically normal, smoking-damaged lung epithelium. In lung cancer, clinical outcome is related to the stage at diagnosis, and hence it is generally assumed that early detection of occult tumors will lead to improved survival. Early detection is a process that involves screening tests, surveillance, diagnosis, and early treatment. Screening refers to the use of simple tests across a healthy population to identify individuals who harbor asymptomatic disease. For a screening program to be successful, there must be a high burden of disease within the target population; the test must be sensitive, specific, accessible, and cost-effective; and there must be effective treatment that can reduce mortality. 
With any screening procedure, it is important to consider the possible influence of lead-time bias (detecting the cancer earlier without an effect on survival), length-time bias (indolent cancers are detected on screening and may not affect survival, whereas aggressive cancers are likely to cause symptoms earlier and are less likely to be detected), and overdiagnosis (diagnosing cancers so slow growing that they are unlikely to cause the death of the patient) (Chap. 100). Because a majority of lung cancer patients present with advanced disease beyond the scope of surgical resection, there is understandable skepticism about the value of screening in this condition. Indeed, randomized controlled trials conducted in the 1960s to 1980s using screening chest x-rays (CXR), with or without sputum cytology, reported no impact on lung cancer–specific mortality in patients characterized as high risk (males age ≥45 years with a smoking history). These studies have been criticized for their design, statistical analyses, and outdated imaging modalities. The results of the more recently conducted Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) are consistent with these earlier reports. Initiated in 1993, participants in the PLCO lung cancer screening trial received annual CXR screening for 4 years, whereas participants in the usual-care group received no interventions other than their customary medical care. The diagnostic follow-up of positive screening results was determined by participants and their physicians. The PLCO trial differed from previous lung cancer screening studies in that women and never smokers were eligible. The study was designed to detect a 10% reduction in lung cancer mortality in the interventional group. A total of 154,901 individuals between 55 and 74 years of age were enrolled (77,445 assigned to annual CXR screening; 77,456 assigned to usual care). 
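The lead-time bias defined above is easy to make concrete with a toy calculation. This is entirely illustrative: the ages and the assumption that detection does not change the date of death are ours, not the chapter's.

```python
# Toy model of lead-time bias: if screening only advances the date of
# diagnosis while the date of death is unchanged, measured survival from
# diagnosis lengthens with no real benefit to the patient.

def survival_from_diagnosis(age_at_diagnosis: float, age_at_death: float) -> float:
    """Apparent survival, in years, measured from diagnosis to death."""
    return age_at_death - age_at_diagnosis

AGE_AT_DEATH = 70.0  # fixed by tumor biology in this toy model

clinical = survival_from_diagnosis(68.0, AGE_AT_DEATH)  # symptoms at age 68
screened = survival_from_diagnosis(65.0, AGE_AT_DEATH)  # screen-detected at 65

print(clinical)  # 2.0 years of "survival"
print(screened)  # 5.0 years of "survival" -- the extra 3 years are pure lead time
```

This is why screening trials such as PLCO and the NLST are judged on disease-specific mortality rather than survival from diagnosis.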
Participant demographics and tumor characteristics were well balanced between the two groups. Through 13 years of follow-up, cumulative lung cancer incidence rates (20.1 vs 19.2 per 10,000 person-years; rate ratio [RR], 1.05; 95% confidence interval [CI], 0.98–1.12) and lung cancer mortality (n = 1213 vs n = 1230) were nearly identical between the two groups. The stage and histology of detected cancers in the two groups also were similar. These data corroborate previous recommendations against CXR screening for lung cancer. In contrast to CXR, low-dose, noncontrast, thin-slice spiral chest computed tomography (LDCT) has emerged as an effective tool to screen for lung cancer. In nonrandomized studies conducted in the 1990s, LDCT scans were shown to detect more lung nodules and cancers than standard CXR in selected high-risk populations (e.g., age ≥60 years and a smoking history of ≥10 pack-years). Notably, up to 85% of the lung cancers discovered in these trials were classified as stage I disease and therefore considered potentially curable with surgical resection. These data prompted the National Cancer Institute (NCI) to initiate the National Lung Screening Trial (NLST), a randomized study designed to determine whether LDCT screening could reduce mortality from lung cancer in high-risk populations compared with standard posteroanterior CXR. High-risk individuals were defined as those between 55 and 74 years of age with a ≥30 pack-year history of cigarette smoking; former smokers must have quit within the previous 15 years. Excluded from the trial were individuals with a previous lung cancer diagnosis, a history of hemoptysis, an unexplained weight loss of >15 lb in the preceding year, or a chest CT within 18 months of enrollment. A total of 53,454 persons were enrolled and randomized to three annual screenings (LDCT screening, n = 26,722; CXR screening, n = 26,732). 
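As a sketch, the NLST entry criteria just described can be encoded directly. This is a simplified illustration with a function name of our own; the trial's exclusion criteria (prior lung cancer, hemoptysis, weight loss, recent chest CT) are deliberately omitted.

```python
# Simplified sketch of NLST high-risk entry criteria: age 55-74,
# >=30 pack-years, and (for former smokers) quit within the past 15 years.
# Exclusion criteria from the trial are not modeled here.

def nlst_eligible(age, pack_years, years_since_quit=None):
    """years_since_quit is None for current smokers."""
    if not 55 <= age <= 74:
        return False          # outside the trial's age window
    if pack_years < 30:
        return False          # insufficient cumulative exposure
    if years_since_quit is not None and years_since_quit > 15:
        return False          # quit too long ago to qualify
    return True

print(nlst_eligible(60, 40, None))  # True: current heavy smoker
print(nlst_eligible(60, 40, 20))    # False: quit more than 15 years ago
print(nlst_eligible(50, 40, 5))     # False: below the age window
```

Encoding the criteria this way also hints at why "modified versions of these criteria" (mentioned below in the chapter) are an active area of work: each threshold is a blunt cutoff on a continuous risk.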
Any noncalcified nodule measuring ≥4 mm in any diameter on LDCT, and any noncalcified nodule or mass on CXR images, was classified as "positive." Participating radiologists had the option of not calling a final screen positive if a noncalcified nodule had been stable on the three screening exams. Overall, 39.1% of participants in the LDCT group and 16% in the CXR group had at least one positive screening result. Of those who screened positive, the false-positive rate was 96.4% in the LDCT group and 94.5% in the CXR group; this was consistent across all three rounds. In the LDCT group, 1060 cancers were identified, compared with 941 cancers in the CXR group (645 vs 572 per 100,000 person-years; RR, 1.13; 95% CI, 1.03–1.23). Nearly twice as many early-stage IA cancers were detected in the LDCT group as in the CXR group (40% vs 21%). The overall rates of lung cancer death were 247 and 309 deaths per 100,000 participants in the LDCT and CXR groups, respectively, representing a 20% reduction in lung cancer mortality in the LDCT-screened population (95% CI, 6.8–26.7%; p = .004). Compared with the CXR group, the rate of death in the LDCT group from any cause was reduced by 6.7% (95% CI, 1.2–13.6; p = .02) (Table 107-2; modified from PB Bach et al: JAMA 307:2418, 2012). The number needed to screen (NNTS) to prevent one lung cancer death was calculated to be 320. LDCT screening for lung cancer comes with known risks, including a high rate of false-positive results, false-negative results, potential for unnecessary follow-up testing, radiation exposure, overdiagnosis, changes in anxiety and quality of life, and substantial financial costs. By far the biggest challenge confronting the use of CT screening is the high false-positive rate. 
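Two of the quoted NLST figures follow from simple arithmetic, sketched here with helper names of our own: the positive predictive value implied by a 96.4% false-positive rate among positive LDCT screens, and the incidence rate ratio from 645 vs 572 cancers per 100,000 person-years.

```python
# Working through the NLST screening numbers quoted above.
# Helper names are ours; the input figures come from the text.

def positive_predictive_value(false_positive_fraction):
    """Fraction of positive screens that are true cancers."""
    return 1.0 - false_positive_fraction

def rate_ratio(rate_exposed, rate_control):
    """Ratio of two incidence rates (same denominator assumed)."""
    return rate_exposed / rate_control

ppv_ldct = positive_predictive_value(0.964)   # 96.4% of positives were false
rr_incidence = rate_ratio(645, 572)           # cancers per 100,000 person-years

print(round(ppv_ldct, 3))       # 0.036 -> only ~3.6% of positive screens are cancer
print(round(rr_incidence, 2))   # 1.13 -> matches the RR quoted above
```

The low PPV, despite a sensitive test, is the arithmetic behind "the biggest challenge confronting the use of CT screening is the high false-positive rate."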
False positives can have a substantial impact on patients through the expense and risk of unneeded further evaluation and emotional stress. The management of these patients usually consists of serial CT scans over time to see if the nodules grow, attempted fine-needle aspirates, or surgical resection. At $300 per scan (NCI estimated cost), the outlay for initial LDCT alone could run into the billions of dollars annually, an expense that only further escalates when factoring in the various downstream expenditures an individual might incur in the assessment of positive findings. A formal cost-effectiveness analysis of the NLST is expected soon and should help resolve this crucial concern. Despite the aforementioned caveats, screening of individuals who meet the NLST criteria for lung cancer risk (or in some cases, modified versions of these criteria) seems warranted, provided that comprehensive multidisciplinary coordinated care and follow-up similar to those provided to NLST participants are available. Algorithms to improve candidate selection are under development. When discussing the option of LDCT screening, use of absolute risks rather than relative risks is helpful because studies indicate the public can process absolute terminology more effectively than relative risk projections. A useful guide has been developed by the NCI to help patients and physicians assess the benefits and harms of LDCT screening for lung cancer (Table 107-3). Finally, even a small negative effect of screening on smoking behavior (either lower quit rates or higher recidivism) could easily offset the potential gains in a population. Fortunately, no such impact has been reported to date. Nonetheless, smoking cessation must be included as an indispensable component of any screening program. [Table 107-3: Benefits (how did CT scans help compared to CXR?) and harms (what problems did CT scans cause compared to CXR?) of LDCT screening. Abbreviations: CXR, chest x-ray; LDCT, low-dose computed tomography; NLST, National Lung Screening Trial.] 
Source: Modified from S Woloshin et al: N Engl J Med 367:1677, 2012. Over half of all patients diagnosed with lung cancer present with locally advanced or metastatic disease at the time of diagnosis. The majority of patients present with signs, symptoms, or laboratory abnormalities that can be attributed to the primary lesion, local tumor growth, invasion or obstruction of adjacent structures, growth at distant metastatic sites, or a paraneoplastic syndrome (Tables 107-4 and 107-5). The prototypical lung cancer patient is a current or former smoker of either sex, usually in the seventh decade of life. A history of chronic cough with or without hemoptysis in a current or former smoker aged 40 years or older with chronic obstructive pulmonary disease (COPD) should prompt a thorough investigation for lung cancer even in the face of a normal CXR. A persistent pneumonia without constitutional symptoms that is unresponsive to repeated courses of antibiotics also should prompt an evaluation for the underlying cause. Lung cancer arising in a lifetime never-smoker is more common in women and East Asians. Such patients also tend to be younger than their smoking counterparts at the time of diagnosis. The clinical presentation of lung cancer in never-smokers tends to mirror that of current and former smokers. Patients with central or endobronchial growth of the primary tumor may present with cough, hemoptysis, wheeze, stridor, dyspnea, or postobstructive pneumonitis. Peripheral growth of the primary tumor may cause pain from pleural or chest wall involvement, dyspnea on a restrictive basis, and symptoms of a lung abscess resulting from tumor cavitation. 
Regional spread of tumor in the thorax (by contiguous growth or by metastasis to regional lymph nodes) may cause tracheal obstruction, esophageal compression with dysphagia, recurrent laryngeal nerve paralysis with hoarseness, phrenic nerve palsy with elevation of the hemidiaphragm and dyspnea, and sympathetic nerve paralysis with Horner's syndrome (enophthalmos, ptosis, miosis, and anhidrosis). Malignant pleural effusions can cause pain, dyspnea, or cough. Pancoast (or superior sulcus tumor) syndrome results from local extension of a tumor growing in the apex of the lung with involvement of the eighth cervical and first and second thoracic nerves, and presents with shoulder pain that characteristically radiates in the ulnar distribution of the arm, often with radiologic destruction of the first and second ribs. Often Horner's syndrome and Pancoast syndrome coexist. Other problems of regional spread include superior vena cava syndrome from vascular obstruction; pericardial and cardiac extension with resultant tamponade, arrhythmia, or cardiac failure; lymphatic obstruction with resultant pleural effusion; and lymphangitic spread through the lungs with hypoxemia and dyspnea. [Table 107-4: Symptoms and signs of lung cancer with ranges of frequency. Source: Reproduced with permission from MA Beckles: Chest 123:97-104, 2003.] [Table 107-5: Clinical findings suggesting metastatic disease. Symptoms elicited in history: constitutional (weight loss >10 lb); musculoskeletal; neurologic (headaches, syncope, seizures, extremity weakness, recent change in mental status). Signs: hoarseness, neurologic signs, papilledema. Routine laboratory tests: hematocrit <40% in men or <35% in women; elevated alkaline phosphatase, GGT, SGOT. Abbreviations: GGT, gamma-glutamyltransferase; SGOT, serum glutamic-oxaloacetic transaminase. Source: Reproduced with permission from GA Silvestri et al: Chest 123(1 Suppl):147S, 2003.] 
In addition, lung cancer can spread transbronchially, producing tumor growth along multiple alveolar surfaces with impairment of gas exchange, respiratory insufficiency, dyspnea, hypoxemia, and sputum production. Constitutional symptoms may include anorexia, weight loss, weakness, fever, and night sweats. Apart from the brevity of symptom duration, these parameters fail to clearly distinguish SCLC from NSCLC or even from neoplasms metastatic to the lungs. Extrathoracic metastatic disease is found at autopsy in more than 50% of patients with squamous carcinoma, 80% of patients with adenocarcinoma and large-cell carcinoma, and more than 95% of patients with SCLC. Approximately one-third of patients present with symptoms resulting from distant metastases. Lung cancer metastases may occur in virtually every organ system, and the site of metastatic involvement largely determines the other symptoms. Patients with brain metastases may present with headache, nausea and vomiting, seizures, or neurologic deficits. Patients with bone metastases may present with pain, pathologic fractures, or cord compression; the latter may also occur with epidural metastases. Individuals with bone marrow invasion may present with cytopenias or leukoerythroblastosis. Those with liver metastases may present with hepatomegaly, right upper quadrant pain, fever, anorexia, and weight loss. Liver dysfunction and biliary obstruction are rare. Adrenal metastases are common but rarely cause pain or adrenal insufficiency unless they are large. Paraneoplastic syndromes are common in patients with lung cancer, especially those with SCLC, and may be the presenting finding or the first sign of recurrence. In addition, paraneoplastic syndromes may mimic metastatic disease and, unless detected, lead to inappropriate palliative rather than curative treatment. Often the paraneoplastic syndrome may be relieved with successful treatment of the tumor. 
In some cases, the pathophysiology of the paraneoplastic syndrome is known, particularly when a hormone with biologic activity is secreted by the tumor. However, in many cases, the pathophysiology is unknown. Systemic symptoms of anorexia, cachexia, weight loss (seen in 30% of patients), fever, and suppressed immunity are paraneoplastic syndromes of unknown or at least poorly defined etiology. Weight loss greater than 10% of total body weight is considered a poor prognostic sign. Endocrine syndromes are seen in 12% of patients; hypercalcemia resulting from ectopic production of parathyroid hormone (PTH) or, more commonly, PTH-related peptide is the most common life-threatening metabolic complication of malignancy, primarily occurring with squamous cell carcinomas of the lung. Clinical symptoms include nausea, vomiting, abdominal pain, constipation, polyuria, thirst, and altered mental status. Hyponatremia may be caused by the syndrome of inappropriate secretion of antidiuretic hormone (SIADH) or possibly by atrial natriuretic peptide (ANP). SIADH resolves within 1–4 weeks of initiating chemotherapy in the vast majority of cases. During this period, serum sodium can usually be managed and maintained above 128 mEq/L via fluid restriction. Demeclocycline can be a useful adjunctive measure when fluid restriction alone is insufficient. Vasopressin receptor antagonists such as tolvaptan also have been used in the management of SIADH. However, there are significant limitations to the use of tolvaptan, including liver injury and overly rapid correction of the hyponatremia, which can lead to irreversible neurologic injury. Likewise, the cost of tolvaptan may be prohibitive (as high as $300 per tablet in some areas). Of note, patients with ectopic ANP may have worsening hyponatremia if sodium intake is not concomitantly increased. 
Accordingly, if hyponatremia fails to improve or worsens after 3–4 days of adequate fluid restriction, plasma levels of ANP should be measured to determine the causative syndrome. Ectopic secretion of ACTH by SCLC and pulmonary carcinoids usually results in additional electrolyte disturbances, especially hypokalemia, rather than the changes in body habitus that occur in Cushing's syndrome from a pituitary adenoma. Treatment with standard medications, such as metyrapone and ketoconazole, is largely ineffective due to extremely high cortisol levels. The most effective strategy for management of the Cushing's syndrome is effective treatment of the underlying SCLC. Bilateral adrenalectomy may be considered in extreme cases. Skeletal–connective tissue syndromes include clubbing in 30% of cases (usually NSCLCs) and hypertrophic pulmonary osteoarthropathy in 1–10% of cases (usually adenocarcinomas). Patients may develop periostitis, causing pain, tenderness, and swelling over the affected bones and a positive bone scan. Neurologic-myopathic syndromes are seen in only 1% of patients but are dramatic and include the myasthenic Eaton-Lambert syndrome and retinal blindness with SCLC, whereas peripheral neuropathies, subacute cerebellar degeneration, cortical degeneration, and polymyositis are seen with all lung cancer types. Many of these are caused by autoimmune responses, such as the development of anti-voltage-gated calcium channel antibodies in Eaton-Lambert syndrome. Patients with this disorder present with proximal muscle weakness, usually in the lower extremities, occasional autonomic dysfunction, and, rarely, cranial nerve symptoms or involvement of the bulbar or respiratory muscles. Depressed deep tendon reflexes are frequently present. In contrast to patients with myasthenia gravis, strength improves with serial effort. Some patients who respond to chemotherapy will have resolution of the neurologic abnormalities. Thus, chemotherapy is the initial treatment of choice. 
Paraneoplastic encephalomyelitis and sensory neuropathies, cerebellar degeneration, limbic encephalitis, and brainstem encephalitis occur in SCLC in association with a variety of antineuronal antibodies such as anti-Hu, anti-CRMP5, and ANNA-3. Paraneoplastic cerebellar degeneration may be associated with anti-Hu, anti-Yo, or P/Q-type calcium channel autoantibodies. Coagulation, thrombotic, or other hematologic manifestations occur in 1–8% of patients and include migratory venous thrombophlebitis (Trousseau's syndrome), nonbacterial thrombotic (marantic) endocarditis with arterial emboli, and disseminated intravascular coagulation with hemorrhage, anemia, granulocytosis, and leukoerythroblastosis. Thrombotic disease complicating cancer is usually a poor prognostic sign. Cutaneous manifestations such as dermatomyositis and acanthosis nigricans are uncommon (1%), as are the renal manifestations of nephrotic syndrome and glomerulonephritis (≤1%). Tissue sampling is required to confirm a diagnosis in all patients with suspected lung cancer. In patients with suspected metastatic disease, a biopsy of the most distant site of disease is preferred for tissue confirmation. Given the greater emphasis placed on molecular testing for NSCLC patients, a core biopsy is preferred to ensure adequate tissue for analysis. Tumor tissue may be obtained via minimally invasive techniques such as bronchial or transbronchial biopsy during fiberoptic bronchoscopy, by fine-needle aspiration or percutaneous biopsy using image guidance, or via endobronchial ultrasound (EBUS)-guided biopsy. Depending on the location, lymph node sampling may occur via transesophageal endoscopic ultrasound-guided biopsy (EUS), EBUS, or blind biopsy. In patients with clinically palpable disease such as a lymph node or skin metastasis, a biopsy may be obtained. 
In patients with suspected metastatic disease, a diagnosis may be confirmed by percutaneous biopsy of a soft tissue mass, lytic bone lesion, bone marrow, pleural or liver lesion, or an adequate cell block obtained from a malignant pleural effusion. In patients with a suspected malignant pleural effusion, if the initial thoracentesis is negative, a repeat thoracentesis is warranted. Although the majority of pleural effusions in these patients are due to malignant disease, particularly if they are exudative or bloody, some may be parapneumonic. In the absence of distant disease, such patients should be considered for possible curative treatment. The diagnostic yield of any biopsy depends on several factors, including the location (accessibility) of the tumor, tumor size, tumor type, and technical aspects of the diagnostic procedure, including the experience level of the bronchoscopist and pathologist. In general, central lesions such as squamous cell carcinomas, small-cell carcinomas, or endobronchial lesions such as carcinoid tumors are more readily diagnosed by bronchoscopic examination, whereas peripheral lesions such as adenocarcinomas and large-cell carcinomas are more amenable to transthoracic biopsy. Diagnostic accuracy for SCLC versus NSCLC for most specimens is excellent, with lesser accuracy for subtypes of NSCLC. Bronchoscopic specimens include bronchial brushings, bronchial washings, bronchoalveolar lavage, transbronchial fine-needle aspiration (FNA), and core biopsy. For more accurate histologic classification, mutation analysis, or investigational purposes, reasonable efforts (e.g., a core needle biopsy) should be made to obtain more tissue than what is contained in a routine cytology specimen obtained by FNA. Overall sensitivity for the combined use of bronchoscopic methods is approximately 80%, and together with tissue biopsy, the yield increases to 85–90%. Like transbronchial core biopsy specimens, transthoracic core biopsy specimens are also preferred. 
Sensitivity is highest for larger lesions and peripheral tumors. In general, core biopsy specimens, whether transbronchial, transthoracic, or EUS-guided, are superior to other specimen types. This is primarily due to the higher percentage of tumor cells with fewer confounding factors such as obscuring inflammation and reactive nonneoplastic cells. Sputum cytology is inexpensive and noninvasive but has a lower yield than other specimen types due to poor preservation of the cells and more variability in acquiring a good-quality specimen. The yield for sputum cytology is highest for larger and centrally located tumors such as squamous cell carcinoma and small-cell carcinoma histology. The specificity for sputum cytology averages close to 100%, although sensitivity is generally <70%. The accuracy of sputum cytology improves with increased numbers of specimens analyzed. Consequently, analysis of at least three sputum specimens is recommended. Lung cancer staging consists of two parts: first, a determination of the location of the tumor and possible metastatic sites (anatomic staging), and second, an assessment of a patient’s ability to withstand various antitumor treatments (physiologic staging). All patients with lung cancer should have a complete history and physical examination, with evaluation of all other medical problems, determination of performance status, and history of weight loss. The most significant dividing line is between those patients who are candidates for surgical resection and those who are inoperable but will benefit from chemotherapy, radiation therapy, or both. Staging with regard to a patient’s potential for surgical resection is principally applicable to NSCLC. The accurate staging of patients with NSCLC is essential for determining the appropriate treatment in patients with resectable disease and avoiding unnecessary surgical procedures in patients with advanced disease (Fig. 107-3). 
All patients with NSCLC should undergo initial radiographic imaging with CT scan, positron emission tomography (PET), or preferably CT-PET. [FIGURE 107-3 Algorithm for management of non-small-cell lung cancer. Workup: complete history and physical examination; determination of performance status and weight loss; complete blood count with platelet determination; measurement of serum electrolytes, glucose, and calcium; renal and liver function tests; PET scan to evaluate the mediastinum and detect metastatic disease; MRI brain if clinically indicated; pulmonary function tests and arterial blood-gas measurements; cardiopulmonary exercise testing if performance status or pulmonary function tests are borderline; coagulation tests. If negative for metastatic disease and there is no contraindication to surgery or to radiation therapy combined with chemotherapy, refer to a surgeon for evaluation of the mediastinum and possible resection: stage IA, surgery alone; stage IB, surgery alone if <4 cm or surgery followed by adjuvant chemotherapy if >4 cm; stage II or III with N0 or N1 nodes, surgery followed by adjuvant chemotherapy; N2 or N3 nodes, no surgery, treatment with combined chemoradiation therapy. Single vs multiple suspicious lesions detected on imaging: biopsy lesion; see Fig. 107-6. MRI, magnetic resonance imaging; PET, positron emission tomography.] PET scanning attempts to identify sites of malignancy based on glucose metabolism by measuring the uptake of 18F-fluorodeoxyglucose (FDG). Rapidly dividing cells, presumably in the lung tumors, will preferentially take up 18F-FDG and appear as a "hot spot." To date, PET has been used mostly for staging and detection of metastases in lung cancer and in the detection of nodules >15 mm in diameter. Combined 18F-FDG PET-CT imaging has been shown to improve the accuracy of staging in NSCLC compared to visual correlation of PET and CT or either study alone. 
CT-PET has been found to be superior in identifying pathologically enlarged mediastinal lymph nodes and extrathoracic metastases. A standardized uptake value (SUV) of >2.5 on PET is highly suspicious for malignancy. False negatives can be seen in diabetes, in lesions <8 mm, and in slow-growing tumors (e.g., carcinoid tumors or well-differentiated adenocarcinoma). False positives can be seen in certain infections and granulomatous diseases (e.g., tuberculosis). Thus, PET should never be used alone to diagnose lung cancer, mediastinal involvement, or metastases; confirmation with tissue biopsy is required. For brain metastases, magnetic resonance imaging (MRI) is the most effective method. MRI can also be useful in selected circumstances, such as superior sulcus tumors, to rule out brachial plexus involvement, but in general, MRI does not play a major role in NSCLC staging. In patients with NSCLC, the following are contraindications to potentially curative resection: extrathoracic metastases; superior vena cava syndrome; vocal cord paralysis and, in most cases, phrenic nerve paralysis; malignant pleural effusion; cardiac tamponade; tumor within 2 cm of the carina (potentially curable with combined chemoradiotherapy); metastasis to the contralateral lung; metastases to supraclavicular lymph nodes; contralateral mediastinal node metastases (potentially curable with combined chemoradiotherapy); and involvement of the main pulmonary artery. In situations where it will make a difference in treatment, abnormal scan findings require tissue confirmation of malignancy so that patients are not precluded from having potentially curative therapy. The best predictor of metastatic disease remains a careful history and physical examination. If signs, symptoms, or findings from the physical examination suggest the presence of malignancy, then sequential imaging starting with the most appropriate study should be performed. 
If the findings from the clinical evaluation are negative, then imaging studies beyond CT-PET are unnecessary and the search for metastatic disease is complete. More controversial is how one should assess patients with known stage III disease. Because these patients are more likely to have asymptomatic occult metastatic disease, current guidelines recommend a more extensive imaging evaluation, including imaging of the brain with either CT scan or MRI. In patients in whom distant metastatic disease has been ruled out, lymph node status needs to be assessed by a combination of radiographic imaging, minimally invasive techniques such as those mentioned above, and/or invasive techniques such as mediastinoscopy, mediastinotomy, thoracoscopy, or thoracotomy. Approximately one-quarter to one-half of patients diagnosed with NSCLC will have mediastinal lymph node metastases at the time of diagnosis. Lymph node sampling is recommended in all patients with enlarged nodes detected by CT or PET scan and in patients with large tumors or tumors occupying the inner third of the lung. The extent of mediastinal lymph node involvement is important in determining the appropriate treatment strategy: surgical resection followed by adjuvant chemotherapy versus combined chemoradiation alone (see below). A standard nomenclature for referring to the location of lymph nodes involved with lung cancer has evolved (Fig. 107-4). In SCLC patients, current staging recommendations include a CT scan of the chest and abdomen (because of the high frequency of hepatic and adrenal involvement), MRI of the brain (positive in 10% of asymptomatic patients), and radionuclide bone scan if symptoms or signs suggest disease involvement in these areas (Fig. 107-5). 
Although there are fewer data on the use of CT-PET in SCLC, the most recent American College of Chest Physicians Evidence-Based Clinical Practice Guidelines recommend PET scans in patients with clinical stage I SCLC who are being considered for curative-intent surgical resection. In addition, invasive mediastinal staging and extrathoracic imaging (head MRI/CT and PET or abdominal CT plus bone scan) are also recommended for patients with clinical stage I SCLC if curative-intent surgical resection is contemplated. Some practice guidelines also recommend the use of PET scanning in the staging of SCLC patients who are potential candidates for the addition of thoracic radiotherapy to chemotherapy. Bone marrow biopsies and aspirations are rarely performed now given the low incidence of isolated bone marrow metastases. Confirmation of metastatic disease, ipsilateral or contralateral lung nodules, or metastases beyond the mediastinum may be achieved by the same modalities recommended earlier for patients with NSCLC. If a patient has signs or symptoms of spinal cord compression (pain, weakness, paralysis, urinary retention), a spinal CT or MRI scan and examination of cerebrospinal fluid cytology should be performed. If metastases are evident on imaging, a neurosurgeon should be consulted for possible palliative surgical resection and/or a radiation oncologist should be consulted for palliative radiotherapy to the site of compression. If signs or symptoms of leptomeningitis develop at any time in a patient with lung cancer, an MRI of the brain and spinal cord should be performed, as well as a spinal tap for detection of malignant cells. If the spinal tap is negative, a repeat spinal tap should be considered. There is currently no approved therapy for the treatment of leptomeningeal disease. The tumor-node-metastasis (TNM) international staging system provides useful prognostic information and is used to stage all patients with NSCLC. 
The various T (tumor size), N (regional node involvement), and M (presence or absence of distant metastasis) descriptors are combined to form different stage groups (Tables 107-6 and 107-7). The previous edition of the TNM staging system for lung cancer was developed based on a relatively small database of patients from a single institution. The latest (seventh) edition of the TNM staging system went into effect in 2010 and was developed using a much more robust database of more than 100,000 patients with lung cancer who were treated in multiple countries between 1990 and 2000. Data from 67,725 patients with NSCLC were then used to reevaluate the prognostic value of the TNM descriptors (Table 107-8). The major distinction between the sixth and seventh editions of the international staging system lies within the T classification: T1 tumors are divided into tumors ≤2 cm in size, as these patients were found to have a better prognosis compared to patients with tumors >2 cm but ≤3 cm. T2 tumors are divided into those that are >3 cm but ≤5 cm and those that are >5 cm but ≤7 cm. Tumors that are >7 cm are considered T3 tumors. T3 tumors also include tumors with invasion into local structures such as the chest wall and diaphragm and additional nodules in the same lobe. T4 tumors include tumors of any size with invasion into the mediastinum, heart, great vessels, trachea, or esophagus or multiple nodules in the ipsilateral lung. No changes have been made to the classification of lymph node involvement (N). Patients with metastasis may be classified as M1a (malignant pleural or pericardial effusion, pleural nodules, or nodules in the contralateral lung) or M1b (distant metastasis, e.g., bone, liver, adrenal, or brain metastasis). 
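The size-based cutpoints above lend themselves to a simple lookup; a minimal sketch (the function name is ours, and the invasion- and nodule-based criteria that can upstage a tumor to T3 or T4 regardless of size are deliberately not modeled):

```python
def t_by_size(size_cm: float) -> str:
    """Seventh-edition TNM T descriptor from tumor size alone.
    Invasion of local structures or satellite nodules, which can
    upstage to T3/T4 at any size, are not considered here."""
    if size_cm <= 2:
        return "T1a"
    if size_cm <= 3:
        return "T1b"
    if size_cm <= 5:
        return "T2a"
    if size_cm <= 7:
        return "T2b"
    return "T3"  # any tumor >7 cm is at least T3

print(t_by_size(1.8), t_by_size(4.0), t_by_size(7.5))  # T1a T2a T3
```

In practice the size-derived descriptor is only a starting point; the full T, N, and M assignments are then combined into the stage groupings of Table 107-7.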
Based on these data, approximately one-third of patients have localized disease that can be treated with curative intent (surgery or radiotherapy), one-third have local or regional disease that may or may not be amenable to a curative attempt, and one-third have metastatic disease at the time of diagnosis. In patients with SCLC, it is now recommended that both the Veterans Administration system and the American Joint Committee on Cancer/International Union Against Cancer seventh edition (TNM) system be used to classify the tumor stage. The Veterans Administration system is a distinct two-stage system dividing patients into those with limited- or extensive-stage disease. Patients with limited-stage disease (LD) have cancer that is confined to the ipsilateral hemithorax and can be encompassed within a tolerable radiation port. Thus, contralateral supraclavicular nodes, recurrent laryngeal nerve involvement, and superior vena caval obstruction can all be part of LD. [FIGURE 107-4 Lymph node stations in staging non-small-cell lung cancer. The International Association for the Study of Lung Cancer (IASLC) lymph node map, including the proposed grouping of lymph node stations into "zones" for the purposes of prognostic analyses (N2 = single digit, ipsilateral; N3 = single digit, contralateral or supraclavicular). a., artery; Ao, aorta; Inf. pulm. ligt., inferior pulmonary ligament; n., nerve; PA, pulmonary artery; v., vein.] Patients with extensive-stage disease (ED) have overt metastatic disease by imaging or physical examination. Cardiac tamponade, malignant pleural effusion, and bilateral pulmonary parenchymal involvement generally qualify disease as ED, because the involved organs cannot be encompassed safely or effectively within a single radiation therapy port. 
Sixty to 70% of patients are diagnosed with ED at presentation. The TNM staging system is preferred in the rare SCLC patient presenting with what appears to be clinical stage I disease (see above). Patients with lung cancer often have other comorbid conditions related to smoking, including cardiovascular disease and COPD. To improve their preoperative condition, correctable problems (e.g., anemia, electrolyte and fluid disorders, infections, cardiac disease, and arrhythmias) should be addressed, appropriate chest physical therapy should be instituted, and patients should be encouraged to stop smoking. Because it is not always possible to predict whether a lobectomy or pneumonectomy will be required until the time of operation, a conservative approach is to restrict surgical resection to patients who could potentially tolerate a pneumonectomy. Patients with a forced expiratory volume in 1 s (FEV1) of greater than 2 L or greater than 80% of predicted can tolerate a pneumonectomy, and those with an FEV1 greater than 1.5 L have adequate reserve for a lobectomy. In patients with borderline lung function but a resectable tumor, cardiopulmonary exercise testing can be performed as part of the physiologic evaluation. This test allows an estimate of the maximal oxygen consumption (Vo2max). A Vo2max <15 mL/(kg·min) predicts a higher risk of postoperative complications. 
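As a rough illustration only, the pulmonary thresholds quoted above can be condensed into a screening helper (the function and its return strings are ours, a sketch of the stated cutoffs and not a validated clinical rule):

```python
from typing import Optional


def pulmonary_reserve(fev1_l: float, fev1_pct_pred: float,
                      vo2_max: Optional[float] = None) -> str:
    """Condenses the thresholds in the text: FEV1 >2 L or >80% predicted
    suggests tolerance of pneumonectomy; FEV1 >1.5 L suggests adequate
    reserve for lobectomy; Vo2max <15 mL/(kg*min) on cardiopulmonary
    exercise testing predicts higher postoperative risk."""
    if vo2_max is not None and vo2_max < 15:
        return "higher risk of postoperative complications"
    if fev1_l > 2.0 or fev1_pct_pred > 80:
        return "may tolerate pneumonectomy"
    if fev1_l > 1.5:
        return "adequate reserve for lobectomy"
    return "consider limited resection or nonsurgical options"


print(pulmonary_reserve(2.5, 90))          # may tolerate pneumonectomy
print(pulmonary_reserve(1.8, 60))          # adequate reserve for lobectomy
print(pulmonary_reserve(1.8, 60, vo2_max=12))  # higher risk of postoperative complications
```

Note that in practice exercise testing is reserved for borderline cases; the ordering of checks here is purely illustrative.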
Patients deemed unable to tolerate lobectomy or pneumonectomy from a pulmonary functional standpoint may be candidates for more limited resections, such as wedge or anatomic segmental resection, although such procedures are associated with significantly higher rates of local recurrence and a trend toward decreased overall survival. [FIGURE 107-5 Algorithm for management of small-cell lung cancer. Workup: complete history and physical examination; determination of performance status and weight loss; complete blood count with platelet determination; measurement of serum electrolytes, glucose, and calcium; renal and liver function tests; CT scan of chest, abdomen, and pelvis to evaluate for metastatic disease; MRI of brain; bone scan if clinically indicated. A single lesion detected on imaging should be biopsied (for clinical stage I SCLC, see "Anatomic Staging of Patients with Lung Cancer"). If negative for metastatic disease and there is no contraindication, combined-modality treatment with platinum-based therapy plus etoposide and radiation therapy is given (sequential chemotherapy and radiation if combined treatment is contraindicated); if positive for metastatic disease, chemotherapy alone and/or radiation therapy for palliation of symptoms. Note: Regardless of disease stage, patients who have a good response to initial therapy should be considered for prophylactic cranial irradiation after therapy is completed. CT, computed tomography; MRI, magnetic resonance imaging.] All patients should be assessed for cardiovascular risk using American College of Cardiology and American Heart Association guidelines. A myocardial infarction within the past 3 months is a contraindication to thoracic surgery because 20% of patients will die of reinfarction. 
An infarction in the past 6 months is a relative contraindication. Other major contraindications include uncontrolled arrhythmias, an FEV1 of less than 1 L, CO2 retention (resting Pco2 >45 mmHg), a DLco <40%, and severe pulmonary hypertension. The overall treatment approach to patients with NSCLC is shown in Fig. 107-3. Patients with severe atypia on sputum cytology have an increased risk of developing lung cancer compared to those without atypia. In the uncommon circumstance in which malignant cells are identified in a sputum or bronchial washing specimen but the chest imaging appears normal (TX tumor stage), the lesion must be localized. More than 90% of tumors can be localized by meticulous examination of the bronchial tree with a fiberoptic bronchoscope under general anesthesia and collection of a series of differential brushings and biopsies. Surgical resection following bronchoscopic localization has been shown to improve survival compared to no treatment. Close follow-up of these patients is indicated because of the high incidence of second primary lung cancers (5% per patient per year). A solitary pulmonary nodule is defined as an x-ray density completely surrounded by normal aerated lung, with circumscribed margins, of any shape, and usually 1–6 cm in greatest diameter. The approach to a patient with a solitary pulmonary nodule is based on an estimate of the probability of cancer, determined according to the patient's smoking history, age, and characteristics on imaging (Table 107-9). Prior CXRs and CT scans should be obtained, if available, for comparison. A PET scan may be useful if the lesion is greater than 7–8 mm in diameter. If no diagnosis is apparent, the probability of malignancy can be estimated: Mayo investigators reported that three clinical characteristics (age, cigarette smoking status, and prior cancer diagnosis) and three radiologic characteristics (nodule diameter, spiculation, and upper lobe location) were independent predictors of malignancy. 
At present, only two radiographic criteria are thought to predict the benign nature of a solitary pulmonary nodule: lack of growth over a period >2 years and certain characteristic patterns of calcification. Calcification alone, however, does not exclude malignancy; a dense central nidus, multiple punctate foci, and "bull's eye" (granuloma) and "popcorn ball" (hamartoma) calcifications are the patterns highly suggestive of a benign lesion. In contrast, a relatively large lesion, lack of or asymmetric calcification, chest symptoms, associated atelectasis, pneumonitis, growth of the lesion revealed by comparison with an old x-ray or CT scan, or a positive PET scan may be suggestive of a malignant process and warrant further attempts to establish a histologic diagnosis. An algorithm for assessing these lesions is shown in Fig. 107-6. Since the advent of screening CT, small "ground-glass" opacities (GGOs) have often been observed, particularly as the increased sensitivity of CT enables detection of smaller lesions. Many of these GGOs, when biopsied, are found to be atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), or minimally invasive adenocarcinoma (MIA). AAH is usually a nodule of <5 mm and is minimally hazy, also called nonsolid or ground glass (i.e., hazy, slightly increased attenuation with no solid component and preservation of bronchial and vascular margins).
On thin-section CT, AIS is usually a nonsolid nodule and tends to be slightly more opaque than AAH. MIA is mainly solid, usually with a small (<5 mm) central solid component.

TNM definitions (seventh edition):
T1: Tumor ≤3 cm in diameter, surrounded by lung or visceral pleura, without invasion more proximal than the lobar bronchus
  T1a: Tumor ≤2 cm in diameter
  T1b: Tumor >2 cm but ≤3 cm in diameter
T2: Tumor >3 cm but ≤7 cm, or tumor with any of the following features: involves the main bronchus ≥2 cm distal to the carina; invades the visceral pleura; associated with atelectasis or obstructive pneumonitis that extends to the hilar region but does not involve the entire lung
  T2a: Tumor >3 cm but ≤5 cm
  T2b: Tumor >5 cm but ≤7 cm
T3: Tumor >7 cm, or any of the following: directly invades the chest wall, diaphragm, phrenic nerve, mediastinal pleura, parietal pericardium, or main bronchus <2 cm from the carina (without involvement of the carina); atelectasis or obstructive pneumonitis of the entire lung; separate tumor nodules in the same lobe
T4: Tumor of any size that invades the mediastinum, heart, great vessels, trachea, recurrent laryngeal nerve, esophagus, vertebral body, or carina, or with separate tumor nodules in a different ipsilateral lobe
M0: No distant metastasis
M1: Distant metastasis
  M1a: Separate tumor nodule(s) in a contralateral lobe; tumor with pleural nodules or malignant pleural or pericardial effusion
  M1b: Distant metastasis (in extrathoracic organs)

Stage groupings (partial):
Stage IIA: T1a, T1b, or T2a / N1 / M0
Stage IIIA: T1a, T1b, T2a, or T2b / N2 / M0; T3 / N1 or N2 / M0; T4 / N0 or N1 / M0
Stage IIIB: T4 / N2 / M0; any T / N3 / M0
Stage IV: Any T, any N, M1a or M1b

Abbreviation: TNM, tumor-node-metastasis. Source: Reproduced with permission from P Goldstraw et al: J Thorac Oncol 2:706, 2007.
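The partial stage groupings above amount to a small lookup table; a minimal sketch, covering only the stage IIA–IV combinations listed here (the full seventh-edition table also defines stages IA, IB, and IIB, which this sketch deliberately omits):

```python
# Partial TNM stage-grouping lookup, limited to the stage IIA-IV
# combinations listed in the text; the full seventh-edition table
# also defines stages IA, IB, and IIB.
ANY_T = {"T1a", "T1b", "T2a", "T2b", "T3", "T4"}
ANY_N = {"N0", "N1", "N2", "N3"}

STAGE_GROUPS = [
    # (allowed T values, allowed N values, allowed M values, stage)
    ({"T1a", "T1b", "T2a"},        {"N1"},       {"M0"},         "IIA"),
    ({"T1a", "T1b", "T2a", "T2b"}, {"N2"},       {"M0"},         "IIIA"),
    ({"T3"},                       {"N1", "N2"}, {"M0"},         "IIIA"),
    ({"T4"},                       {"N0", "N1"}, {"M0"},         "IIIA"),
    ({"T4"},                       {"N2"},       {"M0"},         "IIIB"),
    (ANY_T,                        {"N3"},       {"M0"},         "IIIB"),
    (ANY_T,                        ANY_N,        {"M1a", "M1b"}, "IV"),
]

def stage_group(t, n, m):
    """Return the stage for a (T, N, M) combination, or None when the
    combination falls outside the partial groupings listed above."""
    for ts, ns, ms, stage in STAGE_GROUPS:
        if t in ts and n in ns and m in ms:
            return stage
    return None

print(stage_group("T3", "N1", "M0"))   # IIIA
print(stage_group("T2a", "N1", "M0"))  # IIA
```

Any M1a/M1b combination maps to stage IV regardless of T and N, which is why that row sits last and uses the catch-all sets.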
However, overlap exists among the imaging features of the preinvasive and minimally invasive lesions in the lung adenocarcinoma spectrum. Lepidic adenocarcinomas are usually solid but may be nonsolid. Likewise, the small invasive adenocarcinomas also are usually solid but may exhibit a small nonsolid component. MANAGEMENT OF STAGES I AND II NSCLC Surgical Resection of Stage I and II NSCLC Surgical resection, ideally by an experienced thoracic surgeon, is the treatment of choice for patients with clinical stage I and II NSCLC who are able to tolerate the procedure. Operative mortality rates for patients resected by thoracic or cardiothoracic surgeons are lower than for those resected by general surgeons. Moreover, survival rates are higher in patients who undergo resection in facilities with a high surgical volume compared to facilities performing fewer than 70 procedures per year, even though the higher-volume facilities often serve older and less socioeconomically advantaged populations. The improvement in survival is most evident in the immediate postoperative period. The extent of resection is a matter of surgical judgment based on findings at exploration. In patients with stage IA NSCLC, lobectomy is superior to wedge resection with respect to rates of local recurrence. There is also a trend toward improvement in overall survival. In patients with comorbidities, compromised pulmonary reserve, and small peripheral lesions, a limited resection (wedge resection or segmentectomy, potentially by video-assisted thoracoscopic surgery) may be a reasonable surgical option. Pneumonectomy is reserved for patients with central tumors and should be performed only in patients with excellent pulmonary reserve. [Table 107-9 source: Reproduced with permission from D Ost et al: N Engl J Med 348:2535, 2003.] [FIGURE 107-6 Algorithm for evaluation of a solitary pulmonary nodule (SPN): A, initial evaluation on CXR; B, solid SPN; C, semisolid SPN. CT, computed tomography; CXR, chest radiograph; GGN, ground-glass nodule; PET, positron emission tomography; TBBx, transbronchial biopsy; TTNA, transthoracic needle biopsy. (Adapted from VK Patel et al: Chest 143:840, 2013.)] The 5-year survival rates are 60–80% for patients with stage I NSCLC and 40–50% for patients with stage II NSCLC. Accurate pathologic staging requires adequate segmental, hilar, and mediastinal lymph node sampling. Ideally, this includes a mediastinal lymph node dissection. On the right side, mediastinal stations 2R, 4R, 7, 8R, and 9R should be dissected; on the left side, stations 5, 6, 7, 8L, and 9L should be dissected. Hilar lymph nodes are typically resected and sent for pathologic review, although it is helpful to specifically dissect and label level 10 lymph nodes when possible. On the left side, level 2 and sometimes level 4 lymph nodes are generally obscured by the aorta. Although the therapeutic benefit of nodal dissection versus nodal sampling is controversial, a pooled analysis of three trials involving patients with stages I to IIIA NSCLC demonstrated superior 4-year survival in patients undergoing resection with complete mediastinal lymph node dissection compared with lymph node sampling.
Moreover, complete mediastinal lymphadenectomy added little morbidity to a pulmonary resection for lung cancer when carried out by an experienced thoracic surgeon. [Figure 107-6 notes: *Fleischner Society guidelines; modified from H MacMahon et al: Radiology 237:395, 2005. **ACCP guidelines; see MK Gould et al: Chest 132(suppl 3):108S, 2007. ***Consider patient preference, severity of medical comorbidities, and center-specific expertise prior to tissue diagnosis.] Radiation Therapy in Stages I and II NSCLC There is currently no role for postoperative radiation therapy in patients following resection of stage I or II NSCLC. However, patients with stage I or II disease who either refuse or are not suitable candidates for surgery should be considered for radiation therapy with curative intent. Stereotactic body radiation therapy (SBRT) is a relatively new technique used to treat patients with isolated pulmonary nodules (≤5 cm) who are not candidates for, or refuse, surgical resection. Treatment is typically administered in three to five fractions delivered over 1–2 weeks. In uncontrolled studies, disease control rates are >90%, and 5-year survival rates of up to 60% have been reported with SBRT. By comparison, survival rates typically range from 13 to 39% in patients with stage I or II NSCLC treated with standard external-beam radiotherapy. Cryoablation is another technique occasionally used to treat small, isolated tumors (i.e., ≤3 cm); however, very few data exist on long-term outcomes with this technique.
Chemotherapy in Stages I and II NSCLC Although a landmark meta-analysis of cisplatin-based adjuvant chemotherapy trials in patients with resected stages I to IIIA NSCLC (the Lung Adjuvant Cisplatin Evaluation [LACE] Study) demonstrated a 5.4% improvement in 5-year survival for adjuvant chemotherapy compared to surgery alone, the survival benefit was seemingly confined to patients with stage II or III disease (Table 107-10). By contrast, survival was actually worsened in stage IA patients with the application of adjuvant therapy. In stage IB, there was a modest improvement in survival of questionable clinical significance. [Table 107-10 abbreviations: ALPI, Adjuvant Lung Cancer Project Italy; ANITA, Adjuvant Navelbine International Trialist Association; BLT, Big Lung Trial; CALGB, Cancer and Leukemia Group B; IALT, International Adjuvant Lung Cancer Trial; MVP, mitomycin, vindesine, and cisplatin.] Adjuvant chemotherapy was also detrimental in patients with poor performance status (Eastern Cooperative Oncology Group [ECOG] performance status = 2). These data suggest that adjuvant chemotherapy is best applied in patients with resected stage II or III NSCLC. There is no apparent role for adjuvant chemotherapy in patients with resected stage IA or IB NSCLC. A possible exception to the prohibition of adjuvant therapy in this setting is the stage IB patient with a resected lesion ≥4 cm. As with any treatment recommendation, the risks and benefits of adjuvant chemotherapy should be considered on an individual patient basis. If a decision is made to proceed with adjuvant chemotherapy, in general, treatment should be initiated 6–12 weeks after surgery, assuming the patient has fully recovered, and should be administered for no more than four cycles.
Although cisplatin-based chemotherapy is the preferred treatment regimen, carboplatin can be substituted for cisplatin in patients who are unlikely to tolerate cisplatin for reasons such as reduced renal function, presence of neuropathy, or hearing impairment. No specific chemotherapy regimen is considered optimal in this setting, although platinum plus vinorelbine is most commonly used. Neoadjuvant chemotherapy, the application of chemotherapy before an attempted surgical resection, has been advocated by some experts on the assumption that such an approach will more effectively extinguish occult micrometastases compared to postoperative chemotherapy. In addition, it is thought that preoperative chemotherapy might render an inoperable lesion resectable. With the exception of superior sulcus tumors, however, the role of neoadjuvant chemotherapy in stage I to III disease is not well defined. Nevertheless, a meta-analysis of 15 randomized controlled trials involving more than 2300 patients with stage I to III NSCLC suggested there may be a modest 5-year survival benefit (i.e., ∼5%) that is virtually identical to the survival benefit achieved with postoperative chemotherapy. Accordingly, neoadjuvant therapy may prove useful in selected cases (see below). A decision to use neoadjuvant chemotherapy should always be made in consultation with an experienced surgeon. It should be noted that all patients with resected NSCLC are at high risk of recurrence (most recurrences occur within 18–24 months of surgery) or of developing a second primary lung cancer. Thus, it is reasonable to follow these patients with periodic imaging studies. Given the results of the NLST, periodic CT scans appear to be the most appropriate screening modality. Based on the timing of most recurrences, some guidelines recommend a contrast-enhanced chest CT scan every 6 months for the first 3 years after surgery, followed by yearly CT scans of the chest without contrast thereafter.
Management of patients with stage III NSCLC usually requires a combined-modality approach. Patients with stage IIIA disease commonly are stratified into those with "nonbulky" or "bulky" mediastinal lymph node (N2) disease. Although the definition of "bulky" N2 disease varies somewhat in the literature, the usual criteria include the size of a dominant lymph node (i.e., >2–3 cm in short-axis diameter as measured by CT), groupings of multiple smaller lymph nodes, evidence of extracapsular nodal involvement, or involvement of more than two lymph node stations. The distinction between nonbulky and bulky stage IIIA disease is used mainly to select potential candidates for upfront surgical resection or for resection after neoadjuvant therapy. Many aspects of therapy of patients with stage III NSCLC remain controversial, and the optimal treatment strategy has not been clearly defined. Moreover, although there are many potential treatment options, none yields a very high probability of cure. Furthermore, because stage III disease is highly heterogeneous, no single treatment approach can be recommended for all patients. Key factors guiding treatment choices include the particular combination of tumor (T) and nodal (N) disease, the ability to achieve a complete surgical resection if indicated, and the patient's overall physical condition and preferences. For example, in carefully selected patients with limited stage IIIA disease in whom involved mediastinal lymph nodes can be completely resected, initial surgery followed by postoperative chemotherapy (with or without radiation therapy) may be indicated. By contrast, for patients with clinically evident bulky mediastinal lymph node involvement, the standard approach to treatment is concurrent chemoradiotherapy. Nevertheless, in some cases, the latter group of patients may be candidates for surgery following chemoradiotherapy.
Absent and Nonbulky Mediastinal (N2, N3) Lymph Node Disease For the subset of stage IIIA patients initially thought to have clinical stage I or II disease (i.e., pathologic involvement of mediastinal [N2] lymph nodes is not detected preoperatively), surgical resection is often the treatment of choice. This is followed by adjuvant chemotherapy in patients with microscopic lymph node involvement in the resection specimen. Postoperative radiation therapy (PORT) may also have a role for those with close or positive surgical margins. Patients with tumors involving the chest wall or proximal airways within 2 cm of the carina with hilar lymph node involvement (but not N2 disease) are classified as having T3N1 stage IIIA disease. They, too, are best managed with surgical resection, if technically feasible, followed by adjuvant chemotherapy if completely resected. Patients with tumors exceeding 7 cm in size also are now classified as T3 and are considered stage IIIA if the tumor has spread to N1 nodes. The appropriate initial management of these patients involves surgical resection when feasible, provided the mediastinal staging is negative, followed by adjuvant chemotherapy for those who achieve complete tumor resection. Patients with T3N0 or T3N1 disease due to the presence of satellite nodules within the same lobe as the primary tumor also are candidates for surgery, as are patients with ipsilateral nodules in another lobe and negative mediastinal nodes (stage IIIA, T4N0 or T4N1). Although data regarding adjuvant chemotherapy in the latter subsets of patients are limited, it is often recommended. Patients with T4N0-1 disease were reclassified as having stage IIIA tumors in the seventh edition of the TNM system. These patients may have involvement of the carina, superior vena cava, or a vertebral body and yet still be candidates for surgical resection in selected circumstances.
The decision to proceed with an attempted resection must be made in consultation with an experienced thoracic surgeon, often in association with a vascular or cardiac surgeon and an orthopedic surgeon, depending on tumor location. However, if an incomplete resection is inevitable or if there is evidence of N2 involvement (stage IIIB), surgery for T4 disease is contraindicated. Most T4 lesions are best treated with chemoradiotherapy. The role of PORT in patients with completely resected stage III NSCLC is controversial. To a large extent, the use of PORT is dictated by the presence or absence of N2 involvement and, to a lesser degree, by the biases of the treating physician. Using the Surveillance, Epidemiology, and End Results (SEER) database, a recent meta-analysis of PORT identified a significant increase in survival in patients with N2 disease but not in those with N0 or N1 disease. An earlier analysis by the PORT Meta-analysis Trialist Group using an older database produced similar results. Known Mediastinal (N2, N3) Lymph Node Disease When pathologic involvement of mediastinal lymph nodes is documented preoperatively, a combined-modality approach is recommended, assuming the patient is a candidate for treatment with curative intent. These patients are at high risk for both local and distant recurrence if managed with resection alone. For patients with stage III disease who are not candidates for initial surgical resection, concurrent chemoradiotherapy is most commonly used as the initial treatment. Concurrent chemoradiotherapy has been shown to produce superior survival compared to sequential chemoradiotherapy; however, it also is associated with greater host toxicities (including fatigue, esophagitis, and neutropenia). Therefore, for patients with a good performance status, concurrent chemoradiotherapy is the preferred treatment approach, whereas sequential chemoradiotherapy may be more appropriate for patients with a poorer performance status.
For patients who are not candidates for a combined-modality treatment approach, typically because of a poor performance status or a comorbidity that makes chemotherapy untenable, radiotherapy alone may provide a modest survival benefit in addition to symptom palliation. For patients with potentially resectable N2 disease, it remains uncertain whether surgery after neoadjuvant chemoradiotherapy improves survival. In an NCI-sponsored Intergroup randomized trial comparing concurrent chemoradiotherapy alone to concurrent chemoradiotherapy followed by attempted surgical resection, no survival benefit was observed in the trimodality arm compared to the bimodality therapy. In fact, patients subjected to a pneumonectomy had a worse survival outcome. By contrast, those treated with a lobectomy appeared to have a survival advantage based on a retrospective subset analysis. Thus, in carefully selected, otherwise healthy patients with nonbulky mediastinal lymph node involvement, surgery may be a reasonable option if the primary tumor can be fully resected with a lobectomy. This is not the case if a pneumonectomy is required to achieve complete resection. Superior Sulcus Tumors (Pancoast Tumors) Superior sulcus tumors represent a distinctive subset of stage III disease. These tumors arise in the apex of the lung and may invade the second and third ribs, the brachial plexus, the subclavian vessels, the stellate ganglion, and adjacent vertebral bodies. They also may be associated with Pancoast syndrome, characterized by pain that may arise in the shoulder or chest wall or radiate to the neck; pain characteristically radiates to the ulnar surface of the hand. Horner's syndrome (enophthalmos, ptosis, miosis, and anhidrosis) due to invasion of the paravertebral sympathetic chain may be present as well. Patients with these tumors should undergo the same staging procedures as all patients with stage II and III NSCLC.
Neoadjuvant chemotherapy or combined chemoradiotherapy followed by surgery is reserved for those without N2 involvement. This approach yields excellent survival outcomes (>50% 5-year survival in patients with an R0 resection). Patients with N2 disease are less likely to benefit from surgery and can be managed with chemoradiotherapy alone. Patients presenting with metastatic disease can be treated with radiation therapy (with or without chemotherapy) for symptom palliation. Approximately 40% of NSCLC patients present with advanced, stage IV disease at the time of diagnosis. These patients have a poor median survival (4–6 months) and a 1-year survival of 10% when managed with best supportive care alone. In addition, a significant number of patients who first present with early-stage NSCLC will eventually relapse with distant disease. Patients who have recurrent disease have a better prognosis than those presenting with metastatic disease at the time of diagnosis. Standard medical management, the judicious use of pain medications, and the appropriate use of radiotherapy and chemotherapy form the cornerstone of management. Chemotherapy palliates symptoms, improves the quality of life, and improves survival in patients with stage IV NSCLC, particularly in patients with good performance status. In addition, economic analysis has found chemotherapy to be cost-effective palliation for stage IV NSCLC. However, the use of chemotherapy for NSCLC requires clinical experience and careful judgment to balance potential benefits and toxicities. Of note, the early application of palliative care in conjunction with chemotherapy is associated with improved survival and a better quality of life. First-Line Chemotherapy for Metastatic or Recurrent NSCLC A landmark meta-analysis published in 1995 provided the earliest meaningful indication that chemotherapy could provide a survival benefit in metastatic NSCLC, as opposed to supportive care alone.
However, the survival benefit was seemingly confined to cisplatin-based chemotherapy regimens (hazard ratio 0.73; 27% reduction in the risk of death; 10% improvement in survival at 1 year). These data launched two decades of clinical research aimed at identifying the optimal chemotherapy regimen for advanced NSCLC. For the most part, however, these efforts proved unsuccessful because the overwhelming majority of randomized trials showed no major survival improvement with any one regimen versus another (Table 107-11). On the other hand, differences in progression-free survival, cost, side effects, and schedule were frequently observed. These first-line studies were later extended to elderly patients, in whom doublet chemotherapy was found to improve overall survival compared to single agents in the "fit" elderly (e.g., elderly patients with no major comorbidities) and in patients with an ECOG performance status of 2. An ongoing debate in the treatment of patients with advanced NSCLC concerns the appropriate duration of platinum-based chemotherapy. Several large phase III randomized trials have failed to show a meaningful benefit for increasing the duration of platinum-based doublet chemotherapy beyond four to six cycles. In fact, longer duration of chemotherapy has been associated with increased toxicities and impaired quality of life. Therefore, prolonged front-line therapy (beyond four to six cycles) with platinum-based regimens is not recommended. Maintenance therapy following initial platinum-based therapy is discussed below. Although specific tumor histology was once considered irrelevant to treatment choice in NSCLC, with the recent recognition that selected chemotherapy agents perform quite differently in squamous carcinomas versus adenocarcinomas, accurate determination of histology has become essential.
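To see how a hazard ratio of 0.73 translates into roughly a 10% absolute gain in 1-year survival, note that under a proportional-hazards assumption the survival curve is rescaled as S_treated(t) = S_control(t)^HR. A short worked sketch; the ~20% baseline 1-year survival used here is an illustrative assumption, not a figure from the meta-analysis:

```python
# Under a proportional-hazards assumption, a hazard ratio (HR) rescales the
# survival curve as S_treated(t) = S_control(t) ** HR.  HR = 0.73 is the value
# reported in the 1995 meta-analysis; the 20% baseline 1-year survival is an
# illustrative assumption, not a figure from that analysis.
def treated_survival(s_control: float, hazard_ratio: float) -> float:
    """Survival probability on treatment implied by control survival and HR."""
    return s_control ** hazard_ratio

baseline = 0.20  # assumed 1-year survival with supportive care alone
treated = treated_survival(baseline, 0.73)
print(f"Implied 1-year survival with cisplatin-based chemotherapy: {treated:.0%}")
# About 31%, i.e. roughly a 10-percentage-point improvement, consistent with
# the benefit quoted in the text.
```

Because the exponent is less than 1, the same hazard ratio produces a larger absolute survival gain where baseline survival is intermediate than where it is very low or very high.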
Specifically, in a landmark randomized phase III trial, patients with nonsquamous NSCLC were found to have improved survival when treated with cisplatin and pemetrexed compared to cisplatin and gemcitabine. By contrast, patients with squamous carcinoma had improved survival when treated with cisplatin and gemcitabine. This survival difference is thought to be related to the differential expression of thymidylate synthase (TS), one of the targets of pemetrexed, between tumor types. [Table 107-11 (selected trials of first-line chemotherapy regimens, including cisplatin or carboplatin paired with paclitaxel, gemcitabine, docetaxel, vinorelbine, irinotecan, or pemetrexed, and gefitinib) note a: enrolled selected patients 18 years of age or older who had histologically or cytologically confirmed stage IIIB or IV non-small-cell lung cancer with histologic features of adenocarcinoma (including bronchioloalveolar carcinoma), were nonsmokers (defined as patients who had smoked <100 cigarettes in their lifetime) or former light smokers (those who had stopped smoking at least 15 years previously and had a total of ≤10 pack-years of smoking), and had had no previous chemotherapy or biologic or immunologic therapy. Abbreviations: ECOG, Eastern Cooperative Oncology Group; EORTC, European Organization for Research and Treatment of Cancer; ILCP, Italian Lung Cancer Project; SWOG, Southwest Oncology Group; FACS, Follow-up After Colorectal Surgery; iPASS, Iressa Pan-Asian Study.] Squamous cancers have much higher expression of TS than adenocarcinomas, accounting for their lower responsiveness to pemetrexed. By contrast, the activity of gemcitabine is not affected by the level of TS. Bevacizumab, a monoclonal antibody against VEGF, has been shown to improve response rate, progression-free survival, and overall survival in patients with advanced disease when combined with chemotherapy (see below). However, bevacizumab cannot be given to patients with squamous cell histology NSCLC because of their tendency to experience serious hemorrhagic effects. Agents That Inhibit Angiogenesis Bevacizumab, a monoclonal antibody directed against VEGF, was the first antiangiogenic agent approved for the treatment of patients with advanced NSCLC in the United States. This drug acts primarily by blocking the growth of new blood vessels, which are required for tumor viability. Two randomized phase III trials of chemotherapy with or without bevacizumab had conflicting results. The first trial, conducted in North America, compared carboplatin plus paclitaxel with or without bevacizumab in patients with recurrent or advanced nonsquamous NSCLC and reported a significant improvement in response rate, progression-free survival, and overall survival in patients treated with chemotherapy plus bevacizumab versus chemotherapy alone. Bevacizumab-treated patients had a significantly higher incidence of toxicities.
The second trial, conducted in Europe, compared cisplatin/gemcitabine with or without bevacizumab in patients with recurrent or advanced nonsquamous NSCLC and reported a significant improvement in progression-free survival but no improvement in overall survival for bevacizumab-treated patients. A randomized phase III trial compared carboplatin/pemetrexed and bevacizumab to carboplatin/paclitaxel and bevacizumab as first-line therapy in patients with recurrent or advanced nonsquamous NSCLC and reported no significant difference in progression-free survival or overall survival between treatment groups. Therefore, currently carboplatin/paclitaxel and bevacizumab or carboplatin/pemetrexed and bevacizumab are appropriate regimens for first-line treatment for stage IV nonsquamous NSCLC patients. Of note, there are many small-molecule inhibitors of VEGFR; however, these VEGFR TKIs have not proven to be effective in the treatment of NSCLC. Maintenance Therapy for Metastatic NSCLC Maintenance chemotherapy in nonprogressing patients (patients with a complete response, partial response, or stable disease) is a controversial topic in the treatment of NSCLC. Conceptually, there are two types of maintenance strategies: (1) switch maintenance therapy, where patients receive four to six cycles of platinum-based chemotherapy and are switched to an entirely different regimen; and (2) continuation maintenance therapy, where patients receive four to six cycles of platinum-based chemotherapy and then the platinum agent is discontinued but the agent it is paired with is continued (Table 107-12). Two studies investigated switch maintenance single-agent chemotherapy with docetaxel or pemetrexed in nonprogressing patients following treatment with first-line platinum-based chemotherapy. Both trials randomized patients to immediate single-agent therapy versus observation and reported improvements in progression-free and overall survival. 
In both trials, a significant proportion of patients in the observation arm never received the agent under investigation upon disease progression: 37% of patients never received docetaxel in the docetaxel study, and 81% never received pemetrexed in the pemetrexed study. In the trial of maintenance docetaxel versus observation, survival in the subset of patients who received docetaxel upon progression was identical to that of the immediate-treatment group, indicating that docetaxel is an active agent in NSCLC. Comparable data are not available for the pemetrexed study. Two additional trials evaluated switch maintenance therapy with erlotinib after platinum-based chemotherapy in patients with advanced NSCLC and reported an improvement in progression-free survival and overall survival in the erlotinib treatment group.

Table 107-12 (maintenance therapy in NSCLC)
Trial       Arm                   No. of Patients   OS (months)   PFS (months)
Fidias      Immediate docetaxel   153               12.3          5.7
            Delayed docetaxel     156                9.7          2.7
Ciuleanu    Pemetrexed            444               13.4          4.3
            BSC                   222               10.6          2.6
PARAMOUNT   Pemetrexed            472               13.9          4.1
            BSC                   297               11.0          2.8
ATLAS       Bev + erlotinib       384               15.9          4.8
            Bev + placebo         384               13.9          3.8
SATURN      Erlotinib             437               12.3          2.9
            Placebo               447               11.1          2.6
ECOG 4599   Bev 15 mg/kg          444               12.3          6.2
            BSC                   434               10.3          4.5
AVAiL       Bev 15 mg/kg          351               13.4          6.5
            Bev 7.5 mg/kg         345               13.6          6.7
            Placebo               347               13.1          6.1
Abbreviations: Bev, bevacizumab; BSC, best supportive care; CT, chemotherapy; OS, overall survival; PFS, progression-free survival.

Currently, both maintenance pemetrexed and maintenance erlotinib following platinum-based chemotherapy in patients with advanced NSCLC are approved by the U.S. FDA. However, maintenance therapy is not without toxicity and, at this time, should be considered on an individual patient basis. Targeted Therapies for Select Molecular Cohorts of NSCLC As the efficacy of traditional cytotoxic chemotherapeutic agents plateaued in NSCLC, there was a critical need to define novel therapeutic strategies.
These novel strategies have largely been based on the identification of somatic driver mutations within the tumor. These driver mutations occur in genes encoding signaling proteins that, when aberrant, drive initiation and maintenance of tumor cells. Importantly, driver mutations can serve as Achilles' heels for tumors if their gene products can be targeted therapeutically with small-molecule inhibitors. For example, EGFR mutations have been detected in 10–15% of North American patients diagnosed with NSCLC. EGFR mutations are associated with younger age, light smoking history (<10 pack-years) or never-smoking status, and adenocarcinoma histology. Approximately 90% of these mutations are exon 19 deletions or exon 21 L858R point mutations within the EGFR TK domain, resulting in hyperactivation of both EGFR kinase activity and downstream signaling. Lung tumors that harbor activating mutations within the EGFR kinase domain display high sensitivity to small-molecule EGFR TKIs. Erlotinib and afatinib are FDA-approved oral small-molecule TKIs that inhibit EGFR. Outside the United States, gefitinib also is available. Several large, international, phase III studies have demonstrated improved response rates, progression-free survival, and overall survival in patients with EGFR mutation–positive NSCLC treated with an EGFR TKI as compared with standard first-line chemotherapy regimens (Table 107-13). Although response rates with EGFR TKI therapy are clearly superior in patients with lung tumors harboring activating EGFR kinase domain mutations, the EGFR TKI erlotinib is also FDA approved for second- and third-line therapy in patients with advanced NSCLC irrespective of tumor genotype. The reason for this apparent discrepancy is that erlotinib was initially evaluated in lung cancer before the discovery of EGFR activating mutations. In fact, EGFR mutations were initially identified in lung cancer by studying the tumors of patients who had dramatic responses to this agent.
With the rapid pace of scientific discovery, additional driver mutations in lung cancer have been identified and targeted therapeutically with impressive clinical results. For example, chromosomal rearrangements involving the anaplastic lymphoma kinase (ALK) gene on chromosome 2 have been found in ∼3–7% of NSCLCs. The result of these ALK rearrangements is hyperactivation of the ALK TK domain.

[Table 107-13 columns: Study, Therapy, No. of Patients, ORR (%), PFS (months). Abbreviations: CbP, carboplatin and paclitaxel; CD, cisplatin and docetaxel; CG, cisplatin and gemcitabine; CP, cisplatin and paclitaxel; ORR, overall response rate; PFS, progression-free survival.]

Similar to EGFR, ALK rearrangements are typically (but not exclusively) associated with younger age, light smoking history (<10 pack-years) or never smoking, and adenocarcinoma histology. Remarkably, ALK rearrangements were initially described in lung cancer in 2007, and by 2011, the first ALK inhibitor, crizotinib, had received FDA approval for patients with lung tumors harboring ALK rearrangements. In addition to EGFR and ALK, other driver mutations have been discovered with varying frequencies in NSCLC, including KRAS, BRAF, PIK3CA, NRAS, AKT1, MET, MEK1 (MAP2K1), ROS1, and RET. Mutations within the KRAS GTPase are found in approximately 20% of lung adenocarcinomas. To date, however, no small-molecule inhibitors are available to specifically target mutant KRAS. Each of the other driver mutations occurs in no more than 1–3% of lung adenocarcinomas. The great majority of the driver mutations are mutually exclusive, and clinical studies of their specific inhibitors are ongoing. For example, the BRAF inhibitor vemurafenib and the RET inhibitor cabozantinib have already demonstrated efficacy in patients with lung cancer harboring BRAF mutations or RET gene fusions, respectively. Most of these mutations are present in adenocarcinoma; however, mutations that may be linked to future targeted therapies in squamous cell carcinomas are emerging.
In addition, there are active research efforts aimed at defining novel targetable mutations in lung cancer as well as defining mechanisms of acquired resistance to small-molecule inhibitors used in the treatment of patients with NSCLC. As first-line chemotherapy regimens improve, a substantial number of patients will maintain a good performance status and a desire for further therapy when they develop recurrent disease. Currently, several agents are FDA approved for second-line use in NSCLC, including docetaxel, pemetrexed, erlotinib (approved for second-line therapy regardless of tumor genotype), and crizotinib (for patients with ALK-rearranged lung cancer only). Most of the survival benefit for any of these agents is realized in patients who maintain a good performance status. Immunotherapy For more than 30 years, the investigation of vaccines and immunotherapies in lung cancer has yielded little in the way of meaningful benefit. Recently, however, this perception has changed based on preliminary results of studies using monoclonal antibodies that activate antitumor immunity through blockade of immune checkpoints. For example, ipilimumab, a monoclonal antibody directed at cytotoxic T lymphocyte antigen-4 (CTLA-4), was studied in combination with paclitaxel plus carboplatin in patients with both SCLC and NSCLC. There appeared to be a small but not statistically significant advantage to the combination when ipilimumab was instituted after several cycles of chemotherapy. A randomized phase III trial in SCLC is under way to validate these data. Antibodies to the T-cell programmed cell death 1 receptor (PD-1), nivolumab and pembrolizumab, have been shown to produce responses in lung cancer, renal cell cancer, and melanoma. Many of these responses have had very long duration (i.e., >1 year).
Monoclonal antibodies to the PD-1 ligand (anti-PD-L1), which may be expressed on the tumor cell, have also been shown to produce responses in patients with melanoma and lung cancer. Preliminary studies in melanoma suggest that the combination of ipilimumab and nivolumab could produce higher response rates compared to either agent alone. A similar strategy is being investigated in SCLC patients. Further evaluation of these agents in both NSCLC and SCLC is ongoing in combination with already approved chemotherapy and targeted agents. Supportive Care No discussion of the treatment strategies for patients with advanced lung cancer would be complete without a mention of supportive care. Coincident with advances in chemotherapy and targeted therapy was a pivotal study that demonstrated that the early integration of palliative care with standard treatment strategies improved both quality of life and mood for patients with advanced lung cancer. Aggressive pain and symptom control is an important component of optimal treatment for these patients. SCLC is a highly aggressive disease characterized by its rapid doubling time, high growth fraction, early development of disseminated disease, and dramatic response to first-line chemotherapy and radiation. In general, surgical resection is not routinely recommended for patients because even patients with LD-SCLC still have occult micrometastases. However, the most recent American College of Chest Physicians Evidence-Based Clinical Practice Guidelines recommend surgical resection over nonsurgical treatment in SCLC patients with clinical stage I disease after a thorough evaluation for distant metastases and invasive mediastinal stage evaluation (grade 2C). After resection, these patients should receive platinum-based adjuvant chemotherapy (grade 1C). If the histologic diagnosis of SCLC is made on review of a resected surgical specimen, such patients should receive standard SCLC chemotherapy as well. Chemotherapy significantly prolongs survival in patients with SCLC. Four to six cycles of platinum-based chemotherapy with either cisplatin or carboplatin plus either etoposide or irinotecan have been the mainstay of treatment for nearly three decades and are recommended over other chemotherapy regimens irrespective of initial stage. Cyclophosphamide, doxorubicin (Adriamycin), and vincristine (CAV) may be an alternative for patients who are unable to tolerate a platinum-based regimen. Despite response rates to first-line therapy as high as 80%, the median survival ranges from 12 to 20 months for patients with LD and from 7 to 11 months for patients with ED. Regardless of disease extent, the majority of patients relapse and develop chemotherapy-resistant disease. Only 6–12% of patients with LD-SCLC and 2% of patients with ED-SCLC live beyond 5 years. The prognosis is especially poor for patients who relapse within the first 3 months of therapy; these patients are said to have chemotherapy-resistant disease. Patients are said to have sensitive disease if they relapse more than 3 months after their initial therapy and are thought to have a somewhat better overall survival. These patients also are thought to have the greatest potential benefit from second-line chemotherapy (Fig. 107-7). Topotecan is the only FDA-approved agent for second-line therapy in patients with SCLC. It has only modest activity and can be given either intravenously or orally.

FIGURE 107-7 Management of recurrent small-cell lung cancer (SCLC). CAV, cyclophosphamide, doxorubicin, and vincristine. (Adapted with permission from JP van Meerbeeck et al: Lancet 378:1741, 2011.)

In one randomized trial, 141 patients who were not considered candidates for further IV chemotherapy were randomized to receive either oral topotecan or best supportive care.
Although the response rate to oral topotecan was only 7%, overall survival was significantly better in patients receiving chemotherapy (median survival time, 26 weeks vs 14 weeks; p = .01). Moreover, patients given topotecan had a slower decline in quality of life than did those not receiving chemotherapy. Other agents with similar low levels of activity in the second-line setting include irinotecan, paclitaxel, docetaxel, vinorelbine, oral etoposide, and gemcitabine. Clearly, novel treatments for this all too common disease are desperately needed. Thoracic radiation therapy (TRT) is a standard component of induction therapy for good performance status and limited-stage SCLC patients. Meta-analyses indicate that chemotherapy combined with chest irradiation improves 3-year survival by approximately 5% as compared with chemotherapy alone. The 5-year survival rate, however, remains disappointingly low at ∼10–15%. Most commonly, TRT is combined with cisplatin and etoposide chemotherapy due to a superior toxicity profile as compared to anthracycline-containing chemotherapy regimens. As observed in locally advanced NSCLC, concurrent chemoradiotherapy is more effective than sequential chemoradiation but is associated with significantly more esophagitis and hematologic toxicity. Ideally, TRT should be administered with the first two cycles of chemotherapy because later application appears slightly less effective. If, for reasons of fitness or availability, this regimen cannot be offered, TRT should follow induction chemotherapy. With respect to fractionation of TRT, twice-daily 1.5-Gy fractionated radiation therapy has been shown to improve survival in LD-SCLC patients but is associated with higher rates of grade 3 esophagitis and pulmonary toxicity.
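As a consistency check on the twice-daily schedule just described, the arithmetic for the standard 45-Gy regimen used in LD-SCLC works out as follows (a simple sketch; the assumption of 5 treatment days per week is mine, not stated in the text):

```python
# Fractionation arithmetic for the standard LD-SCLC radiotherapy regimen:
# 45 Gy total, delivered in 1.5-Gy fractions, two fractions per day.
total_dose_gy = 45.0
dose_per_fraction_gy = 1.5

fractions = total_dose_gy / dose_per_fraction_gy  # total number of fractions
treatment_days = fractions / 2                    # two fractions per day
weeks = treatment_days / 5                        # assuming 5 treatment days per week

print(fractions, treatment_days, weeks)  # prints: 30.0 15.0 3.0
```

That is, 30 fractions over roughly 3 weeks of treatment, which is why twice-daily fractionation finishes the course so much faster than a once-daily schedule to the same dose.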
Although it is feasible to deliver once-daily radiation therapy doses up to 70 Gy concurrently with cisplatin-based chemotherapy, there are no data to support equivalency of this approach compared with the 45-Gy twice-daily radiotherapy dose. Therefore, the current standard regimen of a 45-Gy dose administered in 1.5-Gy fractions twice daily (30 fractions over approximately 3 weeks) is being compared with higher-dose regimens in two phase III trials, one in the United States and one in Europe. Patients should be carefully selected for concurrent chemoradiation therapy based on good performance status and adequate pulmonary reserve. The role of radiotherapy in ED-SCLC is largely restricted to palliation of tumor-related symptoms such as bone pain and bronchial obstruction. Prophylactic cranial irradiation (PCI) should be considered in all patients with either LD-SCLC or ED-SCLC who have responded well to initial therapy. A meta-analysis including seven trials and 987 patients with LD-SCLC who had achieved a complete remission after upfront chemotherapy yielded a 5.4% improvement in overall survival for patients treated with PCI. In patients with ED-SCLC who have responded to first-line chemotherapy, a prospective randomized phase III trial showed that PCI reduced the occurrence of symptomatic brain metastases and prolonged disease-free and overall survival compared to no radiation therapy. Long-term toxicities, including deficits in cognition, have been reported after PCI but are difficult to sort out from the effects of chemotherapy or normal aging. The management of NSCLC has undergone major change in the past decade. To a lesser extent, the same is true for SCLC. For patients with early-stage disease, advances in radiotherapy and surgical procedures as well as new systemic therapies have greatly improved prognosis in both diseases.
For patients with advanced disease, major progress in understanding tumor genetics has led to the development of targeted inhibitors based specifically on the tumor’s molecular profile. Furthermore, increased understanding of how to activate the immune system to drive antitumor immunity is proving to be a promising therapeutic strategy for some patients with advanced lung cancer. In Fig. 107-8, we propose an algorithm for the treatment approach in patients with stage IV NSCLC. However, the reality is that the majority of patients treated with targeted therapies or chemotherapy eventually develop resistance, which provides strong motivation for further research and enrollment of patients onto clinical trials in this rapidly evolving area.

FIGURE 107-8 Approach to first-line therapy in a patient with stage IV non-small-cell lung cancer (NSCLC). Obtain tissue (core biopsy of the most distant site of disease); determine histology; determine molecular status. Treatment options: EGFRmut, erlotinib or afatinib; ALK (+), crizotinib; adenocarcinoma with no mutation or a mutation for which there is no FDA-approved therapy, platinum-based chemotherapy ± bevacizumab; squamous carcinoma, cisplatin or carboplatin + gemcitabine, docetaxel, paclitaxel, or nab-paclitaxel; large-cell neuroendocrine carcinoma, platinum-based chemotherapy. EGFRmut, EGFR mutation; FDA, Food and Drug Administration.

Marc E. Lippman

Breast cancer is a malignant proliferation of epithelial cells lining the ducts or lobules of the breast. In the year 2014, about 180,000 cases of invasive breast cancer and 40,000 deaths will occur in the United States. In addition, about 2000 men will be diagnosed with breast cancer. Epithelial malignancies of the breast are the most common cause of cancer in women (excluding skin cancer), accounting for about one-third of all cancers in women. As a result of improved treatment and earlier detection, the mortality rate from breast cancer has begun to decrease very substantially in the United States.
This chapter will not consider rare malignancies presenting in the breast, such as sarcomas and lymphomas, but will focus on the epithelial cancers. Human breast cancer is a clonal disease; a single transformed cell—the product of a series of somatic (acquired) or germline mutations—is eventually able to express full malignant potential. Thus, breast cancer may exist for a long period as either a noninvasive disease or an invasive but nonmetastatic disease. These facts have significant clinical ramifications. No more than 10% of human breast cancers can be linked directly to germline mutations. Several genes have been implicated in familial cases. The Li-Fraumeni syndrome is characterized by inherited mutations in the p53 tumor-suppressor gene, which lead to an increased incidence of breast cancer, osteogenic sarcomas, and other malignancies. Inherited mutations in PTEN have also been reported in breast cancer. Another tumor-suppressor gene, BRCA1, has been identified at the chromosomal locus 17q21; this gene encodes a zinc finger protein, the product of which functions as a transcription factor and is involved in DNA repair. Women who inherit a mutated allele of this gene from either parent have at least a 60–80% lifetime chance of developing breast cancer and about a 33% chance of developing ovarian cancer. The risk is higher among women born after 1940, presumably due to promotional effects of hormonal factors. Men who carry a mutant allele of the gene have an increased incidence of prostate cancer and breast cancer. A fourth gene, termed BRCA2, which has been localized to chromosome 13q12, is also associated with an increased incidence of breast cancer in men and women. Germline mutations in BRCA1 and BRCA2 can be readily detected; patients with these mutations should be counseled appropriately.
All women with strong family histories of breast cancer should be referred to genetic screening programs, particularly women of Ashkenazi Jewish descent, who have a high likelihood of a specific founder BRCA1 mutation (deletion of adenine and guanine at position 185, 185delAG). Even more important than the role these genes play in inherited forms of breast cancer may be their role in sporadic breast cancer. A p53 mutation is present in nearly 40% of human breast cancers as an acquired defect. Acquired mutations in PTEN occur in about 10% of cases. BRCA1 mutation in sporadic primary breast cancer has not been reported. However, decreased expression of BRCA1 mRNA (possibly via gene methylation) and abnormal cellular location of the BRCA1 protein have been found in some breast cancers. Loss of heterozygosity of BRCA1 and BRCA2 suggests that tumor-suppressor activity may be inactivated in sporadic cases of human breast cancer. Finally, increased expression of a dominant oncogene plays a role in about a quarter of human breast cancer cases. The product of this gene, a member of the epidermal growth factor receptor superfamily, is called erbB2 (HER2/neu) and is overexpressed in these breast cancers due to gene amplification; this overexpression can contribute to transformation of human breast epithelium and is the target of effective systemic therapy in adjuvant and metastatic disease settings. A series of acquired “driver” mutations have been identified in sporadic breast cancer by major sequencing consortia. Unfortunately, most occur in no more than 5% of cases and generally do not have effective agents to target them, so “personalized medicine” is for now more of a dream than a reality. Breast cancer is a hormone-dependent disease. Women without functioning ovaries who never receive estrogen replacement therapy do not develop breast cancer. The female-to-male ratio is about 150:1.
For most epithelial malignancies, a log-log plot of incidence versus age shows a single-component straight-line increase with every year of life. A similar plot for breast cancer shows two components: a straight-line increase with age but with a decrease in slope beginning at the age of menopause. The three dates in a woman’s life that have a major impact on breast cancer incidence are age at menarche, age at first full-term pregnancy, and age at menopause. Women who experience menarche at age 16 years have only 50–60% of the breast cancer risk of a woman having menarche at age 12 years; the lower risk persists throughout life. Similarly, menopause occurring 10 years before the median age of menopause (52 years), whether natural or surgically induced, reduces lifetime breast cancer risk by about 35%. Women who have a first full-term pregnancy by age 18 years have a 30–40% lower risk of breast cancer compared with nulliparous women. Thus, length of menstrual life—particularly the fraction occurring before first full-term pregnancy—is a substantial component of the total risk of breast cancer. These three factors (menarche, age of first full-term pregnancy, and menopause) can account for 70–80% of the variation in breast cancer frequency in different countries. Also, duration of maternal nursing correlates with substantial risk reduction independent of either parity or age at first full-term pregnancy. International variation in incidence has provided some of the most important clues on hormonal carcinogenesis. A woman living to age 80 years in North America has one chance in nine of developing invasive breast cancer. Asian women have one-fifth to one-tenth the risk of breast cancer of women in North America or Western Europe. Asian women have substantially lower concentrations of estrogens and progesterone. 
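The reproductive-timing effects quoted above can be read as multiplicative risk fractions. The sketch below combines them using midpoints of the quoted ranges; the independence assumption behind multiplying the factors is mine, not the text's, and the function and parameter names are illustrative:

```python
def relative_risk(menarche_at_16=False, menopause_10y_early=False,
                  first_pregnancy_by_18=False):
    """Illustrative relative breast cancer risk vs. a reference woman
    (menarche at age 12, median-age menopause, no early first birth).

    Midpoints of the ranges quoted in the text:
      menarche at 16 years        -> ~0.55x (50-60% of the reference risk)
      menopause 10 years early    -> ~0.65x (about a 35% reduction)
      first full-term birth by 18 -> ~0.65x (30-40% lower risk)
    Multiplying the factors assumes they act independently, which the
    text does not assert.
    """
    risk = 1.0
    if menarche_at_16:
        risk *= 0.55
    if menopause_10y_early:
        risk *= 0.65
    if first_pregnancy_by_18:
        risk *= 0.65
    return risk

# A woman with all three protective factors, under these assumptions:
print(relative_risk(True, True, True))
```

Under these (illustrative) assumptions, all three factors together would leave roughly a quarter of the reference risk, consistent with the text's claim that the three dates account for much of the international variation in breast cancer frequency.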
These differences cannot be explained on a genetic basis because Asian women living in a Western environment have sex steroid hormone concentrations and risks identical to those of their Western counterparts. These migrant women, and more notably their daughters, also differ markedly in height and weight from Asian women in Asia; height and weight are critical regulators of age of menarche and have substantial effects on plasma concentrations of estrogens. The role of diet in breast cancer etiology is controversial. While there are associative links between total caloric and fat intake and breast cancer risk, the exact role of fat in the diet is unproven. Increased caloric intake contributes to breast cancer risk in multiple ways: earlier menarche, later age at menopause, and increased postmenopausal estrogen concentrations reflecting enhanced aromatase activity in fatty tissues. In addition, central obesity is a risk factor for both occurrence and recurrence of breast cancer. Moderate alcohol intake also increases the risk by an unknown mechanism. Folic acid supplementation appears to modify risk in women who use alcohol but is not additionally protective in abstainers. Recommendations favoring abstinence from alcohol must be weighed against other social pressures and the possible cardioprotective effect of moderate alcohol intake. Chronic low-dose aspirin use is associated with a decreased incidence of breast cancer. Depression is also associated with both occurrence and recurrence of breast cancer. Understanding the potential role of exogenous hormones in breast cancer is of extraordinary importance because millions of American women regularly use oral contraceptives and postmenopausal hormone replacement therapy. The most credible meta-analyses of oral contraceptive use suggest that these agents cause a small increased risk of breast cancer.
By contrast, oral contraceptives offer a substantial protective effect against ovarian epithelial tumors and endometrial cancers. Hormone replacement therapy (HRT) has a powerful effect on breast cancer risk. Data from the Women’s Health Initiative (WHI) trial showed that conjugated equine estrogens plus progestins increased the risk of breast cancer and adverse cardiovascular events but decreased the risk of bone fractures and colorectal cancer. On balance, there were more negative events with HRT; 6–7 years of HRT nearly doubled the risk of breast cancer. A parallel WHI trial of conjugated estrogens alone (estrogen replacement therapy in women who have had hysterectomies), with >12,000 women enrolled, showed no significant increase in breast cancer incidence. Thus, there are serious concerns about long-term HRT use with respect to cardiovascular disease and breast cancer. The WHI trial of conjugated equine estrogen alone demonstrated few adverse effects for women age <70 years; however, no comparable safety data are available for other, more potent forms of estrogen replacement, and they should not be routinely used as substitutes. HRT in women previously diagnosed with breast cancer increases recurrence rates. The rapid decrease in the number of women on HRT has already led to a coincident decrease in breast cancer incidence. In addition to the other factors, radiation is a risk factor in younger women. Women who have been exposed before age 30 years to radiation in the form of multiple fluoroscopies (200–300 cGy) or treatment for Hodgkin’s disease (>3600 cGy) have a substantial increase in the risk of breast cancer, whereas radiation exposure after age 30 years appears to have a minimal carcinogenic effect on the breast. Because the breasts are a common site of potentially fatal malignancy in women, examination of the breast is an essential part of the physical examination.
Unfortunately, internists frequently do not examine breasts in men, and in women, they are apt to defer this evaluation to gynecologists. Because of the plausible association between early detection and improved outcome, it is the duty of every physician to identify breast abnormalities at the earliest possible stage and to institute a diagnostic workup. Women should be trained in breast self-examination (BSE). Although breast cancer in men is unusual, unilateral lesions should be evaluated in the same manner as in women, with the recognition that gynecomastia in men can sometimes begin unilaterally and is often asymmetric. Virtually all breast cancer is diagnosed by biopsy of a nodule detected either on a mammogram or by palpation. Algorithms have been developed to enhance the likelihood of diagnosing breast cancer and reduce the frequency of unnecessary biopsy (Fig. 108-1). Women should be strongly encouraged to examine their breasts monthly. A potentially flawed study from China has suggested that BSE does not alter survival, but given its safety, the procedure should still be encouraged. At worst, this practice increases the likelihood of detecting a mass at a smaller size when it can be treated with more limited surgery. Breast examination by the physician should be performed in good light so as to see retractions and other skin changes. The nipple and areolae should be inspected, and an attempt should be made to elicit nipple discharge. All regional lymph node groups should be examined, and any lesions should be measured. Physical examination alone cannot exclude malignancy. Lesions with certain features are more likely to be cancerous (hard, irregular, tethered or fixed, or painless lesions). A negative mammogram in the presence of a persistent lump in the breast does not exclude malignancy. Palpable lesions require additional diagnostic procedures, including biopsy. 
FIGURE 108-1 Approach to a palpable breast mass.

In premenopausal women, lesions that are either equivocal or nonsuspicious on physical examination should be reexamined in 2–4 weeks, during the follicular phase of the menstrual cycle. Days 5–7 of the cycle are the best time for breast examination. A dominant mass in a postmenopausal woman, or a dominant mass that persists through a menstrual cycle in a premenopausal woman, should be aspirated by fine-needle biopsy or referred to a surgeon. If nonbloody fluid is aspirated, the diagnosis (cyst) and therapy have been accomplished together. Solid lesions that are persistent, recurrent, complex, or bloody cysts require mammography and biopsy, although in selected patients the so-called triple diagnostic technique (palpation, mammography, aspiration) can be used to avoid biopsy (Figs. 108-1, 108-2, and 108-3). Ultrasound can be used in place of fine-needle aspiration to distinguish cysts from solid lesions. Not all solid masses are detected by ultrasound; thus, a palpable mass that is not visualized on ultrasound must be presumed to be solid. Several points are essential in pursuing these management decision trees. First, risk-factor analysis is not part of the decision structure. No constellation of risk factors, by their presence or absence, can be used to exclude biopsy. Second, fine-needle aspiration should be used only in centers that have proven skill in obtaining such specimens and analyzing them. The likelihood of cancer is low in the setting of a “triple negative” (benign-feeling lump, negative mammogram, and negative fine-needle aspiration), but it is not zero.
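The decision logic for a palpable mass described above (Fig. 108-1) can be sketched as a small triage function. This is an illustrative simplification of the text's decision tree, not clinical guidance; the status strings and function name are mine:

```python
def palpable_mass_workup(premenopausal, mass_persists=True, aspirate=None):
    """Illustrative triage of a dominant palpable breast mass per the text.

    aspirate: None if aspiration has not yet been performed, otherwise
    "nonbloody_fluid" (a cyst) or "solid".
    """
    if premenopausal and not mass_persists:
        # Questionable premenopausal mass that resolves on re-examination
        # during the follicular phase: return to routine screening.
        return "routine screening"
    if aspirate is None:
        # Dominant mass in a postmenopausal woman, or one persisting through
        # a menstrual cycle: fine-needle aspiration or surgical referral.
        return "aspirate or refer to surgeon"
    if aspirate == "nonbloody_fluid":
        # Nonbloody fluid: diagnosis (cyst) and therapy accomplished together.
        return "cyst: no further workup unless recurrent or complex"
    # Solid, bloody, recurrent, or complex lesions.
    return "mammography and biopsy (triple diagnosis in selected patients)"
```

Note that, as the text stresses, risk-factor analysis appears nowhere in this tree: no combination of risk factors can be used to exclude biopsy.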
The patient and physician must be aware of a 1% risk of false negatives. Third, additional technologies such as magnetic resonance imaging (MRI), ultrasound, and sestamibi imaging cannot be used to exclude the need for biopsy, although in unusual circumstances, they may provoke a biopsy.

FIGURE 108-2 The “triple diagnosis” technique.

FIGURE 108-3 Management of a breast cyst.

Diagnostic mammography, which is performed after a palpable abnormality has been detected, should not be confused with screening mammography. Diagnostic mammography is aimed at evaluating the rest of the breast before biopsy is performed or occasionally is part of the triple-test strategy to exclude immediate biopsy. Subtle abnormalities that are first detected by screening mammography should be evaluated carefully by compression or magnified views. These abnormalities include clustered microcalcifications, densities (especially if spiculated), and new or enlarging architectural distortion. For some nonpalpable lesions, ultrasound may be helpful either to identify cysts or to guide biopsy. If there is no palpable lesion and detailed mammographic studies are unequivocally benign, the patient should have routine follow-up appropriate to the patient’s age. It cannot be stressed too strongly that in the presence of a breast lump a negative mammogram does not rule out cancer. If a nonpalpable mammographic lesion has a low index of suspicion, mammographic follow-up in 3–6 months is reasonable. Workup of indeterminate and suspicious lesions has been rendered more complex by the advent of stereotactic biopsies. Morrow and colleagues have suggested that these procedures are indicated for lesions that require biopsy but are likely to be benign—that is, for cases in which the procedure probably will eliminate additional surgery. When a lesion is more probably malignant, open biopsy should be performed with a needle localization technique.
Others have proposed more widespread use of stereotactic core biopsies for nonpalpable lesions on economic grounds and because diagnosis leads to earlier treatment planning. However, stereotactic diagnosis of a malignant lesion does not eliminate the need for definitive surgical procedures, particularly if breast conservation is attempted. For example, after a breast biopsy with needle localization (i.e., local excision) of a stereotactically diagnosed malignancy, reexcision may still be necessary to achieve negative margins. To some extent, these issues are decided on the basis of referral pattern and the availability of the resources for stereotactic core biopsies. A reasonable approach is shown in Fig. 108-4.

FIGURE 108-4 Approaches to abnormalities detected by mammogram. (Mammographic abnormality: additional studies including spot magnification, oblique views, aspiration, and ultrasound as indicated.)

During pregnancy, the breast grows under the influence of estrogen, progesterone, prolactin, and human placental lactogen. Lactation is suppressed by progesterone, which blocks the effects of prolactin. After delivery, lactation is promoted by the fall in progesterone levels, which leaves the effects of prolactin unopposed. The development of a dominant mass during pregnancy or lactation should never be attributed to hormonal changes. A dominant mass must be treated with the same concern in a pregnant woman as in any other. Breast cancer develops in 1 in every 3000–4000 pregnancies. Stage for stage, breast cancer in pregnant patients is no different from premenopausal breast cancer in nonpregnant patients. However, pregnant women often have more advanced disease because the significance of a breast mass was not fully considered and/or because of endogenous hormone stimulation.
Persistent lumps in the breast of pregnant or lactating women cannot be attributed to benign changes based on physical findings; such patients should be promptly referred for diagnostic evaluation. Only about 1 in every 5–10 breast biopsies leads to a diagnosis of cancer, although the rate of positive biopsies varies in different countries and clinical settings. (These differences may be related to interpretation, medico-legal considerations, and availability of mammograms.) The vast majority of benign breast masses are due to “fibrocystic” disease, a descriptive term for small fluid-filled cysts and modest epithelial cell and fibrous tissue hyperplasia. However, fibrocystic disease is a histologic, not a clinical, diagnosis, and women who have had a biopsy with benign findings are at greater risk of developing breast cancer than those who have not had a biopsy. The subset of women with ductal or lobular cell proliferation (about 30% of patients), particularly the small fraction (3%) with atypical hyperplasia, have a fourfold greater risk of developing breast cancer than those women who have not had a biopsy, and the increase in the risk is about ninefold for women in this category who also have an affected first-degree relative. Thus, careful follow-up of these patients is required. By contrast, patients with a benign biopsy without atypical hyperplasia are at little risk and may be followed routinely. Breast cancer is virtually unique among the epithelial tumors in adults in that screening (in the form of annual mammography) improves survival. Meta-analysis examining outcomes from every randomized trial of mammography conclusively shows a 25–30% reduction in the chance of dying from breast cancer with annual screening after age 50 years; the data for women between ages 40 and 50 years are almost as positive; however, since the incidence is much lower in younger women, there are more false positives. 
While controversy continues to surround the assessment of screening mammography, the preponderance of data strongly supports its benefits. New analyses of older randomized studies have occasionally suggested that screening may not work. While the design defects in some older studies cannot be retrospectively corrected, most experts, including panels of the American Society of Clinical Oncology and the American Cancer Society (ACS), continue to believe that screening conveys substantial benefit. Furthermore, the profound drop in the breast cancer mortality rate seen over the past decade is unlikely to be solely attributable to improvements in therapy. It seems prudent to recommend annual or biannual mammography for women past the age of 40 years. Although no randomized study of breast self-examination (BSE) has ever shown any improvement in survival, its major benefit is identification of tumors appropriate for conservative local therapy. Better mammographic technology, including digitized mammography, routine use of magnified views, and greater skill in mammographic interpretation, combined with newer diagnostic techniques (MRI, magnetic resonance spectroscopy, positron emission tomography, etc.), may make it possible to identify breast cancers even more reliably and earlier. Screening by any technique other than mammography is not indicated. However, the ACS suggests that younger women who are BRCA1 or BRCA2 carriers or untested first-degree relatives of women with cancer; women with a history of radiation therapy to the chest between ages 10 and 30 years; women with a lifetime risk of breast cancer of at least 20%; and women with a history of Li-Fraumeni, Cowden, or Bannayan-Riley-Ruvalcaba syndromes may benefit from MRI screening, where the higher sensitivity may outweigh the loss of specificity. Correct staging of breast cancer patients is of extraordinary importance.
Not only does it permit an accurate prognosis, but in many cases, therapeutic decision-making is based largely on the TNM (primary tumor, regional nodes, metastasis) classification (Table 108-1). Comparison with historic series should be undertaken with caution, as the staging has changed several times in the past 20 years. The current staging is complex and results in significant changes in outcome by stage as compared with prior staging systems. One of the most exciting aspects of breast cancer biology has been its subdivision into at least five subtypes based on gene expression profiling.
1. Luminal A: The luminal tumors express cytokeratins 8 and 18, have the highest levels of estrogen receptor expression, tend to be low grade, are most likely to respond to endocrine therapy, and have a favorable prognosis. They tend to be less responsive to chemotherapy.
2. Luminal B: Tumor cells are also of luminal epithelial origin, but with a gene expression pattern distinct from luminal A. Prognosis is somewhat worse than for luminal A.
3. Normal breast–like: These tumors have a gene expression profile reminiscent of nonmalignant “normal” breast epithelium. Prognosis is similar to the luminal B group. This subtype is somewhat controversial and may represent contamination of the sample by normal mammary epithelium.
4. HER2 amplified: These tumors have amplification of the HER2 gene on chromosome 17q and frequently exhibit coamplification and overexpression of other genes adjacent to HER2. Historically the clinical prognosis of such tumors was poor. However, with the advent of trastuzumab and other targeted therapies, the clinical outcome of HER2-positive patients is markedly improving.
5. Basal: These estrogen receptor/progesterone receptor–negative and HER2-negative tumors (so-called triple negative) are characterized by markers of basal/myoepithelial cells.
They tend to be high grade, and express cytokeratins 5/6 and 17 as well as vimentin, p63, CD10, α-smooth muscle actin, and epidermal growth factor receptor (EGFR). Patients with BRCA mutations also fall within this molecular subtype. They also have stem cell characteristics.

TABLE 108-1 (excerpt: primary tumor [T] and distant metastasis [M] definitions)
T0: No evidence of primary tumor
TIS: Carcinoma in situ
T1: Tumor ≤2 cm
T1a: Tumor >0.1 cm but ≤0.5 cm
T1b: Tumor >0.5 cm but ≤1 cm
T1c: Tumor >1 cm but ≤2 cm
T2: Tumor >2 cm but ≤5 cm
T3: Tumor >5 cm
T4: Extension to chest wall, inflammation, satellite lesions, ulcerations
M0: No distant metastasis
M1: Distant metastasis (includes spread to ipsilateral supraclavicular nodes)
aClinically apparent is defined as detected by imaging studies (excluding lymphoscintigraphy) or by clinical examination. Abbreviations: IHC, immunohistochemistry; RT-PCR, reverse transcriptase polymerase chain reaction. Source: Used with permission of the American Joint Committee on Cancer (AJCC), Chicago, Illinois. The original source for this material is the AJCC Cancer Staging Manual, 7th ed. New York, Springer, 2010; www.springeronline.com.

Breast-conserving treatments, consisting of the removal of the primary tumor by some form of lumpectomy with or without irradiating the breast, result in a survival that is as good as (or slightly superior to) that after extensive surgical procedures, such as mastectomy or modified radical mastectomy, with or without further irradiation. Postlumpectomy breast irradiation greatly reduces the risk of recurrence in the breast. While breast conservation is associated with a possibility of recurrence in the breast, 10-year survival is at least as good as that after more extensive surgery. Postoperative radiation to regional nodes following mastectomy is also associated with an improvement in survival. Because radiation therapy can also reduce the rate of local or regional recurrence, it should be strongly considered following mastectomy for women with high-risk primary tumors (i.e., T2 in size, positive margins, positive nodes). At present, nearly one-third of women in the United States are managed by lumpectomy. Breast-conserving surgery is not suitable for all patients: it is not generally suitable for tumors >5 cm (or for smaller tumors if the breast is small), for tumors involving the nipple-areola complex, for tumors with extensive intraductal disease involving multiple quadrants of the breast, for women with a history of collagen-vascular disease, and for women who either do not have the motivation for breast conservation or do not have convenient access to radiation therapy. However, these groups probably do not account for more than one-third of patients who are treated with mastectomy. Thus, a great many women still undergo mastectomy who could safely avoid this procedure and probably would if appropriately counseled. Sentinel lymph node biopsy (SLNB) is generally the standard of care for women with localized breast cancer and clinically negative axilla. If SLNB is negative, more extensive axillary surgery is not required, avoiding much of the risk of lymphedema following more extensive axillary dissections. In the presence of minimal involvement of a sentinel lymph node, further axillary surgery is not required. An extensive intraductal component is a predictor of recurrence in the breast, and so are several clinical variables. Both axillary lymph node involvement and involvement of vascular or lymphatic channels by metastatic tumor in the breast are associated with a higher risk of relapse in the breast but are not contraindications to breast-conserving treatment.
When these patients are excluded, and when lumpectomy with negative tumor margins is achieved, breast conservation is associated with a recurrence rate in the breast of 5% or less. The survival of patients who have recurrence in the breast is somewhat worse than that of women who do not. Thus, recurrence in the breast is a negative prognostic variable for long-term survival. However, recurrence in the breast is not the cause of distant metastasis. If recurrence in the breast caused metastatic disease, then women treated with lumpectomy, who have a higher rate of recurrence in the breast, should have poorer survival than women treated with mastectomy, and they do not. Most patients should consult with a radiation oncologist before making a final decision concerning local therapy. However, a multimodality clinic in which the surgeon, radiation oncologist, medical oncologist, and other caregivers cooperate to evaluate the patient and develop a treatment plan is usually considered a major advantage by patients. Adjuvant Therapy The use of systemic therapy after local management of breast cancer substantially improves survival. More than half of the women who would otherwise die of metastatic breast cancer remain disease-free when treated with the appropriate systemic regimen. These data have grown more and more impressive with longer follow-up and more effective regimens. Prognostic Variables The most important prognostic variables are provided by tumor staging. The size of the tumor and the status of the axillary lymph nodes provide reasonably accurate information on the likelihood of tumor relapse. The relation of pathologic stage to 5-year survival is shown in Table 108-2. For most women, the need for adjuvant therapy can be readily defined on this basis alone. In the absence of lymph node involvement, involvement of microvessels (either capillaries or lymphatic channels) in tumors is nearly equivalent to lymph node involvement.
The greatest controversy concerns women with intermediate prognoses. There is rarely justification for adjuvant chemotherapy in most women with tumors <1 cm in size whose axillary lymph nodes are negative. HER2-positive tumors are a potential exception. Detection of breast cancer cells either in the circulation or in the bone marrow is associated with an increased relapse rate. The most exciting development in this area is the use of gene expression arrays to analyze patterns of tumor gene expression. Several groups have independently defined gene sets, including the Oncotype DX® analysis of 21 genes, that reliably predict disease-free and overall survival far more accurately than any single prognostic variable. The use of standardized risk-assessment tools such as Adjuvant! Online (www.adjuvantonline.com) is also very helpful. These tools are highly recommended in otherwise ambiguous circumstances.

TABLE 108-2 (pathologic stage and 5-year survival, %) Source: Modified from data of the National Cancer Institute: Surveillance, Epidemiology, and End Results (SEER).

Estrogen receptor status and progesterone receptor status are of prognostic significance. Tumors that lack either or both of these receptors are more likely to recur than tumors that have them. Several measures of tumor growth rate correlate with early relapse. S-phase analysis using flow cytometry is the most accurate measure. Indirect S-phase assessments using antigens associated with the cell cycle, such as PCNA (Ki67), are also valuable. Tumors with a high proportion (more than the median) of cells in S-phase pose a greater risk of relapse; chemotherapy offers the greatest survival benefit for these tumors. Assessment of DNA content in the form of ploidy is of modest value, with nondiploid tumors having a somewhat worse prognosis. Histologic classification of the tumor has also been used as a prognostic factor. Tumors with a poor nuclear grade have a higher risk of recurrence than tumors with a good nuclear grade.
Semiquantitative measures such as the Elston score improve the reproducibility of this measurement. Molecular changes in the tumor are also useful. Tumors that overexpress erbB2 (HER2/neu) or have a mutated p53 gene have a worse prognosis. Particular interest has centered on erbB2 overexpression as measured by immunohistochemistry or fluorescence in situ hybridization. Tumors that overexpress erbB2 are more likely to respond to doxorubicin-containing regimens; erbB2 overexpression also predicts those tumors that will respond to HER2/neu antibodies (trastuzumab [Herceptin]) and HER2/neu kinase inhibitors. Other variables that have also been used to evaluate prognosis include proteins associated with invasiveness, such as type IV collagenase, cathepsin D, plasminogen activator, plasminogen activator receptor, and the metastasis-suppressor gene nm23. None of these has been widely accepted as a prognostic variable for therapeutic decision-making. One problem in interpreting these prognostic variables is that most of them have not been examined in a study using a large cohort of patients. Adjuvant Regimens Adjuvant therapy is the use of systemic therapies in patients whose known disease has received local therapy but who are at risk of relapse. Selection of appropriate adjuvant chemotherapy or hormone therapy is highly controversial in some situations. Meta-analyses have helped to define broad limits for therapy but do not help in choosing optimal regimens or in choosing a regimen for certain subgroups of patients. A summary of recommendations is shown in Table 108-3. In general, premenopausal women for whom any form of adjuvant systemic therapy is indicated should receive multidrug chemotherapy. Antihormone therapy improves survival in premenopausal patients who are estrogen receptor–positive and should be added following completion of chemotherapy.
Prophylactic surgical or medically induced castration may also be associated with a substantial survival benefit (primarily in estrogen receptor–positive patients) but is not widely used in this country. Data on postmenopausal women are also controversial. The impact of adjuvant chemotherapy is quantitatively less clear-cut than in premenopausal patients, particularly in estrogen receptor–positive cases, although survival advantages have been shown. The first decision is whether chemotherapy or endocrine therapy should be used. While adjuvant endocrine therapy (aromatase inhibitors and tamoxifen) improves survival regardless of axillary lymph node status, the improvement in survival is modest for patients in whom multiple lymph nodes are involved. For this reason, it has been usual to give chemotherapy to postmenopausal patients who have no medical contraindications and who have more than one positive lymph node; hormone therapy is commonly given subsequently. For postmenopausal women for whom systemic therapy is warranted but who have a more favorable prognosis (based more commonly on analysis such as the Oncotype DX methodology), hormone therapy may be used alone. Large clinical trials have shown superiority for aromatase inhibitors over tamoxifen alone in the adjuvant setting, although tamoxifen appears essentially equivalent in women who are obese and therefore presumably have higher endogenous concentrations of estrogen. Unfortunately, the optimal plan is unclear. Tamoxifen for 5 years followed by an aromatase inhibitor, the reverse strategy, or even switching to an aromatase inhibitor after 2–3 years of tamoxifen has been shown to be better than tamoxifen alone. Continuation of tamoxifen for 10 years yields further benefit and is a reasonable decision for women with less favorable prognoses.
Unfortunately, multiple studies have revealed very suboptimal adherence to long-term adjuvant endocrine regimens, and every effort should be made to encourage their continuous use. No valid information currently permits selection among the three clinically approved aromatase inhibitors. Concomitant use of bisphosphonates is almost always warranted; however, it is not yet settled whether their prophylactic use increases survival beyond decreasing recurrences in bone. Most comparisons of adjuvant chemotherapy regimens show little difference among them, although small advantages for doxorubicin-containing regimens and “dose-dense” regimens are usually seen. One approach—so-called neoadjuvant chemotherapy—involves the administration of adjuvant therapy before definitive surgery and radiation therapy. Because the objective response rates of patients with breast cancer to systemic therapy in this setting exceed 75%, many patients will be “downstaged” and may become candidates for breast-conserving therapy. However, overall survival has not been improved using this approach as compared with the same drugs given postoperatively. Patients who achieve a pathologic complete remission after neoadjuvant chemotherapy, not unexpectedly, have a substantially improved survival. The neoadjuvant setting also provides a valuable opportunity for the evaluation of new agents. For example, a second HER2-targeting antibody, pertuzumab, has been shown to provide additional benefit when combined with trastuzumab in the neoadjuvant setting. Other adjuvant treatments under investigation include the use of taxanes, such as paclitaxel and docetaxel, and therapy based on alternative kinetic and biologic models. In such approaches, high doses of single agents are used separately in relatively dose-intensive cycling regimens.
Node-positive patients treated with doxorubicin-cyclophosphamide for four cycles followed by four cycles of a taxane have a substantial improvement in survival compared with women receiving doxorubicin-cyclophosphamide alone, particularly in women with estrogen receptor–negative tumors. In addition, administration of the same drug combinations at the same dose but at more frequent intervals (every 2 weeks with cytokine support, as compared with the standard every 3 weeks) is even more effective. Among the 25% of women whose tumors overexpress HER2/neu, addition of trastuzumab given concurrently with a taxane and then for a year after chemotherapy produces a significant improvement in survival. Although longer follow-up will be important, this is now the standard of care for most women with HER2/neu-positive breast cancers. Cardiotoxicity, immediate and long-term, remains a concern, and further efforts to exploit non-anthracycline-containing regimens are being pursued. Very-high-dose therapy with stem cell transplantation in the adjuvant setting has not proved superior to standard-dose therapy and should not be routinely used. A variety of exciting approaches are close to adoption, and the literature needs to be followed attentively. Tyrosine kinase inhibitors such as lapatinib and additional HER2-targeting antibodies such as pertuzumab are very promising. Finally, as described in the next section, a novel class of agents targeting DNA repair—the so-called poly–ADP ribose polymerase (PARP) inhibitors—is likely to have a major effect on breast cancers either caused by BRCA1 or BRCA2 mutations or sharing similar defects in DNA repair in their etiology. About one-third of patients treated for apparently localized breast cancer develop metastatic disease. Although a small number of these patients enjoy long remissions when treated with combinations of systemic and local therapy, most eventually succumb to metastatic disease.
The median survival for all patients diagnosed with metastatic breast cancer is less than 3 years. Soft tissue, bony, and visceral (lung and liver) metastases each account for approximately one-third of sites of initial relapse. However, by the time of death, most patients will have bony involvement. Recurrences can appear at any time after primary therapy. A cruel fact about breast cancer is that at least half of all recurrences occur >5 years after initial therapy. It is now clear that a variety of host factors can influence recurrence rates, including depression and central obesity, and these conditions should be managed as aggressively as possible. Because the diagnosis of metastatic disease alters the outlook for the patient so drastically, it should rarely be made without a confirmatory biopsy. Every oncologist has seen patients with tuberculosis, gallstones, sarcoidosis, or other nonmalignant diseases misdiagnosed and treated as though they had metastatic breast cancer, or second malignancies such as multiple myeloma mistaken for recurrent breast cancer. This is a catastrophic mistake and justifies biopsy for virtually every patient at the time of initial suspicion of metastatic disease. Furthermore, well-documented changes in hormone receptor status can occur and substantially alter treatment decisions. The choice of therapy requires consideration of local therapy needs, the overall medical condition of the patient, and the hormone receptor status of the tumor, as well as clinical judgment. Because therapy of systemic disease is palliative, the potential toxicities of therapies should be balanced against the response rates. Several variables influence the response to systemic therapy. For example, the presence of estrogen and progesterone receptors is a strong indication for endocrine therapy.
On the other hand, patients with short disease-free intervals, rapidly progressive visceral disease, lymphangitic pulmonary disease, or intracranial disease are unlikely to respond to endocrine therapy. In many cases, systemic therapy can be withheld while the patient is managed with appropriate local therapy. Radiation therapy and occasionally surgery are effective at relieving the symptoms of metastatic disease, particularly when bony sites are involved. Many patients with bone-only or bone-dominant disease have a relatively indolent course. Under such circumstances, systemic chemotherapy has a modest effect, whereas radiation therapy may be effective for long periods. Other systemic treatments, such as strontium-89 and/or bisphosphonates, may provide a palliative benefit without inducing objective responses. Most patients with metastatic disease, and certainly all who have bone involvement, should receive concurrent bisphosphonates. Because the goal of therapy is to maintain well-being for as long as possible, emphasis should be placed on avoiding the most hazardous complications of metastatic disease, including pathologic fracture of the axial skeleton and spinal cord compression. New back pain in patients with cancer should be explored aggressively on an emergent basis; to wait for neurologic symptoms is a potentially catastrophic error. Metastatic involvement of endocrine organs can cause profound dysfunction, including adrenal insufficiency and hypopituitarism. Similarly, obstruction of the biliary tree or other impaired organ function may be better managed with a local therapy than with a systemic approach. Many patients are inappropriately treated with toxic regimens into their last days of life. Often oncologists are unwilling to have the difficult conversations that are required with patients nearing the end of life, and not uncommonly, patients and families can pressure physicians into treatments with very little survival value.
Palliative care consultation and realistic assessment of treatment expectations need to be reviewed with patients and families. We urge consideration of palliative care consultations for patients who have received at least two lines of therapy for metastatic disease. Endocrine Therapy Normal breast tissue is estrogen dependent. Both primary and metastatic breast cancer may retain this phenotype. The best means of ascertaining whether a breast cancer is hormone dependent is through analysis of estrogen and progesterone receptor levels on the tumor. Tumors that are positive for the estrogen receptor and negative for the progesterone receptor have a response rate of ∼30%. Tumors that are positive for both receptors have a response rate approaching 70%. If neither receptor is present, the objective response rates are <5%. Receptor analyses provide information as to the correct ordering of endocrine therapies as opposed to chemotherapy. Because of their lack of toxicity and because some patients whose receptor analyses are reported as negative respond to endocrine therapy, an endocrine treatment should be attempted in virtually every patient with metastatic breast cancer. Potential endocrine therapies are summarized in Table 108-4. The choice of endocrine therapy is usually determined by toxicity profile and availability. In most postmenopausal patients, the initial endocrine therapy should be an aromatase inhibitor rather than tamoxifen. For the subset of postmenopausal women who are estrogen receptor–positive but also HER2/neu-positive, response rates to aromatase inhibitors are substantially higher than to tamoxifen. Aromatase inhibitors are not used in premenopausal women because their hypothalamus can respond to estrogen deprivation by producing gonadotropins that promote estrogen synthesis. Newer “pure” antiestrogens that are free of agonistic effects are also effective. 
Cases in which tumors shrink in response to tamoxifen withdrawal (as well as withdrawal of pharmacologic doses of estrogens) have been reported. A series of studies with aromatase inhibitors, tamoxifen, and fulvestrant have all shown that the addition of everolimus (an mTOR inhibitor) to the hormonal treatment can lead to significant benefit after progression on the endocrine agent alone. Everolimus in combination with endocrine agents is now being explored as front-line therapy and in the adjuvant setting. Endogenous estrogen formation may be blocked by analogues of luteinizing hormone–releasing hormone (LHRH) in premenopausal women. Additive endocrine therapies, including treatment with progestogens, estrogens, and androgens, may also be tried in patients who respond to initial endocrine therapy; the mechanism of action of these latter therapies is unknown. Patients who respond to one endocrine therapy have at least a 50% chance of responding to a second endocrine therapy. It is not uncommon for patients to respond to two or three sequential endocrine therapies; however, combination endocrine therapies do not appear to be superior to individual agents, and combinations of chemotherapy with endocrine therapy are not useful. The median survival of patients with metastatic disease is approximately 2 years, although many patients, particularly older persons and those with hormone-dependent disease, may respond to endocrine therapy for 3–5 years or longer. Chemotherapy Unlike many other epithelial malignancies, breast cancer responds to multiple chemotherapeutic agents, including anthracyclines, alkylating agents, taxanes, and antimetabolites. Multiple combinations of these agents have been found to improve response rates somewhat, but they have had little effect on duration of response or survival.
The choice among multidrug combinations frequently depends on whether adjuvant chemotherapy was administered and, if so, what type. Although patients treated with adjuvant regimens such as cyclophosphamide, methotrexate, and fluorouracil (CMF regimens) may subsequently respond to the same combination in the metastatic disease setting, most oncologists use drugs to which the patients have not been previously exposed. Once patients have progressed after combination drug therapy, it is most common to treat them with single agents. Given the significant toxicity of most drugs, the use of a single effective agent minimizes toxicity by sparing the patient exposure to drugs that would be of little value. No method to select the drugs most efficacious for a given patient has been demonstrated to be useful. Most oncologists use either an anthracycline or paclitaxel following failure with the initial regimen. However, the choice has to be balanced against individual needs. One randomized study has suggested that docetaxel may be superior to paclitaxel. A nanoparticle formulation of paclitaxel (Abraxane) is also effective. The use of a humanized antibody to erbB2 (trastuzumab [Herceptin]) combined with paclitaxel can improve response rate and survival for women whose metastatic tumors overexpress erbB2. A novel antibody–drug conjugate (ADC) that links trastuzumab to a cytotoxic agent has been approved for management of HER2-positive breast cancer. The magnitude of the survival extension is modest in patients with metastatic disease. Similarly, the use of bevacizumab (Avastin) has improved the response rate and response duration with paclitaxel. Objective responses in previously treated patients may also be seen with gemcitabine, vinca alkaloids, capecitabine, vinorelbine, and oral etoposide, as well as a new class of agents, the epothilones. There are few comparative trials of one agent versus another in metastatic disease.
It is a sad fact that choices are often influenced by aggressive marketing of new, very expensive agents that have not been shown to be superior to other, generic agents. Platinum-based agents have become far more widely used in both the adjuvant and advanced disease settings for some breast cancers, particularly those of the “triple-negative” subtype. High-Dose Chemotherapy Including Autologous Bone Marrow Transplantation Autologous bone marrow transplantation combined with high doses of single agents can produce objective responses even in heavily pretreated patients. However, such responses are rarely durable and do not alter the clinical course for most patients with advanced metastatic disease. Between 10 and 25% of patients present with so-called locally advanced, or stage III, breast cancer at diagnosis. Many of these cancers are technically operable, whereas others, particularly cancers with chest wall involvement, inflammatory breast cancers, or cancers with large matted axillary lymph nodes, cannot be managed with surgery initially. Although no randomized trials have shown any survival benefit for neoadjuvant regimens as compared with adjuvant therapy, this approach has gained widespread use. More than 90% of patients with locally advanced breast cancer show a partial or better response to multidrug chemotherapy regimens that include an anthracycline. Early administration of this treatment reduces the bulk of the disease and frequently makes the patient a suitable candidate for salvage surgery and/or radiation therapy. These patients should be managed in multimodality clinics to coordinate surgery, radiation therapy, and systemic chemotherapy. Such approaches produce long-term disease-free survival in about 30–50% of patients. The neoadjuvant setting is also an ideal time to evaluate the efficacy of novel treatments because the effect on the tumor can be directly assessed.
Women who have one breast cancer are at risk of developing a contralateral breast cancer at a rate of approximately 0.5% per year. When adjuvant tamoxifen or an aromatase inhibitor is administered to these patients, the rate of development of contralateral breast cancers is reduced. In other tissues of the body, tamoxifen has estrogen-like effects that are beneficial, including preservation of bone mineral density and long-term lowering of cholesterol. However, tamoxifen has estrogen-like effects on the uterus, leading to an increased risk of uterine cancer (0.75% incidence after 5 years on tamoxifen). Tamoxifen also increases the risk of cataract formation. The Breast Cancer Prevention Trial (BCPT) revealed a >49% reduction in breast cancer among women with a risk of at least 1.66% who took the drug for 5 years. Raloxifene has shown similar breast cancer prevention potency but may have different effects on bone and heart. The two agents have been compared in a prospective randomized prevention trial (the Study of Tamoxifen and Raloxifene [STAR] trial). The agents are approximately equivalent in preventing breast cancer; raloxifene was associated with fewer thromboembolic events and endometrial cancers, but it did not reduce noninvasive cancers as effectively as tamoxifen, so no clear winner has emerged. A newer selective estrogen receptor modulator (SERM), lasofoxifene, has been shown to reduce cardiovascular events in addition to breast cancer and fractures, and further studies of this agent should be watched with interest. It should be recalled that prevention of contralateral breast cancers in women diagnosed with one cancer is a reasonable surrogate for breast cancer prevention, because these are second primaries, not recurrences. In this regard, the aromatase inhibitors are all considerably more effective than tamoxifen; however, they are not approved for primary breast cancer prevention.
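The contralateral-cancer figure of roughly 0.5% per year accumulates over follow-up. A minimal sketch of that arithmetic, assuming (for illustration only) a constant annual risk and no competing events:

```python
# Illustrative arithmetic only: cumulative risk from a constant annual risk,
# assuming independence across years and no competing mortality.
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one event over `years` at a fixed annual risk."""
    return 1.0 - (1.0 - annual_risk) ** years

# ~0.5% per year over 10 and 20 years of follow-up
ten_year = cumulative_risk(0.005, 10)     # roughly 4.9%
twenty_year = cumulative_risk(0.005, 20)  # roughly 9.5%
print(f"10-year: {ten_year:.1%}, 20-year: {twenty_year:.1%}")
```

Note that under a constant hazard the cumulative risk is slightly less than the naive sum (years × 0.5%), because each year's risk applies only to women who have not yet had an event.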
It remains puzzling that agents with the safety profile of raloxifene, which can reduce breast cancer risk by 50% with additional benefits in preventing osteoporotic fracture, are still so infrequently prescribed. They should be far more commonly offered to women than they are.
NONINVASIVE BREAST CANCER
Breast cancer develops as a series of molecular changes in the epithelial cells that lead to ever more malignant behavior. Increased use of mammography has led to more frequent diagnoses of noninvasive breast cancer. These lesions fall into two groups: ductal carcinoma in situ (DCIS) and lobular carcinoma in situ (lobular neoplasia). The management of both entities is controversial. Ductal Carcinoma In Situ Proliferation of cytologically malignant breast epithelial cells within the ducts is termed DCIS. Atypical hyperplasia may be difficult to differentiate from DCIS. At least one-third of patients with untreated DCIS develop invasive breast cancer within 5 years. However, many low-grade DCIS lesions do not appear to progress over many years; therefore, many patients are overtreated. Unfortunately, there is no reliable means of distinguishing patients who require treatment from those who may be safely observed. For many years, the standard treatment for this disease was mastectomy. However, treatment of this condition by lumpectomy and radiation therapy gives survival that is as good as the survival for invasive breast cancer treated by mastectomy. In one randomized trial, the combination of wide excision plus irradiation for DCIS caused a substantial reduction in the local recurrence rate as compared with wide excision alone with negative margins, although survival was identical in the two arms. No studies have compared either of these regimens to mastectomy. Addition of tamoxifen to any DCIS surgical/radiation therapy regimen further improves local control. Data for aromatase inhibitors in this setting are not available.
Several prognostic features may help to identify patients at high risk for local recurrence after either lumpectomy alone or lumpectomy with radiation therapy. These include extensive disease; age <40; and cytologic features such as necrosis, poor nuclear grade, and comedo subtype with overexpression of erbB2. Some data suggest that adequate excision with careful determination of pathologically clear margins is associated with a low recurrence rate. When surgery is combined with radiation therapy, recurrence (which is usually in the same quadrant) occurs with a frequency of ≤10%. Given the fact that half of these recurrences will be invasive, about 5% of the initial cohort will eventually develop invasive breast cancer. A reasonable expectation of mortality for these patients is about 1%, a figure that approximates the mortality rate for DCIS managed by mastectomy. Although this train of reasoning has not formally been proved valid, it is reasonable to recommend that patients who desire breast preservation, and in whom DCIS appears to be reasonably localized, be managed by adequate surgery with meticulous pathologic evaluation, followed by breast irradiation and tamoxifen. For patients with localized DCIS, axillary lymph node dissection is unnecessary. More controversial is the question of what management is optimal when there is any degree of invasion. Because of a significant likelihood (10–15%) of axillary lymph node involvement even when the primary lesion shows only microscopic invasion, it is prudent to do at least a sentinel lymph node sampling for all patients with any degree of invasion. Further management is dictated by the presence of nodal spread. Lobular Neoplasia Proliferation of cytologically malignant cells within the lobules is termed lobular neoplasia. Nearly 30% of patients who have had adequate local excision of the lesion develop breast cancer (usually infiltrating ductal carcinoma) over the next 15–20 years. 
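The risk arithmetic in this paragraph can be laid out explicitly. A small sketch (Python) using the chapter's approximate figures; note that the 20% case-fatality rate assigned to an invasive recurrence is a hypothetical value chosen only because it reproduces the quoted ~1% overall mortality, not a figure given in the text:

```python
# DCIS after lumpectomy plus radiation therapy: the chapter's risk cascade.
# Figures are population-level approximations from the text, not
# patient-level predictions.
local_recurrence = 0.10    # recurrence frequency of <=10% (upper bound used)
invasive_fraction = 0.50   # about half of recurrences are invasive
case_fatality = 0.20       # hypothetical fatality of an invasive recurrence

invasive_risk = local_recurrence * invasive_fraction   # share of initial cohort
overall_mortality = invasive_risk * case_fatality

print(f"Invasive cancer in initial cohort: {invasive_risk:.0%}")
print(f"Overall breast cancer mortality:   {overall_mortality:.0%}")
```

This reproduces the chain in the text: about 5% of the initial cohort eventually develops invasive cancer, and roughly 1% die of breast cancer, approximating the mortality of DCIS managed by mastectomy.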
Ipsilateral and contralateral cancers are equally common. Therefore, lobular neoplasia may be a premalignant lesion that suggests an elevated risk of subsequent breast cancer, rather than a form of malignancy itself, and aggressive local management seems unreasonable. Most patients should be treated with a SERM or an aromatase inhibitor (for postmenopausal women) for 5 years and followed with careful annual mammography and semiannual physical examinations. Additional molecular analysis of these lesions may make it possible to discriminate between patients who are at risk of further progression and require additional therapy and those in whom simple follow-up is adequate.
MALE BREAST CANCER
Breast cancer is about 1/150th as frequent in men as in women; 1720 men developed breast cancer in 2006. It usually presents as a unilateral lump in the breast and is frequently not diagnosed promptly. Given the small amount of soft tissue and the unexpected nature of the problem, locally advanced presentations are somewhat more common. When male breast cancer is matched to female breast cancer by age and stage, its overall prognosis is identical. Although gynecomastia may initially be unilateral or asymmetric, any unilateral mass in a man older than age 40 years should receive a careful workup including biopsy. On the other hand, bilateral symmetric breast development rarely represents breast cancer and is almost invariably due to endocrine disease or a drug effect. It should be kept in mind, nevertheless, that the risk of cancer is much greater in men with gynecomastia; in such men, gross asymmetry of the breasts should arouse suspicion of cancer. Male breast cancer is best managed by mastectomy and axillary lymph node dissection or SLNB. Patients with locally advanced disease or positive nodes should also be treated with irradiation. Approximately 90% of male breast cancers contain estrogen receptors, and approximately 60% of cases with metastatic disease respond to endocrine therapy.
No randomized studies have evaluated adjuvant therapy for male breast cancer. Two historic experiences suggest that the disease responds well to adjuvant systemic therapy, and, if not medically contraindicated, the same criteria for the use of adjuvant therapy in women should be applied to men. The sites of relapse and spectrum of response to chemotherapeutic drugs are virtually identical for breast cancers in either sex.
Despite the availability of sophisticated and expensive imaging techniques and a wide range of serum tumor marker tests, survival is not influenced by early diagnosis of relapse. Surveillance guidelines are given in Table 108-5. Despite pressure from patients and their families, routine computed tomography scans (or other imaging) are not recommended.
TABLE 108-5 Recommended Breast Cancer Surveillance Guidelines (partially recovered)
Recommended: patient education about symptoms of recurrence (ongoing); coordination of care (ongoing)
Not recommended: complete blood count; serum chemistry studies; chest radiographs; bone scans; ultrasound examination of the liver; computed tomography of chest, abdomen, or pelvis; tumor markers CA 15-3, CA 27-29, and CEA
Abbreviations: CEA, carcinoembryonic antigen; SERM, selective estrogen receptor modulator.
Source: Recommended Breast Cancer Surveillance Guidelines, ASCO Education Book, Fall, 1997.
Chapter 109 Gastrointestinal Tract Cancer
Robert J. Mayer
Upper gastrointestinal cancers include malignancies arising in the esophagus, stomach, and small intestine.
ESOPHAGEAL CANCER
Cancer of the esophagus is an increasingly common and extremely lethal malignancy. The diagnosis was made in 18,170 Americans in 2014 and led to 15,450 deaths. Almost all esophageal cancers are either squamous cell carcinomas or adenocarcinomas; the two histologic subtypes have a similar clinical presentation but different causative factors. Incidence and Etiology Worldwide, squamous cell carcinoma is the more common cell type, having an incidence that rises strikingly in association with geographic location.
It occurs frequently within a region extending from the southern shore of the Caspian Sea on the west to northern China on the east, encompassing parts of Iran, central Asia, Afghanistan, Siberia, and Mongolia. Familial increased risk has been observed in regions with high incidence, although gene associations are not yet defined. High-incidence “pockets” of the disease are also present in such disparate locations as Finland, Iceland, Curaçao, southeastern Africa, and northwestern France. In North America and western Europe, the disease is more common in blacks than whites and in males than females; it appears most often after age 50 and seems to be associated with a lower socioeconomic status. Such cancers generally arise in the cervical and thoracic portions of the esophagus. A variety of causative factors have been implicated in the development of squamous cell cancers of the esophagus (Table 109-1). In the United States, the etiology of such cancers is primarily related to excess alcohol consumption and/or cigarette smoking. The relative risk increases with the amount of tobacco smoked or alcohol consumed, with these factors acting synergistically. The consumption of whiskey is linked to a higher incidence than the consumption of wine or beer. Squamous cell esophageal carcinoma has also been associated with the ingestion of nitrates, smoked opiates, and fungal toxins in pickled vegetables, as well as mucosal damage caused by such physical insults as long-term exposure to extremely hot tea, the ingestion of lye, radiation-induced strictures, and chronic achalasia. 
The presence of an esophageal web in association with glossitis and iron deficiency (i.e., Plummer-Vinson or Paterson-Kelly syndrome) and congenital hyperkeratosis and pitting of the palms and soles (i.e., tylosis palmaris et plantaris) have each been linked with squamous cell esophageal cancer, as have dietary deficiencies of molybdenum, zinc, selenium, and vitamin A.
TABLE 109-1 Some Etiologic Factors Associated with Squamous Cell Cancer of the Esophagus (partial)
Nitrates (converted to nitrites)
Smoked opiates
Fungal toxins in pickled vegetables
Esophageal web with glossitis and iron deficiency (i.e., Plummer-Vinson or Paterson-Kelly syndrome)
Congenital hyperkeratosis and pitting of the palms and soles (i.e., tylosis palmaris et plantaris)
? Dietary deficiencies of selenium, molybdenum, zinc, and vitamin A
TABLE 109-2 Some Etiologic Factors Associated with Adenocarcinoma of the Esophagus (entries not recovered)
Patients with head and neck cancer are at increased risk of squamous cell cancer of the esophagus. For unclear reasons, the incidence of squamous cell esophageal cancer has decreased somewhat in both the black and white populations in the United States over the past 40 years, whereas the rate of adenocarcinoma has risen sevenfold, particularly in white males (male-to-female ratio of 6:1). Whereas squamous cell cancers comprised the vast majority of esophageal cancers in the United States as recently as 40–50 years ago, more than 75% of esophageal tumors are now adenocarcinomas, with the incidence of this histologic subtype continuing to increase rapidly. Understanding the cause for this increase is the focus of current investigation. Several strong etiologic associations have been observed to account for the development of adenocarcinoma of the esophagus (Table 109-2).
Such tumors arise in the distal esophagus in association with chronic gastric reflux, often in the presence of Barrett’s esophagus (replacement of the normal squamous epithelium of the distal esophagus by columnar mucosa), which occurs more commonly in obese individuals. Adenocarcinomas arise within dysplastic columnar epithelium in the distal esophagus. Even before frank neoplasia is detectable, aneuploidy and p53 mutations are found in the dysplastic epithelium. These adenocarcinomas behave clinically like gastric adenocarcinomas, although they are not associated with Helicobacter pylori infections. Approximately 15% of esophageal adenocarcinomas overexpress the HER2/neu gene. About 5% of esophageal cancers occur in the upper third of the esophagus (cervical esophagus), 20% in the middle third, and 75% in the lower third. Squamous cell carcinomas and adenocarcinomas cannot be distinguished radiographically or endoscopically. Progressive dysphagia and weight loss of short duration are the initial symptoms in the vast majority of patients. Dysphagia initially occurs with solid foods and gradually progresses to include semisolids and liquids. By the time these symptoms develop, the disease is already very advanced, because difficulty in swallowing does not occur until >60% of the esophageal circumference is infiltrated with cancer. Dysphagia may be associated with pain on swallowing (odynophagia), pain radiating to the chest and/or back, regurgitation or vomiting, and aspiration pneumonia. The disease most commonly spreads to adjacent and supraclavicular lymph nodes, liver, lungs, pleura, and bone. Tracheoesophageal fistulas may develop, primarily in patients with upper and mid-esophageal tumors. As with other squamous cell carcinomas, hypercalcemia may occur in the absence of osseous metastases, probably from parathormone-related peptide secreted by tumor cells (Chap. 121). 
Attempts at endoscopic and cytologic screening for carcinoma in patients with Barrett’s esophagus, while effective as a means of detecting high-grade dysplasia, have not yet been shown to reduce the likelihood of death from esophageal adenocarcinoma. Esophagoscopy should be performed in all patients suspected of having an esophageal abnormality, both to visualize and identify a tumor and to obtain histopathologic confirmation of the diagnosis. Because the population of persons at risk for squamous cell carcinoma of the esophagus (i.e., smokers and drinkers) also has a high rate of cancers of the lung and the head and neck region, endoscopic inspection of the larynx, trachea, and bronchi should also be carried out. A thorough examination of the fundus of the stomach (by retroflexing the endoscope) is imperative as well. The extent of tumor spread to the mediastinum and para-aortic lymph nodes should be assessed by computed tomography (CT) scans of the chest and abdomen and by endoscopic ultrasound. Positron emission tomography scanning provides a useful assessment of the presence of distant metastatic disease, offering accurate information regarding spread to mediastinal lymph nodes, which can be helpful in defining radiation therapy fields. Such scans, when performed sequentially, appear to provide a means of making an early assessment of responsiveness to preoperative chemotherapy. The prognosis for patients with esophageal carcinoma is poor. Approximately 10% of patients survive 5 years after the diagnosis; thus, management focuses on symptom control. Surgical resection of all gross tumor (i.e., total resection) is feasible in only 45% of cases, with residual tumor cells frequently present at the resection margins. Such esophagectomies have been associated with a postoperative mortality rate of approximately 5% due to anastomotic fistulas, subphrenic abscesses, and cardiopulmonary complications.
Although debate regarding the comparative benefits of transthoracic versus transhiatal resections has continued, experienced thoracic surgeons are now favoring minimally invasive transthoracic esophagectomies. Endoscopic resections of superficial squamous cell cancers or adenocarcinomas are being examined but have not yet been shown to result in a similar likelihood of survival as observed with conventional surgical procedures. Similarly, the value of endoscopic ablation of dysplastic lesions in an area of Barrett’s esophagus on reducing subsequent mortality from esophageal carcinoma is uncertain. Some experts have advocated fundoplication surgery (i.e., surgical correction of gastroesophageal reflux) as a means of cancer prevention in patients with Barrett’s esophagus; again, objective data are not yet available to fully assess the risks versus benefits of this invasive procedure. About 20% of patients who survive a total surgical resection live for 5 years. The evaluation of chemotherapeutic agents in patients with esophageal carcinoma has been hampered by ambiguity in the definition of “response” and the debilitated physical condition of many treated individuals, particularly those with squamous cell cancers. Nonetheless, significant reductions in the size of measurable tumor masses have been reported in 15–25% of patients given single-agent treatment and in 30–60% of patients treated with drug combinations that include cisplatin. In the small subset of patients whose tumors overexpress the HER2/neu gene, the addition of the monoclonal antibody trastuzumab (Herceptin) appears to further enhance the likelihood of benefit, particularly in patients with gastroesophageal lesions. The use of the antiangiogenic agent bevacizumab (Avastin) seems to be of limited value in the setting of esophageal cancer.
Combination chemotherapy and radiation therapy as the initial therapeutic approach, either alone or followed by an attempt at operative resection, seems to be beneficial. When administered along with radiation therapy, chemotherapy produces a better survival outcome than radiation therapy alone. The use of preoperative chemotherapy and radiation therapy followed by esophageal resection appears to prolong survival compared with surgery alone according to several randomized trials and a meta-analysis; some reports suggest that no additional benefit accrues when surgery is added if significant shrinkage of tumor has been achieved by the chemoradiation combination. For the incurable, surgically unresectable patient with esophageal cancer, dysphagia, malnutrition, and the management of tracheoesophageal fistulas are major issues. Approaches to palliation include repeated endoscopic dilatation, the surgical placement of a gastrostomy or jejunostomy for hydration and feeding, endoscopic placement of an expandable metal stent to bypass the tumor, and radiation therapy.
GASTRIC ADENOCARCINOMA
Incidence and Epidemiology For unclear reasons, the incidence and mortality rates for gastric cancer have decreased in the United States during the past 80 years, although the disease remains the second most frequent cause of worldwide cancer-related death. The mortality rate from gastric cancer in the United States has dropped in men from 28 to 5.8 per 100,000 persons, whereas in women, the rate has decreased from 27 to 2.8 per 100,000. Nonetheless, in 2014, 22,220 new cases of stomach cancer were diagnosed in the United States, and 10,990 Americans died of the disease. Although the incidence of gastric cancer has decreased worldwide, it remains high in such disparate geographic regions as Japan, China, Chile, and Ireland. The risk of gastric cancer is greater among lower socioeconomic classes.
Migrants from high- to low-incidence nations maintain their susceptibility to gastric cancer, whereas the risk for their offspring approximates that of the new homeland. These findings suggest that an environmental exposure, probably beginning early in life, is related to the development of gastric cancer, with dietary carcinogens considered the most likely factor(s). Pathology About 85% of stomach cancers are adenocarcinomas, with 15% due to lymphomas, gastrointestinal stromal tumors (GISTs), and leiomyosarcomas. Gastric adenocarcinomas may be subdivided into two categories: a diffuse type, in which cell cohesion is absent, so that individual cells infiltrate and thicken the stomach wall without forming a discrete mass; and an intestinal type, characterized by cohesive neoplastic cells that form glandlike tubular structures. The diffuse carcinomas occur more often in younger patients, develop throughout the stomach (including the cardia), result in a loss of distensibility of the gastric wall (so-called linitis plastica, or “leather bottle” appearance), and carry a poorer prognosis. Diffuse cancers have defective intercellular adhesion, mainly as a consequence of loss of expression of E-cadherin. Intestinal-type lesions are frequently ulcerative, more commonly appear in the antrum and lesser curvature of the stomach, and are often preceded by a prolonged precancerous process, often initiated by H. pylori infection. Although the incidence of diffuse carcinomas is similar in most populations, the intestinal type tends to predominate in the high-risk geographic regions and is less likely to be found in areas where the frequency of gastric cancer is declining. Thus, different etiologic factor(s) are likely involved in these two subtypes. In the United States, ∼30% of gastric cancers originate in the distal stomach, ∼20% arise in the midportion of the stomach, and ∼40% originate in the proximal third of the stomach. The remaining 10% involve the entire stomach.
Etiology The long-term ingestion of high concentrations of nitrates found in dried, smoked, and salted foods appears to be associated with a higher risk. The nitrates are thought to be converted to carcinogenic nitrites by bacteria (Table 109-3). Such bacteria may be introduced exogenously through the ingestion of partially decayed foods, which are consumed in abundance worldwide by the lower socioeconomic classes. Bacteria such as H. pylori may also contribute to this effect by causing chronic inflammatory atrophic gastritis, loss of gastric acidity, and bacterial growth in the stomach. Although the risk for developing gastric cancer is thought to be sixfold higher in people infected with H. pylori, it remains uncertain whether eradicating the bacteria after infection has already occurred actually reduces this risk. Loss of acidity may occur when acid-producing cells of the gastric antrum have been removed surgically to control benign peptic ulcer disease or when achlorhydria, atrophic gastritis, and even pernicious anemia develop in the elderly. Serial endoscopic examinations of the stomach in patients with atrophic gastritis have documented replacement of the usual gastric mucosa by intestinal-type cells.
TABLE 109-3 Nitrate-Converting Bacteria as a Factor in the Causation of Gastric Carcinoma(a)
Exogenous sources of nitrate-converting bacteria:
  Bacterially contaminated food (common in lower socioeconomic classes, who have a higher incidence of the disease; diminished by improved food preservation and refrigeration)
  Helicobacter pylori infection
Endogenous factors favoring growth of nitrate-converting bacteria in the stomach:
  Decreased gastric acidity
  Prior gastric surgery (antrectomy) (15- to 20-year latency period)
  Atrophic gastritis and/or pernicious anemia
  ? Prolonged exposure to histamine H2-receptor antagonists
(a) Hypothesis: Dietary nitrates are converted to carcinogenic nitrites by bacteria.
This process of intestinal metaplasia may lead to cellular atypia and eventual neoplasia. Because the declining incidence of gastric cancer in the United States primarily reflects a decline in distal, ulcerating, intestinal-type lesions, it is conceivable that better food preservation and the availability of refrigeration for all socioeconomic classes have decreased the dietary ingestion of exogenous bacteria. H. pylori has not been associated with the diffuse, more proximal form of gastric carcinoma or with cancers arising at the gastroesophageal junction or in the distal esophagus. Approximately 10–15% of adenocarcinomas appearing in the proximal stomach, the gastroesophageal junction, and the distal esophagus overexpress the HER2/neu gene; individuals whose tumors demonstrate this overexpression benefit from treatment directed against this target (i.e., trastuzumab [Herceptin]). Several additional etiologic factors have been associated with gastric carcinoma. Gastric ulcers and adenomatous polyps have occasionally been linked, but data on a cause-and-effect relationship are unconvincing. The inadequate clinical distinction between benign gastric ulcers and small ulcerating carcinomas may, in part, account for this presumed association. The presence of extreme hypertrophy of gastric rugal folds (i.e., Ménétrier’s disease), giving the impression of polypoid lesions, has been associated with a striking frequency of malignant transformation; such hypertrophy, however, does not represent the presence of true adenomatous polyps. Individuals with blood group A have a higher incidence of gastric cancer than persons with blood group O; this observation may be related to differences in the mucous secretion, leading to altered mucosal protection from carcinogens.
A germline mutation in the E-cadherin gene (CDH1), inherited in an autosomal dominant pattern and coding for a cell adhesion protein, has been linked to a high incidence of occult diffuse-type gastric cancers in young asymptomatic carriers. Duodenal ulcers are not associated with gastric cancer. Clinical Features Gastric cancers, when superficial and surgically curable, usually produce no symptoms. As the tumor becomes more extensive, patients may complain of an insidious upper abdominal discomfort varying in intensity from a vague, postprandial fullness to a severe, steady pain. Anorexia, often with slight nausea, is very common but is not the usual presenting complaint. Weight loss may eventually be observed, and nausea and vomiting are particularly prominent in patients whose tumors involve the pylorus; dysphagia and early satiety may be the major symptoms caused by diffuse lesions originating in the cardia. There may be no early physical signs. A palpable abdominal mass indicates long-standing growth and predicts regional extension. Gastric carcinomas spread by direct extension through the gastric wall to the perigastric tissues, occasionally adhering to adjacent organs such as the pancreas, colon, or liver. The disease also spreads via lymphatics or by seeding of peritoneal surfaces. Metastases to intraabdominal and supraclavicular lymph nodes occur frequently, as do metastatic nodules to the ovary (Krukenberg’s tumor), periumbilical region (“Sister Mary Joseph node”), or peritoneal cul-de-sac (Blumer’s shelf palpable on rectal or vaginal examination); malignant ascites may also develop. The liver is the most common site for hematogenous spread of tumor. The presence of iron-deficiency anemia in men and of occult blood in the stool in both sexes mandates a search for an occult gastrointestinal tract lesion. A careful assessment is of particular importance in patients with atrophic gastritis or pernicious anemia. 
Unusual clinical features associated with gastric adenocarcinomas include migratory thrombophlebitis, microangiopathic hemolytic anemia, diffuse seborrheic keratoses (so-called Leser-Trélat sign), and acanthosis nigricans. Diagnosis The use of double-contrast radiographic examinations has been supplanted by esophagogastroscopy and CT scanning for the evaluation of patients with epigastric complaints. Gastric ulcers identified at the time of such an endoscopic procedure may appear benign but merit biopsy in order to exclude a malignancy. Malignant gastric ulcers must be recognized before they penetrate into surrounding tissues, because the rate of cure of early lesions limited to the mucosa or submucosa is >80%. Because gastric carcinomas are difficult to distinguish clinically or endoscopically from gastric lymphomas, endoscopic biopsies should be made as deeply as possible, due to the submucosal location of lymphoid tumors. The staging system for gastric carcinoma is shown in Table 109-4.
TABLE 109-4 Staging System for Gastric Carcinoma (only the column headings were recovered: Stage; TNM; Features; No. of Cases, %; 5-Year Survival, %. Data from the ACS in the United States.) Abbreviations: ACS, American Cancer Society; TNM, tumor, node, metastasis.
Complete surgical removal of the tumor with resection of adjacent lymph nodes offers the only chance for cure. However, this is possible in less than a third of patients. A subtotal gastrectomy is the treatment of choice for patients with distal carcinomas, whereas total or near-total gastrectomies are required for more proximal tumors. The inclusion of extended lymph node dissection in these procedures appears to confer an added risk for complications without providing a meaningful enhancement in survival. The prognosis following complete surgical resection depends on the degree of tumor penetration into the stomach wall and is adversely influenced by regional lymph node involvement and vascular invasion, characteristics found in the vast majority of American patients.
As a result, the probability of survival after 5 years for the 25–30% of patients able to undergo complete resection is ∼20% for distal tumors and <10% for proximal tumors, with recurrences continuing for at least 8 years after surgery. In the absence of ascites or extensive hepatic or peritoneal metastases, even patients whose disease is believed to be incurable by surgery should be offered resection of the primary lesion. Reduction of tumor bulk is the best form of palliation and may enhance the probability of benefit from subsequent therapy. In high-incidence regions such as Japan and Korea, where the use of endoscopic screening programs has identified patients with superficial tumors, the use of laparoscopic gastrectomy has gained popularity. In the United States and western Europe, the use of this less invasive surgical approach remains investigational. Gastric adenocarcinoma is a relatively radioresistant tumor, and the adequate control of the primary tumor requires doses of external-beam irradiation that exceed the tolerance of surrounding structures, such as bowel mucosa and spinal cord. As a result, the major role of radiation therapy in patients has been palliation of pain. Radiation therapy alone after a complete resection does not prolong survival. In the setting of surgically unresectable disease limited to the epigastrium, patients treated with 3500–4000 cGy did not live longer than similar patients not receiving radiotherapy; however, survival was prolonged slightly when 5-fluorouracil (5-FU) plus leucovorin was given in combination with radiation therapy (3-year survival 50% vs 41% for radiation therapy alone). In this clinical setting, the 5-FU likely functions as a radiosensitizer. The administration of combinations of cytotoxic drugs to patients with advanced gastric carcinoma has been associated with partial responses in 30–50% of cases; responders appear to benefit from treatment. 
Such drug combinations have generally included cisplatin combined with epirubicin or docetaxel and infusional 5-FU or capecitabine, or with irinotecan. Despite the encouraging response rates, complete remissions are uncommon, the partial responses are transient, and the overall impact of multidrug therapy on survival has been limited; the median survival time for patients treated in this manner remains less than 12 months. As with adenocarcinomas arising in the esophagus, the addition of bevacizumab (Avastin) to chemotherapy regimens in treating gastric cancer appears to provide limited benefit. However, preliminary results utilizing another antiangiogenic compound, ramucirumab (Cyramza), in the treatment of gastric cancer are encouraging. The use of adjuvant chemotherapy alone following the complete resection of a gastric cancer has only minimally improved survival. However, combination chemotherapy administered before and after surgery (perioperative treatment) as well as postoperative chemotherapy combined with radiation therapy reduces the recurrence rate and prolongs survival.
PRIMARY GASTRIC LYMPHOMA
Primary lymphoma of the stomach is relatively uncommon, accounting for <15% of gastric malignancies and ∼2% of all lymphomas. The stomach is, however, the most frequent extranodal site for lymphoma, and gastric lymphoma has increased in frequency during the past 35 years. The disease is difficult to distinguish clinically from gastric adenocarcinoma; both tumors are most often detected during the sixth decade of life; present with epigastric pain, early satiety, and generalized fatigue; and are usually characterized by ulcerations with a ragged, thickened mucosal pattern demonstrated by contrast radiographs or endoscopic appearance. The diagnosis of lymphoma of the stomach may occasionally be made through cytologic brushings of the gastric mucosa but usually requires a biopsy at gastroscopy or laparotomy.
Failure of gastroscopic biopsies to detect lymphoma in a given case should not be interpreted as being conclusive, because superficial biopsies may miss the deeper lymphoid infiltrate. The macroscopic pathology of gastric lymphoma may also mimic adenocarcinoma, consisting of either a bulky ulcerated lesion localized in the corpus or antrum or a diffuse process spreading throughout the entire gastric submucosa and even extending into the duodenum. Microscopically, the vast majority of gastric lymphoid tumors are lymphomas of B-cell origin. Histologically, these tumors may range from well-differentiated, superficial processes (mucosa-associated lymphoid tissue [MALT]) to high-grade, large-cell lymphomas. As with gastric adenocarcinoma, infection with H. pylori increases the risk for gastric lymphoma in general and MALT lymphomas in particular. Large-cell lymphomas of the stomach spread initially to regional lymph nodes (often to Waldeyer’s ring) and may then disseminate. Primary gastric lymphoma is a far more treatable disease than adenocarcinoma of the stomach, a fact that underscores the need for making the correct diagnosis. Antibiotic treatment to eradicate H. pylori infection has led to regression of about 75% of gastric MALT lymphomas and should be considered before surgery, radiation therapy, or chemotherapy is undertaken in patients having such tumors. A lack of response to such antimicrobial treatment has been linked to a specific chromosomal abnormality, i.e., t(11;18). Responding patients should undergo periodic endoscopic surveillance because it remains unclear whether the neoplastic clone is eliminated or merely suppressed, although the response to antimicrobial treatment is quite durable. Subtotal gastrectomy, usually followed by combination chemotherapy, has led to 5-year survival rates of 40–60% in patients with localized high-grade lymphomas. 
The need for a major surgical procedure has been questioned, particularly in patients with preoperative radiographic evidence of nodal involvement, for whom chemotherapy (CHOP [cyclophosphamide, doxorubicin, vincristine, and prednisone]) plus rituximab is highly effective therapy. A role for radiation therapy is not defined because most recurrences develop at distant sites. Leiomyosarcomas and GISTs make up 1–3% of gastric neoplasms. They most frequently involve the anterior and posterior walls of the gastric fundus and often ulcerate and bleed. Even those lesions that appear benign on histologic examination may behave in a malignant fashion. These tumors rarely invade adjacent viscera and characteristically do not metastasize to lymph nodes, but they may spread to the liver and lungs. The treatment of choice is surgical resection. Combination chemotherapy should be reserved for patients with metastatic disease. All such tumors should be analyzed for a mutation in the c-kit receptor. GISTs are unresponsive to conventional chemotherapy; yet ∼50% of patients experience objective response and prolonged survival when treated with imatinib mesylate (Gleevec) (400–800 mg PO daily), a selective inhibitor of the c-kit tyrosine kinase. Many patients with GIST whose tumors have become refractory to imatinib subsequently benefit from sunitinib (Sutent) or regorafenib (Stivarga), other inhibitors of the c-kit tyrosine kinase. Small-bowel tumors comprise <3% of gastrointestinal neoplasms. Because of their rarity and inaccessibility, a correct diagnosis is often delayed. Abdominal symptoms are usually vague and poorly defined, and conventional radiographic studies of the upper and lower intestinal tract often appear normal. 
Small-bowel tumors should be considered in the differential diagnosis in the following situations: (1) recurrent, unexplained episodes of crampy abdominal pain; (2) intermittent bouts of intestinal obstruction, especially in the absence of inflammatory bowel disease (IBD) or prior abdominal surgery; (3) intussusception in the adult; and (4) evidence of chronic intestinal bleeding in the presence of negative conventional and endoscopic examination. A careful small-bowel barium study should be considered in such a circumstance; the diagnostic accuracy may be improved by infusing barium through a nasogastric tube placed into the duodenum (enteroclysis). Alternatively, capsule endoscopic procedures have been used. The histology of benign small-bowel tumors is difficult to predict on clinical and radiologic grounds alone. The symptomatology of benign tumors is not distinctive, with pain, obstruction, and hemorrhage being the most frequent symptoms. These tumors are usually discovered during the fifth and sixth decades of life, more often in the distal rather than the proximal small intestine. The most common benign tumors are adenomas, leiomyomas, lipomas, and angiomas. Adenomas These tumors include those of the islet cells and Brunner’s glands as well as polypoid adenomas. Islet cell adenomas are occasionally located outside the pancreas; the associated syndromes are discussed in Chap. 113. Brunner’s gland adenomas are not truly neoplastic but represent a hypertrophy or hyperplasia of submucosal duodenal glands. These appear as small nodules in the duodenal mucosa that secrete a highly viscous alkaline mucus. Most often, this is an incidental radiographic finding not associated with any specific clinical disorder. Polypoid Adenomas About 25% of benign small-bowel tumors are polypoid adenomas (see Table 110-2). They may present as single polypoid lesions or, less commonly, as papillary villous adenomas. 
As in the colon, the sessile or papillary form of the tumor is sometimes associated with a coexisting carcinoma. Occasionally, patients with Gardner’s syndrome develop premalignant adenomas in the small bowel; such lesions are generally in the duodenum. Multiple polypoid tumors may occur throughout the small bowel (and occasionally the stomach and colorectum) in the Peutz-Jeghers syndrome. The polyps are usually hamartomas (juvenile polyps) having a low potential for malignant degeneration. Mucocutaneous melanin deposits as well as tumors of the ovary, breast, pancreas, and endometrium are also associated with this autosomal dominant condition. Leiomyomas These neoplasms arise from smooth-muscle components of the intestine and are usually intramural, affecting the overlying mucosa. Ulceration of the mucosa may cause gastrointestinal hemorrhage of varying severity. Cramping or intermittent abdominal pain is frequently encountered. Lipomas These tumors occur with greatest frequency in the distal ileum and at the ileocecal valve. They have a characteristic radiolucent appearance and are usually intramural and asymptomatic, but on occasion cause bleeding. Angiomas While not true neoplasms, these lesions are important because they frequently cause intestinal bleeding. They may take the form of telangiectasia or hemangiomas. Multiple intestinal telangiectasias occur in a nonhereditary form confined to the gastrointestinal tract or as part of the hereditary Osler-Rendu-Weber syndrome. Vascular tumors may also take the form of isolated hemangiomas, most commonly in the jejunum. Angiography, especially during bleeding, is the best procedure for evaluating these lesions. While rare, small-bowel malignancies occur in patients with longstanding regional enteritis and celiac sprue as well as in individuals with AIDS. Malignant tumors of the small bowel are frequently associated with fever, weight loss, anorexia, bleeding, and a palpable abdominal mass. 
After ampullary carcinomas (many of which arise from biliary or pancreatic ducts), the most frequently occurring small-bowel malignancies are adenocarcinomas, lymphomas, carcinoid tumors, and leiomyosarcomas. The most common primary cancers of the small bowel are adenocarcinomas, accounting for ∼50% of malignant tumors. These cancers occur most often in the distal duodenum and proximal jejunum, where they tend to ulcerate and cause hemorrhage or obstruction. Radiologically, they may be confused with chronic duodenal ulcer disease or with Crohn’s disease if the patient has long-standing regional enteritis. The diagnosis is best made by endoscopy and biopsy under direct vision. Surgical resection is the treatment of choice with suggested postoperative adjuvant chemotherapy options generally following treatment patterns used in the management of colon cancer. Lymphoma in the small bowel may be primary or secondary. A diagnosis of a primary intestinal lymphoma requires histologic confirmation in a clinical setting in which palpable adenopathy and hepatosplenomegaly are absent and no evidence of lymphoma is seen on chest radiograph, CT scan, or peripheral blood smear or on bone marrow aspiration and biopsy. Symptoms referable to the small bowel are present, usually accompanied by an anatomically discernible lesion. Secondary lymphoma of the small bowel consists of involvement of the intestine by a lymphoid malignancy extending from involved retroperitoneal or mesenteric lymph nodes (Chap. 134). Primary intestinal lymphoma accounts for ∼20% of malignancies of the small bowel. These neoplasms are non-Hodgkin’s lymphomas; they usually have a diffuse, large-cell histology and are of T cell origin. Intestinal lymphoma involves the ileum, jejunum, and duodenum, in decreasing frequency—a pattern that mirrors the relative amount of normal lymphoid cells in these anatomic areas. 
The risk of small-bowel lymphoma is increased in patients with a prior history of malabsorptive conditions (e.g., celiac sprue), regional enteritis, and depressed immune function due to congenital immunodeficiency syndromes, prior organ transplantation, autoimmune disorders, or AIDS. The development of localized or nodular masses that narrow the lumen results in periumbilical pain (made worse by eating) as well as weight loss, vomiting, and occasional intestinal obstruction. The diagnosis of small-bowel lymphoma may be suspected from the appearance on contrast radiographs of patterns such as infiltration and thickening of mucosal folds, mucosal nodules, areas of irregular ulceration, or stasis of contrast material. The diagnosis can be confirmed by surgical exploration and resection of involved segments. Intestinal lymphoma can occasionally be diagnosed by peroral intestinal mucosal biopsy, but because the disease mainly involves the lamina propria, full-thickness surgical biopsies are usually required. Resection of the tumor constitutes the initial treatment modality. While postoperative radiation therapy has been given to some patients following a total resection, most authorities favor short-term (three cycles) systemic treatment with combination chemotherapy. The frequent presence of widespread intraabdominal disease at the time of diagnosis and the occasional multicentricity of the tumor often make a total resection impossible. The probability of sustained remission or cure is ∼75% in patients with localized disease but is ∼25% in individuals with unresectable lymphoma. In patients whose tumors are not resected, chemotherapy may lead to bowel perforation. A unique form of small-bowel lymphoma, diffusely involving the entire intestine, was first described in oriental Jews and Arabs and is referred to as immunoproliferative small intestinal disease (IPSID), Mediterranean lymphoma, or α heavy chain disease. This is a B cell tumor. 
The typical presentation includes chronic diarrhea and steatorrhea associated with vomiting and abdominal cramps; clubbing of the digits may be observed. A curious feature in many patients with IPSID is the presence in the blood and intestinal secretions of an abnormal IgA that contains a shortened α heavy chain and is devoid of light chains. It is suspected that the abnormal α chains are produced by plasma cells infiltrating the small bowel. The clinical course of patients with IPSID is generally one of exacerbations and remissions, with death frequently resulting from either progressive malnutrition and wasting or the development of an aggressive lymphoma. The use of oral antibiotics such as tetracycline appears to be beneficial in the early phases of the disorder, suggesting a possible infectious etiology. Combination chemotherapy has been administered during later stages of the disease, with variable results. Results are better when antibiotics and chemotherapy are combined. Carcinoid tumors arise from argentaffin cells of the crypts of Lieberkühn and are found from the distal duodenum to the ascending colon, areas embryologically derived from the midgut. More than 50% of intestinal carcinoids are found in the distal ileum, with most congregating close to the ileocecal valve. Most intestinal carcinoids are asymptomatic and of low malignant potential, but invasion and metastases may occur, leading to the carcinoid syndrome (Chap. 113). Leiomyosarcomas often are >5 cm in diameter and may be palpable on abdominal examination. Bleeding, obstruction, and perforation are common. Such tumors should be analyzed for expression of a mutant c-kit receptor (defining GIST), which, in the presence of metastatic disease, justifies treatment with imatinib mesylate (Gleevec) or, in imatinib-refractory patients, sunitinib (Sutent) or regorafenib (Stivarga). Robert J. Mayer Lower gastrointestinal cancers include malignant tumors of the colon, rectum, and anus. 
Cancer of the large bowel is second only to lung cancer as a cause of cancer death in the United States: 136,830 new cases occurred in 2014, and 50,310 deaths were due to colorectal cancer. The incidence rate has decreased significantly during the past 25 years, likely owing to enhanced screening practices and better compliance with them. Similarly, mortality rates in the United States have decreased by approximately 25%, resulting largely from earlier detection and improved treatment. Most colorectal cancers, regardless of etiology, arise from adenomatous polyps. A polyp is a grossly visible protrusion from the mucosal surface and may be classified pathologically as a nonneoplastic hamartoma (e.g., juvenile polyp), a hyperplastic mucosal proliferation (hyperplastic polyp), or an adenomatous polyp. Only adenomas are clearly premalignant, and only a minority of adenomatous polyps evolve into cancer. Adenomatous polyps may be found in the colons of ∼30% of middle-aged and ∼50% of elderly people; however, <1% of polyps ever become malignant. Most polyps produce no symptoms and remain clinically undetected. Occult blood in the stool is found in <5% of patients with polyps. A number of molecular changes are noted in adenomatous polyps and colorectal cancers that are thought to reflect a multistep process in the evolution of normal colonic mucosa to life-threatening invasive carcinoma. These developmental steps toward carcinogenesis include, but are not restricted to, point mutations in the K-ras protooncogene; hypomethylation of DNA, leading to gene activation; loss of DNA (allelic loss) at the site of a tumor-suppressor gene (the adenomatous polyposis coli [APC] gene) on the long arm of chromosome 5 (5q21); allelic loss at the site of a tumor-suppressor gene located on chromosome 18q (the deleted in colorectal cancer [DCC] gene); and allelic loss at chromosome 17p, associated with mutations in the p53 tumor-suppressor gene (see Fig. 101e-2). 
Thus, the altered proliferative pattern of the colonic mucosa, which results in progression to a polyp and then to carcinoma, may involve the mutational activation of an oncogene followed by and coupled with the loss of genes that normally suppress tumorigenesis. It remains uncertain whether the genetic aberrations always occur in a defined order. Based on this model, however, cancer is believed to develop only in those polyps in which most (if not all) of these mutational events take place. Clinically, the probability of an adenomatous polyp becoming a cancer depends on the gross appearance of the lesion, its histologic features, and its size. Adenomatous polyps may be pedunculated (stalked) or sessile (flat-based). Invasive cancers develop more frequently in sessile polyps. Histologically, adenomatous polyps may be tubular, villous (i.e., papillary), or tubulovillous. Villous adenomas, most of which are sessile, become malignant more than three times as often as tubular adenomas. The likelihood that any polypoid lesion in the large bowel contains invasive cancer is related to the size of the polyp, being negligible (<2%) in lesions <1.5 cm, intermediate (2–10%) in lesions 1.5–2.5 cm, and substantial (10%) in lesions >2.5 cm in size. Following the detection of an adenomatous polyp, the entire large bowel should be visualized endoscopically because synchronous lesions are noted in about one-third of cases. Colonoscopy should then be repeated periodically, even in the absence of a previously documented malignancy, because such patients have a 30–50% probability of developing another adenoma and are at a higher-than-average risk for developing a colorectal carcinoma. Adenomatous polyps are thought to require >5 years of growth before becoming clinically significant; colonoscopy need not be carried out more frequently than every 3 years for the vast majority of patients. Risk factors for the development of colorectal cancer are listed in Table 110-1. 
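The size-based risk bands quoted above can be expressed as a simple lookup. This is a purely illustrative sketch of those quoted figures; the function name and the handling of boundary values are assumptions, not part of any clinical tool:

```python
def invasive_cancer_risk_band(polyp_size_cm: float) -> str:
    """Map a polypoid lesion's size to the risk band for harboring
    invasive cancer, using the size cutoffs quoted in the text.
    Boundary handling (inclusive upper cutoff) is an assumption."""
    if polyp_size_cm < 1.5:
        return "negligible (<2%)"
    elif polyp_size_cm <= 2.5:
        return "intermediate (2-10%)"
    else:
        return "substantial (10%)"

print(invasive_cancer_risk_band(1.0))  # negligible (<2%)
print(invasive_cancer_risk_band(2.0))  # intermediate (2-10%)
print(invasive_cancer_risk_band(3.0))  # substantial (10%)
```

Together with histology (villous vs. tubular) and gross appearance (sessile vs. pedunculated), size is one of the three factors the text identifies as determining malignant potential.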
Diet The etiology for most cases of large-bowel cancer appears to be related to environmental factors. The disease occurs more often in upper socioeconomic populations who live in urban areas. Mortality from colorectal cancer is directly correlated with per capita consumption of calories, meat protein, and dietary fat and oil as well as elevations in the serum cholesterol concentration and mortality from coronary artery disease. Geographic variations in incidence largely are unrelated to genetic differences, since migrant groups tend to assume the large-bowel cancer incidence rates of their adopted countries. Furthermore, population groups such as Mormons and Seventh Day Adventists, whose lifestyle and dietary habits differ somewhat from those of their neighbors, have significantly lower-than-expected incidence and mortality rates for colorectal cancer. The incidence of colorectal cancer has increased in Japan since that nation has adopted a more “Western” diet. At least three hypotheses have been proposed to explain the relationship to diet, none of which is fully satisfactory. Animal Fats One hypothesis is that the ingestion of animal fats found in red meats and processed meat leads to an increased proportion of anaerobes in the gut microflora, resulting in the conversion of normal bile acids into carcinogens. This provocative hypothesis is supported by several reports of increased amounts of fecal anaerobes in the stools of patients with colorectal cancer. Diets high in animal (but not vegetable) fats are also associated with high serum cholesterol, which is also associated with enhanced risk for the development of colorectal adenomas and carcinomas. Insulin Resistance The large number of calories in Western diets coupled with physical inactivity has been associated with a higher prevalence of obesity. 
Obese persons develop insulin resistance with increased circulating levels of insulin, leading to higher circulating concentrations of insulin-like growth factor type I (IGF-I). This growth factor appears to stimulate proliferation of the intestinal mucosa. Fiber Contrary to prior beliefs, the results of randomized trials and case-controlled studies have failed to show any value for dietary fiber or diets high in fruits and vegetables in preventing the recurrence of colorectal adenomas or the development of colorectal cancer. The weight of epidemiologic evidence, however, implicates diet as being the major etiologic factor for colorectal cancer, particularly diets high in animal fat and in calories. Up to 25% of patients with colorectal cancer have a family history of the disease, suggesting a hereditary predisposition. Inherited large-bowel cancers can be divided into two main groups: the well-studied but uncommon polyposis syndromes and the more common nonpolyposis syndromes (Table 110-2). Polyposis Coli Polyposis coli (familial polyposis of the colon) is a rare condition characterized by the appearance of thousands of adenomatous polyps throughout the large bowel. It is transmitted as an autosomal dominant trait; the occasional patient with no family history probably developed the condition due to a spontaneous mutation. Polyposis coli is associated with a deletion in the long arm of chromosome 5 (including the APC gene) in both neoplastic (somatic mutation) and normal (germline mutation) cells. The loss of this genetic material (i.e., allelic loss) results in the absence of tumor-suppressor genes whose protein products would normally inhibit neoplastic growth. The presence of soft tissue and bony tumors, congenital hypertrophy of the retinal pigment epithelium, mesenteric desmoid tumors, and ampullary cancers in addition to the colonic polyps characterizes a subset of polyposis coli known as Gardner’s syndrome. 
The appearance of malignant tumors of the central nervous system accompanying polyposis coli defines Turcot’s syndrome. The colonic polyps in all these conditions are rarely present before puberty but are generally evident in affected individuals by age 25. If the polyposis is not treated surgically, colorectal cancer will develop in almost all patients before age 40. Polyposis coli results from a defect in the colonic mucosa, leading to an abnormal proliferative pattern and impaired DNA repair mechanisms. Once the multiple polyps are detected, patients should undergo a total colectomy. Medical therapy with nonsteroidal anti-inflammatory drugs (NSAIDs) such as sulindac and selective cyclooxygenase-2 inhibitors such as celecoxib can decrease the number and size of polyps in patients with polyposis coli; however, this effect on polyps is only temporary, and the use of NSAIDs has not been shown to reduce the risk of cancer. Colectomy remains the primary therapy/prevention. The offspring of patients with polyposis coli, who often are prepubertal when the diagnosis is made in the parent, have a 50% risk for developing this premalignant disorder and should be carefully screened by annual flexible sigmoidoscopy until age 35. Proctosigmoidoscopy is a sufficient screening procedure because polyps tend to be evenly distributed from cecum to anus, making more invasive and expensive techniques such as colonoscopy or barium enema unnecessary. Testing for occult blood in the stool is an inadequate screening maneuver. If a causative germline APC mutation has been identified in an affected family member, an alternative method for identifying carriers is testing DNA from peripheral blood mononuclear cells for the presence of the specific APC mutation. The detection of such a germline mutation can lead to a definitive diagnosis before the development of polyps. 
MYH-Associated Polyposis MYH-associated polyposis (MAP) is a rare autosomal recessive syndrome caused by a biallelic mutation in the MUTYH gene. This hereditary condition may have a variable clinical presentation, resembling polyposis coli or colorectal cancer occurring in younger individuals without polyposis. Screening and colectomy guidelines for this syndrome are less clear than for polyposis coli, but annual to biennial colonoscopic surveillance is generally recommended starting at age 25–30. Hereditary Nonpolyposis Colon Cancer Hereditary nonpolyposis colon cancer (HNPCC), also known as Lynch’s syndrome, is another autosomal dominant trait. It is characterized by the presence of three or more relatives with histologically documented colorectal cancer, one of whom is a first-degree relative of the other two; one or more cases of colorectal cancer diagnosed before age 50 in the family; and colorectal cancer involving at least two generations. In contrast to polyposis coli, HNPCC is associated with an unusually high frequency of cancer arising in the proximal large bowel. The median age for the appearance of an adenocarcinoma is <50 years, 10–15 years younger than the median age for the general population. Despite having a poorly differentiated, mucinous histologic appearance, the proximal colon tumors that characterize HNPCC have a better prognosis than sporadic tumors from patients of similar age. Families with HNPCC often include individuals with multiple primary cancers; the association of colorectal cancer with either ovarian or endometrial carcinomas is especially strong in women, and an increased appearance of gastric, small-bowel, genitourinary, pancreaticobiliary, and sebaceous skin tumors has been reported as well. 
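The three clinical criteria for HNPCC listed above lend themselves to a simple check. The sketch below is a deliberate simplification for illustration only; real pedigree analysis is more involved, and the record fields (`age_at_diagnosis`, `generation`, `first_degree_of_other_two`) are assumed names, not a validated clinical instrument:

```python
from dataclasses import dataclass

@dataclass
class AffectedRelative:
    """One relative with histologically documented colorectal cancer.
    Field names are illustrative assumptions, not a clinical schema."""
    age_at_diagnosis: int
    generation: int                   # pedigree generation number
    first_degree_of_other_two: bool   # collapses the pedigree requirement to a flag

def meets_hnpcc_criteria(relatives: list) -> bool:
    """Simplified check of the three criteria quoted in the text."""
    return (
        len(relatives) >= 3                                      # three or more affected relatives
        and any(r.first_degree_of_other_two for r in relatives)  # one first-degree of the other two
        and any(r.age_at_diagnosis < 50 for r in relatives)      # at least one diagnosed before age 50
        and len({r.generation for r in relatives}) >= 2          # at least two generations involved
    )

family = [
    AffectedRelative(45, generation=2, first_degree_of_other_two=True),
    AffectedRelative(62, generation=1, first_degree_of_other_two=False),
    AffectedRelative(58, generation=2, first_degree_of_other_two=False),
]
print(meets_hnpcc_criteria(family))  # True
```

All four conditions must hold simultaneously; dropping any one relative from the example family above causes the check to fail.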
It has been recommended that members of such families undergo annual or biennial colonoscopy beginning at age 25 years, with intermittent pelvic ultrasonography and endometrial biopsy for afflicted women; such a screening strategy has not yet been validated. HNPCC is associated with germline mutations of several genes, particularly hMSH2 on chromosome 2 and hMLH1 on chromosome 3. These mutations lead to errors in DNA replication and are thought to result in DNA instability because of defective repair of DNA mismatches resulting in abnormal cell growth and tumor development. Testing tumor cells through molecular analysis of DNA or immunohistochemical staining of paraffin-fixed tissue for “microsatellite instability” (sequence changes reflecting defective mismatch repair) in patients with colorectal cancer and a positive family history for colorectal or endometrial cancer may identify probands with HNPCC. (Chap. 351) Large-bowel cancer is increased in incidence in patients with long-standing inflammatory bowel disease (IBD). Cancers develop more commonly in patients with ulcerative colitis than in those with granulomatous (i.e., Crohn’s) colitis, but this impression may result in part from the occasional difficulty of differentiating these two conditions. The risk of colorectal cancer in a patient with IBD is relatively small during the initial 10 years of the disease, but then appears to increase at a rate of ∼0.5–1% per year. Cancer may develop in 8–30% of patients after 25 years. The risk is higher in younger patients with pancolitis. Cancer surveillance strategies in patients with IBD are unsatisfactory. Symptoms such as bloody diarrhea, abdominal cramping, and obstruction, which may signal the appearance of a tumor, are similar to the complaints caused by a flare-up of the underlying disease. 
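As a rough worked example of the IBD figures just quoted (negligible risk during the first decade of disease, then ∼0.5–1% per year), a linear accumulation with an assumed mid-range annual increment lands within the quoted 8–30% range at 25 years. The function and its default increment are illustrative assumptions, not a validated risk model:

```python
def cumulative_crc_risk(years_of_disease: float,
                        annual_increment: float = 0.0075) -> float:
    """Linear sketch: no added colorectal-cancer risk for the first
    10 years of colitis, then `annual_increment` per year thereafter
    (default is an assumed mid-range value of the 0.5-1%/yr figure)."""
    return max(0.0, years_of_disease - 10) * annual_increment

print(f"{cumulative_crc_risk(25):.0%} after 25 years of disease")
```

With these assumptions the estimate at 25 years is about 11%, near the lower end of the 8–30% range reported, consistent with the text's note that the higher figures apply to younger patients with pancolitis.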
In patients with a history of IBD lasting ≥15 years who continue to experience exacerbations, the surgical removal of the colon can significantly reduce the risk for cancer and also eliminate the target organ for the underlying chronic gastrointestinal disorder. The value of such surveillance techniques as colonoscopy with mucosal biopsies and brushings for less symptomatic individuals with chronic IBD is uncertain. The lack of uniformity regarding the pathologic criteria that characterize dysplasia and the absence of data that such surveillance reduces the development of lethal cancers have made this costly practice an area of controversy. OTHER HIGH-RISK CONDITIONS Streptococcus bovis Bacteremia For unknown reasons, individuals who develop endocarditis or septicemia from this fecal bacterium have a high incidence of occult colorectal tumors and, possibly, upper gastrointestinal cancers as well. Endoscopic or radiographic screening appears advisable. Tobacco Use Cigarette smoking is linked to the development of colorectal adenomas, particularly after >35 years of tobacco use. No biologic explanation for this association has yet been proposed. Several orally administered compounds have been assessed as possible inhibitors of colon cancer. The most effective class of chemopreventive agents is aspirin and other NSAIDs, which are thought to suppress cell proliferation by inhibiting prostaglandin synthesis. Regular aspirin use reduces the risk of colon adenomas and carcinomas as well as death from large-bowel cancer; such use also appears to diminish the likelihood for developing additional premalignant adenomas following successful treatment for a prior colon carcinoma. This effect of aspirin on colon carcinogenesis increases with the duration and dosage of drug use. Oral folic acid supplements and oral calcium supplements appear to reduce the risk of adenomatous polyps and colorectal cancers in case-controlled studies. 
The value of vitamin D as a form of chemoprevention is under study. Antioxidant vitamins such as ascorbic acid, tocopherols, and β-carotene are ineffective at reducing the incidence of subsequent adenomas in patients who have undergone the removal of a colon adenoma. Estrogen replacement therapy has been associated with a reduction in the risk of colorectal cancer in women, conceivably by an effect on bile acid synthesis and composition or by decreasing synthesis of IGF-I. SCREENING The rationale for colorectal cancer screening programs is that the removal of adenomatous polyps will prevent colorectal cancer, and that earlier detection of localized, superficial cancers in asymptomatic individuals will increase the surgical cure rate. Such screening programs are particularly important for individuals with a family history of the disease in first-degree relatives. The relative risk for developing colorectal cancer increases to 1.75 in such individuals and may be even higher if the relative was afflicted before age 60. The prior use of proctosigmoidoscopy as a screening tool was based on the observation that 60% of early lesions are located in the rectosigmoid. For unexplained reasons, however, the proportion of large-bowel cancers arising in the rectum has been decreasing during the past several decades, with a corresponding increase in the proportion of cancers in the more proximal descending colon. As such, the potential for proctosigmoidoscopy to detect a sufficient number of occult neoplasms to make the procedure cost-effective has been questioned. Screening strategies for colorectal cancer that have been examined during the past several decades are listed in Table 110-3. Many programs directed at the early detection of colorectal cancers have focused on digital rectal examinations and fecal occult blood (i.e., stool guaiac) testing. 
The digital examination should be part of any routine physical evaluation in adults older than age 40 years, serving as a screening test for prostate cancer in men, a component of the pelvic examination in women, and an inexpensive maneuver for the detection of masses in the rectum. However, because of the proximal migration of colorectal tumors, its value as an overall screening modality for colorectal cancer has become limited. The development of the fecal occult blood test has greatly simplified screening for otherwise inapparent gastrointestinal bleeding. Unfortunately, even when performed optimally, the fecal occult blood test has major limitations as a screening technique. About 50% of patients with documented colorectal cancers have a negative fecal occult blood test, consistent with the intermittent bleeding pattern of these tumors. When random cohorts of asymptomatic persons have been tested, 2–4% have fecal occult blood-positive stools. Colorectal cancers have been found in <10% of these “test-positive” cases, with benign polyps being detected in an additional 20–30%. Thus, a colorectal neoplasm will not be found in most asymptomatic individuals with occult blood in their stool. Nonetheless, persons found to have fecal occult blood-positive stool routinely undergo further medical evaluation, including sigmoidoscopy and/or colonoscopy—procedures that are not only uncomfortable and expensive but also associated with a small risk for significant complications. The added cost of these studies would appear justifiable if the small number of patients found to have occult neoplasms because of fecal occult blood screening could be shown to have an improved prognosis and prolonged survival. Prospectively controlled trials have shown a statistically significant reduction in mortality rate from colorectal cancer for individuals undergoing annual stool guaiac screening. 
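Plugging mid-range values into the figures just quoted shows why most test-positive individuals turn out to have no neoplasm. All specific inputs below are assumed mid-range values chosen for illustration, not data from any particular trial:

```python
# Illustrative yield calculation for stool guaiac screening in an
# asymptomatic cohort, using assumed mid-range values of the figures
# quoted in the text: 2-4% test positive; cancers in <10% of
# positives; benign polyps in an additional 20-30%.
cohort = 10_000
positivity_rate = 0.03       # assumed midpoint of the 2-4% range
cancer_in_positives = 0.08   # assumed value within the "<10%" figure
polyps_in_positives = 0.25   # assumed midpoint of the 20-30% range

positives = cohort * positivity_rate
cancers = positives * cancer_in_positives
polyps = positives * polyps_in_positives
no_neoplasm = positives - cancers - polyps

print(f"positives: {positives:.0f}")      # 300
print(f"cancers found: {cancers:.0f}")    # 24
print(f"benign polyps: {polyps:.0f}")     # 75
print(f"no neoplasm: {no_neoplasm:.0f}")  # 201
```

Under these assumptions, roughly two-thirds of the 300 test-positive individuals have neither cancer nor a polyp, yet all would ordinarily proceed to endoscopic evaluation, which illustrates the cost and false-positive burden the text describes.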
However, this benefit only emerged after >13 years of follow-up and was extremely expensive to achieve, because all positive tests (most of which were falsely positive) were followed by colonoscopy. Moreover, these colonoscopic examinations quite likely provided the opportunity for cancer prevention through the removal of potentially premalignant adenomatous polyps, because the eventual development of cancer was reduced by 20% in the cohort undergoing annual screening. With the appreciation that the carcinogenic process leading to the progression of the normal bowel mucosa to an adenomatous polyp and then to a cancer is the result of a series of molecular changes, investigators have examined fecal DNA for evidence of mutations associated with such molecular changes as evidence of the occult presence of precancerous lesions or actual malignancies. Such a strategy has been tested in more than 4000 asymptomatic individuals whose stool was assessed for occult blood and for 21 possible mutations in fecal DNA; these study subjects also underwent colonoscopy. Although the fecal DNA strategy suggested the presence of more advanced adenomas and cancers than did the fecal occult blood testing approach, the overall sensitivity, using colonoscopic findings as the standard, was less than 50%, diminishing enthusiasm for further pursuit of the fecal DNA screening strategy. The use of imaging studies to screen for colorectal cancers has also been explored. Air contrast barium enemas had been used to identify sources of occult blood in the stool prior to the advent of fiberoptic endoscopy; the cumbersome nature of the procedure and inconvenience to patients limited its widespread adoption. The introduction of computed tomography (CT) scanning led to the development of virtual (i.e., CT) colonography as an alternative to the growing use of endoscopic screening techniques.
Virtual colonography was proposed as being equivalent in sensitivity to colonoscopy and being available in a more widespread manner because it did not require the same degree of operator expertise as fiberoptic endoscopy. However, virtual colonography requires the same cathartic preparation that has limited widespread acceptance of endoscopic colonoscopy, is diagnostic but not therapeutic (i.e., patients with suspicious findings must undergo a subsequent endoscopic procedure for polypectomy or biopsy), and, in the setting of general radiology practices, appears to be less sensitive as a screening technique when compared with endoscopic procedures. With the appreciation of the inadequacy of fecal occult blood testing alone, concerns about the practicality of imaging approaches, and the wider adoption of endoscopic examinations by the primary care community, screening strategies in asymptomatic persons have changed. At present, both the American Cancer Society and the National Comprehensive Cancer Network suggest either fecal occult blood testing annually coupled with flexible sigmoidoscopy every 5 years or colonoscopy every 10 years beginning at age 50 in asymptomatic individuals with no personal or family history of polyps or colorectal cancer. The recommendation for the inclusion of flexible sigmoidoscopy is strongly supported by the recently published results of three randomized trials performed in the United States, the United Kingdom, and Italy, involving more than 350,000 individuals, which consistently showed that periodic (even single) sigmoidoscopic examinations, after more than a decade of median follow-up, lead to an approximate 21% reduction in the development of colorectal cancer and a more than 25% reduction in mortality from the malignant disease. Less than 20% of participants in these studies underwent a subsequent colonoscopy. 
In contrast to colonoscopy, which requires a cathartic preparation and is performed only by highly trained specialists, flexible sigmoidoscopy requires only an enema as preparation and can be accurately performed by nonspecialty physicians or physician-extenders. The randomized screening studies using flexible sigmoidoscopy led to the estimate that approximately 650 individuals needed to be screened to prevent one colorectal cancer death; this contrasts with the data for mammography, where the number of women needing to be screened to prevent one breast cancer death is 2500, reinforcing the efficacy of endoscopic surveillance for colorectal cancer screening. Presumably the benefit from the sigmoidoscopic screening is the result of the identification and removal of adenomatous polyps; it is intriguing that this benefit has been achieved using a technique that leaves the proximal half of the large bowel unvisualized. It remains to be seen whether surveillance colonoscopy, which has gained increasing popularity in the United States for colorectal cancer screening, will prove to be more effective than flexible sigmoidoscopy. Ongoing randomized trials being conducted in Europe are addressing this issue. Although flexible sigmoidoscopy only visualizes the distal half of the large bowel, leading to the assumption that colonoscopy represents a more informative approach, colonoscopy has been reported as being less accurate for screening the proximal rather than the distal colon, perhaps due to technical considerations but also possibly because of a greater frequency of serrated (i.e., “flat”) polyps in the right colon, which are more difficult to identify. At present, colonoscopy performed every 10 years has been offered as an alternative to annual fecal occult blood testing with periodic (every 5 years) flexible sigmoidoscopy.
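The numbers-needed-to-screen quoted above can be restated as deaths prevented per population screened; a small arithmetic sketch (illustrative only, using the figures in the text):

```python
# Numbers needed to screen (NNS), as quoted in the text (illustrative only).
NNS_SIGMOIDOSCOPY = 650    # people screened per colorectal cancer death prevented
NNS_MAMMOGRAPHY = 2500     # women screened per breast cancer death prevented

# Equivalent deaths prevented per 100,000 people screened.
deaths_prevented_sig = 100_000 / NNS_SIGMOIDOSCOPY   # ~154 per 100,000
deaths_prevented_mam = 100_000 / NNS_MAMMOGRAPHY     # 40 per 100,000
```

On this framing, sigmoidoscopic screening prevents roughly four times as many cancer deaths per 100,000 screened as mammography does.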
Colonoscopy has been shown to be superior to double-contrast barium enema and also to have a higher sensitivity for detecting villous or dysplastic adenomas or cancers than the strategy using occult fecal blood testing and flexible sigmoidoscopy. Whether colonoscopy performed every 10 years beginning at age 50 is medically superior and economically equivalent to flexible sigmoidoscopy remains to be determined.

CLINICAL FEATURES

Presenting Symptoms Symptoms vary with the anatomic location of the tumor. Because stool is relatively liquid as it passes through the ileocecal valve into the right colon, cancers arising in the cecum and ascending colon may become quite large without resulting in any obstructive symptoms or noticeable alterations in bowel habits. Lesions of the right colon commonly ulcerate, leading to chronic, insidious blood loss without a change in the appearance of the stool. Consequently, patients with tumors of the ascending colon often present with symptoms such as fatigue, palpitations, and even angina pectoris and are found to have a hypochromic, microcytic anemia indicative of iron deficiency. Because the cancers may bleed intermittently, a random fecal occult blood test may be negative. As a result, the unexplained presence of iron-deficiency anemia in any adult (with the possible exception of a premenopausal, multiparous woman) mandates a thorough endoscopic and/or radiographic visualization of the entire large bowel (Fig. 110-1). Because stool becomes more formed as it passes into the transverse and descending colon, tumors arising there tend to impede the passage of stool, resulting in the development of abdominal cramping, occasional obstruction, and even perforation. Radiographs of the abdomen often reveal characteristic annular, constricting lesions (“apple-core” or “napkin-ring”) (Fig. 110-2).
Cancers arising in the rectosigmoid are often associated with hematochezia, tenesmus, and narrowing of the caliber of stool; anemia is an infrequent finding. While these symptoms may lead patients and their physicians to suspect the presence of hemorrhoids, the development of rectal bleeding and/or altered bowel habits demands a prompt digital rectal examination and proctosigmoidoscopy. Staging, Prognostic Factors, and Patterns of Spread The prognosis for individuals having colorectal cancer is related to the depth of tumor penetration into the bowel wall and the presence of both regional lymph node involvement and distant metastases. These variables are incorporated into the staging system introduced by Dukes and subsequently applied to a TNM classification method, in which T represents the depth of tumor penetration, N the presence of lymph node involvement, and M the presence or absence of distant metastases (Fig. 110-3). Superficial lesions that do not involve regional lymph nodes and do not penetrate through the submucosa (T1) or the muscularis (T2) are designated as stage I (T1–2N0M0) disease; tumors that penetrate through the muscularis but have not spread to lymph nodes are stage II disease (T3-4N0M0); regional lymph node involvement defines stage III (TXN1-2M0) disease; and metastatic spread to sites such as liver, lung, or bone indicates stage IV (TXNXM1) disease. Unless gross evidence of metastatic disease is present, disease stage cannot be determined accurately before surgical resection and pathologic analysis of the operative specimens. It is not clear whether the detection of nodal metastases by special immunohistochemical molecular techniques has the same prognostic implications as disease detected by routine light microscopy. Most recurrences after a surgical resection of a large-bowel cancer occur within the first 4 years, making 5-year survival a fairly reliable indicator of cure. 
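The simplified TNM-to-stage groupings described above can be expressed as a short lookup. A sketch only — the function name and integer T/N/M codes are illustrative, and this reflects the text's simplified scheme rather than the full AJCC staging tables:

```python
def colorectal_stage(t, n, m):
    """Map a simplified TNM triple to the stage groupings described in the text.

    t: depth of penetration (1-4, or None for TX)
    n: number-of-nodes category (0 = N0, 1-2 = N1-2, or None for NX)
    m: distant metastasis (0 = M0, 1 = M1)
    Hypothetical helper for illustration, not a clinical staging tool.
    """
    if m == 1:
        return "IV"      # TXNXM1: any distant metastasis (liver, lung, bone)
    if n is not None and n >= 1:
        return "III"     # TXN1-2M0: regional lymph node involvement
    if t in (1, 2):
        return "I"       # T1-2N0M0: confined to submucosa (T1) or muscularis (T2)
    if t in (3, 4):
        return "II"      # T3-4N0M0: through the muscularis, nodes negative
    raise ValueError("unrecognized TNM combination")
```

For example, a tumor penetrating through the muscularis with negative nodes and no metastases (`colorectal_stage(3, 0, 0)`) falls into stage II, consistent with the groupings above.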
The likelihood for 5-year survival in patients with colorectal cancer is stage-related (Fig. 110-3). That likelihood has improved during the past several decades when similar surgical stages have been compared. The most plausible explanation for this improvement is more thorough intraoperative and pathologic staging. In particular, more exacting attention to pathologic detail has revealed that the prognosis following the resection of a colorectal cancer is not related merely to the presence or absence of regional lymph node involvement; rather, prognosis may be more precisely gauged by the number of involved lymph nodes (one to three lymph nodes [“N1”] vs four or more lymph nodes [“N2”]) and the number of nodes examined. A minimum of 12 sampled lymph nodes is thought necessary to accurately define tumor stage, and the more nodes examined, the better. Other predictors of a poor prognosis after a total surgical resection include tumor penetration through the bowel wall into pericolic fat, poorly differentiated histology, perforation and/or tumor adherence to adjacent organs (increasing the risk for an anatomically adjacent recurrence), and venous invasion by tumor (Table 110-4). Regardless of the clinicopathologic stage, a preoperative elevation of the plasma carcinoembryonic antigen (CEA) level predicts eventual tumor recurrence.

FIGURE 110-1 Double-contrast air-barium enema revealing a sessile tumor of the cecum in a patient with iron-deficiency anemia and guaiac-positive stool. The lesion at surgery was a stage II adenocarcinoma.

FIGURE 110-2 Annular, constricting adenocarcinoma of the descending colon. This radiographic appearance is referred to as an “apple-core” lesion and is always highly suggestive of malignancy.

FIGURE 110-3 Staging and prognosis for patients with colorectal cancer.
The presence of aneuploidy and specific chromosomal deletions, such as a mutation in the b-raf gene in tumor cells, appears to predict for a higher risk for metastatic spread. Conversely, the detection of microsatellite instability in tumor tissue indicates a more favorable outcome. In contrast to most other cancers, the prognosis in colorectal cancer is not influenced by the size of the primary lesion when adjusted for nodal involvement and histologic differentiation. Cancers of the large bowel generally spread to regional lymph nodes or to the liver via the portal venous circulation. The liver represents the most frequent visceral site of metastasis; it is the initial site of distant spread in one-third of recurring colorectal cancers and is involved in more than two-thirds of such patients at the time of death. In general, colorectal cancer rarely spreads to the lungs, supraclavicular lymph nodes, bone, or brain without prior spread to the liver. A major exception to this rule occurs in patients having primary tumors in the distal rectum, from which tumor cells may spread through the paravertebral venous plexus, escaping the portal venous system and thereby reaching the lungs or supraclavicular lymph nodes without hepatic involvement.

TABLE 110-4 Predictors of Poor Outcome following Total Surgical Resection of Colorectal Cancer
Tumor spread to regional lymph nodes
Number of regional lymph nodes involved
Tumor penetration through the bowel wall
Poorly differentiated histology
Perforation
Tumor adherence to adjacent organs
Venous invasion
Preoperative elevation of CEA titer (>5 ng/mL)
Aneuploidy
Specific chromosomal deletion (e.g., mutation in the b-raf gene)
Abbreviation: CEA, carcinoembryonic antigen.
The median survival after the detection of distant metastases has ranged in the past from 6–9 months (hepatomegaly, abnormal liver chemistries) to 24–30 months (small liver nodule initially identified by elevated CEA level and subsequent CT scan), but effective systemic therapy is significantly improving this prognosis. Efforts to use gene expression profiles to identify patients at risk of recurrence or those particularly likely to benefit from adjuvant therapy have not yet yielded practice-changing results. Despite a burgeoning literature examining a host of prognostic factors, pathologic stage at diagnosis remains the best predictor of long-term prognosis. Patients with lymphovascular invasion and high preoperative CEA levels are likely to have a more aggressive clinical course.

TREATMENT

Total resection of tumor is the optimal treatment when a malignant lesion is detected in the large bowel. An evaluation for the presence of metastatic disease, including a thorough physical examination, biochemical assessment of liver function, measurement of the plasma CEA level, and a CT scan of the chest, abdomen, and pelvis, should be performed before surgery. When possible, a colonoscopy of the entire large bowel should be performed to identify synchronous neoplasms and/or polyps. The detection of metastases should not preclude surgery in patients with tumor-related symptoms such as gastrointestinal bleeding or obstruction, but it often prompts the use of a less radical operative procedure. The necessity for a primary tumor resection in asymptomatic individuals with metastatic disease is an area of controversy. At the time of laparotomy, the entire peritoneal cavity should be examined, with thorough inspection of the liver, pelvis, and hemidiaphragm and careful palpation of the full length of the large bowel. Following recovery from a complete resection, patients should be observed carefully for 5 years by semiannual physical examinations and blood chemistry measurements.
If a complete colonoscopy was not performed preoperatively, it should be carried out within the first several postoperative months. Some authorities favor measuring plasma CEA levels at 3-month intervals because of the sensitivity of this test as a marker for otherwise undetectable tumor recurrence. Subsequent endoscopic surveillance of the large bowel, probably at triennial intervals, is indicated, because patients who have been cured of one colorectal cancer have a 3–5% probability of developing an additional bowel cancer during their lifetime and a >15% risk for the development of adenomatous polyps. Anastomotic (“suture-line”) recurrences are infrequent in colorectal cancer patients, provided the surgical resection margins are adequate and free of tumor. The value of periodic CT scans of the abdomen, assessing for an early, asymptomatic indication of tumor recurrence, is an area of uncertainty, with some experts recommending the test be performed annually for the first 3 postoperative years. Radiation therapy to the pelvis is recommended for patients with rectal cancer because it reduces the 20–25% probability of regional recurrences following complete surgical resection of stage II or III tumors, especially if they have penetrated through the serosa. This alarmingly high rate of local disease recurrence is believed to be due to the fact that the contained anatomic space within the pelvis limits the extent of the resection and because the rich lymphatic network of the pelvic side wall immediately adjacent to the rectum facilitates the early spread of malignant cells into surgically inaccessible tissue. The use of sharp rather than blunt dissection of rectal cancers (total mesorectal excision) appears to reduce the likelihood of local disease recurrence to ∼10%. Radiation therapy, either pre- or postoperatively, further reduces the likelihood of pelvic recurrences but does not appear to prolong survival.
Combining radiation therapy with 5-fluorouracil (5-FU)-based chemotherapy, preferably prior to surgical resection, lowers local recurrence rates and improves overall survival. Preoperative radiotherapy is indicated for patients with large, potentially unresectable rectal cancers; such lesions may shrink enough to permit subsequent surgical removal. Radiation therapy is not effective as the primary treatment of colon cancer. Systemic therapy for patients with colorectal cancer has become more effective. 5-FU remains the backbone of treatment for this disease. Partial responses are obtained in 15–20% of patients. The probability of tumor response appears to be somewhat greater for patients with liver metastases when chemotherapy is infused directly into the hepatic artery, but intraarterial treatment is costly and toxic and does not appear to appreciably prolong survival. The concomitant administration of folinic acid (leucovorin) improves the efficacy of 5-FU in patients with advanced colorectal cancer, presumably by enhancing the binding of 5-FU to its target enzyme, thymidylate synthase. A threefold improvement in the partial response rate is noted when folinic acid is combined with 5-FU; however, the effect on survival is marginal, and the optimal dose schedule remains to be defined. 5-FU is generally administered intravenously but may also be given orally in the form of capecitabine (Xeloda) with seemingly similar efficacy. Irinotecan (CPT-11), a topoisomerase 1 inhibitor, prolongs survival when compared to supportive care in patients whose disease has progressed on 5-FU. Furthermore, the addition of irinotecan to 5-FU and leucovorin (LV) (e.g., FOLFIRI) improves response rates and survival of patients with metastatic disease. 
The FOLFIRI regimen is as follows: irinotecan, 180 mg/m2 as a 90-min infusion on day 1; LV, 400 mg/m2 as a 2-h infusion during irinotecan administration; immediately followed by 5-FU bolus, 400 mg/m2, and 46-h continuous infusion of 2.4–3 g/m2 every 2 weeks. Diarrhea is the major side effect from irinotecan. Oxaliplatin, a platinum analogue, also improves the response rate when added to 5-FU and LV (FOLFOX) as initial treatment of patients with metastatic disease. The FOLFOX regimen is as follows: 2-h infusion of LV (400 mg/m2 per day) followed by a 5-FU bolus (400 mg/m2 per day) and 22-h infusion (1200 mg/m2) every 2 weeks, together with oxaliplatin, 85 mg/m2 as a 2-h infusion on day 1. Oxaliplatin frequently causes a dose-dependent sensory neuropathy that often but not always resolves following the cessation of therapy. FOLFIRI and FOLFOX are equal in efficacy. In metastatic disease, these regimens may produce median survivals of 2 years. Monoclonal antibodies are also effective in patients with advanced colorectal cancer. Cetuximab (Erbitux) and panitumumab (Vectibix) are directed against the epidermal growth factor receptor (EGFR), a transmembrane glycoprotein involved in signaling pathways affecting growth and proliferation of tumor cells. Both cetuximab and panitumumab, when given alone, have been shown to benefit a small proportion of previously treated patients, and cetuximab appears to have therapeutic synergy with such chemotherapeutic agents as irinotecan, even in patients previously resistant to this drug; this suggests that cetuximab can reverse cellular resistance to cytotoxic chemotherapy. The antibodies are not effective in the approximate 40% subset of colon tumors that contain mutated K-ras. The use of both cetuximab and panitumumab can lead to an acne-like rash, with the development and severity of the rash being correlated with the likelihood of antitumor efficacy.
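The per-m2 dosing in the FOLFOX schedule above is simple body-surface-area arithmetic; the sketch below illustrates it. The helper name is hypothetical, the per-m2 values are those quoted in the text, and this is illustrative arithmetic only, not prescribing guidance:

```python
def folfox_doses(bsa_m2):
    """Per-cycle FOLFOX doses (mg) scaled to body-surface area (m2),
    using the per-m2 values quoted in the text. Hypothetical helper
    for illustration only; not prescribing guidance."""
    return {
        "oxaliplatin_day1": 85 * bsa_m2,    # 85 mg/m2, 2-h infusion on day 1
        "leucovorin": 400 * bsa_m2,         # 400 mg/m2 per day, 2-h infusion
        "fu_bolus": 400 * bsa_m2,           # 5-FU bolus, 400 mg/m2 per day
        "fu_infusion_22h": 1200 * bsa_m2,   # 5-FU, 1200 mg/m2 over 22 h
    }
```

For a patient with a body-surface area of 2.0 m2, for example, the day-1 oxaliplatin dose works out to 170 mg.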
Inhibitors of the EGFR tyrosine kinase such as erlotinib (Tarceva) or sunitinib (Sutent) do not appear to be effective in colorectal cancer. Bevacizumab (Avastin) is a monoclonal antibody directed against the vascular endothelial growth factor (VEGF) and is thought to act as an antiangiogenesis agent. The addition of bevacizumab to irinotecan-containing combinations and to FOLFOX initially appeared to significantly improve the outcome observed with chemotherapy alone, but subsequent studies have suggested a lesser degree of benefit. The use of bevacizumab can lead to hypertension, proteinuria, and an increased likelihood of thromboembolic events. Patients with solitary hepatic metastases without clinical or radiographic evidence of additional tumor involvement should be considered for partial liver resection, because such procedures are associated with 5-year survival rates of 25–30% when performed on selected individuals by experienced surgeons. The administration of 5-FU and LV for 6 months after resection of tumor in patients with stage III disease leads to a 40% decrease in recurrence rates and 30% improvement in survival. The likelihood of recurrence has been further reduced when oxaliplatin has been combined with 5-FU and LV (e.g., FOLFOX); unexpectedly, the addition of irinotecan to 5-FU and LV as well as the addition of either bevacizumab or cetuximab to FOLFOX did not significantly enhance outcome. Patients with stage II tumors do not appear to benefit appreciably from adjuvant therapy, with the use of such treatment generally restricted to those patients having biologic characteristics (e.g., perforated tumors, T4 lesions, lymphovascular invasion) that place them at higher likelihood for recurrence. The addition of oxaliplatin to adjuvant treatment for patients older than age 70 and those with stage II disease does not appear to provide any therapeutic benefit. 
In rectal cancer, the delivery of preoperative or postoperative combined-modality therapy (5-FU plus radiation therapy) reduces the risk of recurrence and increases the chance of cure for patients with stage II and III tumors, with the preoperative approach being better tolerated. The 5-FU acts as a radiosensitizer when delivered together with radiation therapy. Life-extending adjuvant therapy is used in only about half of patients older than age 65 years. This age bias is unfortunate because the benefits and likely the tolerance of adjuvant therapy in patients age ≥65 years appear similar to those seen in younger individuals.

CANCER OF THE ANUS

Cancers of the anus account for 1–2% of the malignant tumors of the large bowel. Most such lesions arise in the anal canal, the anatomic area extending from the anorectal ring to a zone approximately halfway between the pectinate (or dentate) line and the anal verge. Carcinomas arising proximal to the pectinate line (i.e., in the transitional zone between the glandular mucosa of the rectum and the squamous epithelium of the distal anus) are known as basaloid, cuboidal, or cloacogenic tumors; about one-third of anal cancers have this histologic pattern. Malignancies arising distal to the pectinate line have squamous histology, ulcerate more frequently, and constitute ∼55% of anal cancers. The prognosis for patients with basaloid and squamous cell cancers of the anus is identical when corrected for tumor size and the presence or absence of nodal spread. The development of anal cancer is associated with infection by human papillomavirus, the same organism etiologically linked to cervical cancer. The virus is sexually transmitted. The infection may lead to anal warts (condyloma acuminata), which may progress to anal intraepithelial neoplasia and on to squamous cell carcinoma. The risk for anal cancer is increased among homosexual males, presumably related to anal intercourse.
Anal cancer risk is increased in both men and women with AIDS, possibly because their immunosuppressed state permits more severe papillomavirus infection. Vaccination against human papilloma viruses may reduce the eventual risk for anal cancer. Anal cancers occur most commonly in middle-aged persons and are more frequent in women than men. At diagnosis, patients may experience bleeding, pain, sensation of a perianal mass, and pruritus. Radical surgery (abdominal-perineal resection with lymph node sampling and a permanent colostomy) was once the treatment of choice for this tumor type. The 5-year survival rate after such a procedure was 55–70% in the absence of spread to regional lymph nodes and <20% if nodal involvement was present. An alternative therapeutic approach combining external beam radiation therapy with concomitant chemotherapy (5-FU and mitomycin C) has resulted in biopsy-proven disappearance of all tumor in >80% of patients whose initial lesion was <3 cm in size. Tumor recurrences develop in <10% of these patients, meaning that ∼70% of patients with anal cancers can be cured with nonoperative treatment and without the need for a colostomy. Surgery should be reserved for the minority of individuals who are found to have residual tumor after being managed initially with radiation therapy combined with chemotherapy.

Chapter 111 Tumors of the Liver and Biliary Tree
Brian I. Carr

HEPATOCELLULAR CARCINOMA

INCIDENCE

Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide. The annual global incidence is approximately 1 million cases, with a male-to-female ratio of approximately 4:1 (1:1 without cirrhosis to 9:1 in many high-incidence countries). The incidence rate equals the death rate. In the United States, approximately 22,000 new cases are diagnosed annually, with 18,000 deaths.
The death rates in males in low-incidence countries such as the United States are 1.9 per 100,000 per year; in intermediate areas such as Austria and South Africa, they range from 5.1 to 20; and in high-incidence areas such as in the Orient (China and Korea), they are as high as 23.1–150 per 100,000 per year (Table 111-1). The incidence of HCC in the United States is approximately 3 per 100,000 persons, with significant gender, ethnic, and geographic variations. These numbers are rapidly increasing and may be an underestimate. Approximately 4 million chronic hepatitis C virus (HCV) carriers are in the United States alone. Approximately 10% of them, or 400,000, are likely to develop cirrhosis. Approximately 5%, or 20,000, of these patients may develop HCC annually. Add to this the two other common predisposing factors—hepatitis B virus (HBV) and chronic alcohol consumption—and 60,000 new HCC cases annually seem possible. Future advances in HCC survival will likely depend in part on immunization strategies for HBV (and HCV) and earlier diagnosis by screening of patients at risk of HCC development. Current Directions With the U.S. HCV epidemic, HCC is increasing in most states, and obesity-associated liver disease (nonalcoholic steatohepatitis [NASH]) is increasingly recognized as a cause.

TABLE 111-1 Incidence of Hepatocellular Carcinoma (Persons per 100,000 per Year)
Country                       Males    Females
Argentina                     6.0      2.5
Brazil, Recife                9.2      8.3
Brazil, Sao Paulo             3.8      2.6
Mozambique                    112.9    30.8
South Africa, Cape: Black     26.3     8.4
South Africa, Cape: White     1.2      0.6
Senegal                       25.6     9.0
Nigeria                       15.4     3.2
Gambia                        33.1     12.6
Burma                         25.5     8.8
Japan                         7.2      2.2
Korea                         13.8     3.2
China, Shanghai               34.4     11.6
India, Bombay                 4.9      2.5
India, Madras                 2.1      0.7
Great Britain                 1.6      0.8
France                        6.9      1.2
Italy, Varese                 7.1      2.7
Norway                        1.8      1.1
Spain, Navarra                7.9      4.7

There are two general types of epidemiologic studies of HCC—those of country-based incidence rates (Table 111-1) and those of migrants.
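The U.S. case-projection arithmetic quoted above (approximately 4 million chronic HCV carriers, ~10% progressing to cirrhosis, ~5% of cirrhotics developing HCC annually) can be checked directly; a back-of-envelope sketch using only the text's own figures:

```python
# Back-of-envelope projection from the U.S. figures quoted in the text.
# Illustrative arithmetic only, not an epidemiologic estimate.
hcv_carriers = 4_000_000                 # chronic HCV carriers in the U.S.
cirrhosis = int(hcv_carriers * 0.10)     # ~10% likely to develop cirrhosis
annual_hcc = int(cirrhosis * 0.05)       # ~5% of cirrhotics develop HCC per year
# cirrhosis -> 400,000; annual_hcc -> 20,000 new HCC cases per year from HCV alone
```

Adding the contributions the text attributes to HBV and chronic alcohol consumption is what brings the projection toward the 60,000 new cases per year mentioned above.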
Endemic hot spots occur in areas of China and sub-Saharan Africa, which are associated both with high endemic hepatitis B carrier rates as well as mycotoxin contamination of foodstuffs (aflatoxin B1), stored grains, drinking water, and soil. Environmental factors are important; for example, Japanese in Japan have a higher incidence than Japanese living in Hawaii, who in turn have a higher incidence than those living in California. Etiologic factors for HCC have been studied along two general lines. First are agents identified as carcinogenic in experimental animals (particularly rodents) that are thought to be present in the human environment (Table 111-2). Second is the association of HCC with various other clinical conditions. Probably the best-studied and most potent ubiquitous natural chemical carcinogen is a product of the Aspergillus fungus, called aflatoxin B1. This mold and aflatoxin product can be found in a variety of stored grains in hot, humid places, where peanuts and rice are stored in unrefrigerated conditions. Aflatoxin contamination of foodstuffs correlates well with incidence rates in Africa and to some extent in China. In endemic areas of China, even farm animals such as ducks have HCC. The most potent carcinogens appear to be natural products of plants, fungi, and bacteria, such as bush teas containing pyrrolizidine alkaloids as well as tannic acid and safrole. Pollutants such as pesticides and insecticides are known rodent carcinogens. Hepatitis Both case-control and cohort studies have shown a strong association between chronic hepatitis B carrier rates and increased incidence of HCC. In Taiwanese male postal carriers who were hepatitis B surface antigen (HBsAg)-positive, a 98-fold greater risk for HCC was found compared to HBsAg-negative individuals. The incidence of HCC in Alaskan natives is markedly increased related to a high prevalence of HBV infection.
HBV-based HCC may involve rounds of hepatic destruction with subsequent proliferation and not necessarily frank cirrhosis. The increase in Japanese HCC incidence rates in the last three decades is thought to be from hepatitis C. A large-scale World Health Organization (WHO)-sponsored intervention study is currently under way in Asia involving HBV vaccination of the newborn. HCC in African blacks is not associated with severe cirrhosis but is poorly differentiated and very aggressive. Despite uniform HBV carrier rates among the South African Bantu, there is a ninefold difference in HCC incidence between Mozambicans living along the coast and inland. These differences are attributed to the additional exposure to dietary aflatoxin B1 and other carcinogenic mycotoxins. A typical interval between HCV-associated transfusion and subsequent HCC is approximately 30 years. HCV-associated HCC patients tend to have more frequent and advanced cirrhosis, but in HBV-associated HCC, only half the patients have cirrhosis, with the remainder having chronic active hepatitis (Chap. 362). Other Etiologic Conditions The 75–85% association of HCC with underlying cirrhosis has long been recognized, more typically with macronodular cirrhosis in Southeast Asia, but also with micronodular cirrhosis (alcohol) in Europe and the United States (Chap. 365). It is still not clear whether cirrhosis itself is a predisposing factor to the development of HCC or whether the underlying causes of the cirrhosis are actually the carcinogenic factors. However, ∼20% of U.S. patients with HCC do not have underlying cirrhosis. Several underlying conditions are associated with an increased risk for cirrhosis-associated HCC (Table 111-2), including hepatitis, alcohol, autoimmune chronic active hepatitis, cryptogenic cirrhosis, and NASH.
A less common association is with primary biliary cirrhosis and several metabolic diseases, including hemochromatosis, Wilson's disease, α1 antitrypsin deficiency, tyrosinemia, porphyria cutanea tarda, glycogenoses types 1 and 3, citrullinemia, and orotic aciduria. The etiology of HCC in the ∼20% of patients who have no cirrhosis is currently unclear, and the natural history of their HCC is not well defined. Current Directions Many patients have multiple etiologies, and the interactions of HBV, HCV, alcohol, smoking, and aflatoxins are just beginning to be explored. Symptoms These include abdominal pain, weight loss, weakness, abdominal fullness and swelling, jaundice, and nausea (Table 111-3). Presenting signs and symptoms differ somewhat between high- and low-incidence areas. In high-risk areas, especially in South African blacks, the most common symptom is abdominal pain; by contrast, only 40–50% of Chinese and Japanese patients present with abdominal pain. Abdominal swelling may occur as a consequence of ascites due to the underlying chronic liver disease or may be due to a rapidly expanding tumor. Occasionally, central necrosis or acute hemorrhage into the peritoneal cavity leads to death. In countries with an active surveillance program, HCC tends to be identified at an earlier stage, when symptoms may be due only to the underlying disease. Jaundice is usually due to obstruction of the intrahepatic ducts by underlying liver disease. Hematemesis may occur due to esophageal varices from the underlying portal hypertension. Bone pain is seen in 3–12% of patients, but necropsies show pathologic bone metastases in ∼20% of patients. However, 25% of patients may be asymptomatic.

TABLE 111-3 Presenting features (Symptom: No. of Patients (%))
No symptom: 129 (24)
Abdominal pain: 219 (40)
Other (workup of anemia and various diseases): 64 (12)
Routine physical exam finding, elevated LFTs: 129 (24)
Weight loss: 112 (20)
Appetite loss: 59 (11)
Weakness/malaise: 83 (15)
Jaundice: 30 (5)
Routine CT scan screening of known cirrhosis: 92 (17)
Cirrhosis symptoms (ankle swelling, abdominal bloating, increased girth, pruritus, GI bleed): 98 (18)
Diarrhea: 7 (1)
Tumor rupture: 1
Mean age (yr): 56 ± 13
Male:Female: 3:1
Ethnicity
Abbreviations: CT, computed tomography; GI, gastrointestinal; LFT, liver function test.

Physical Signs Hepatomegaly is the most common physical sign, occurring in 50–90% of patients. Abdominal bruits are noted in 6–25%, and ascites occurs in 30–60% of patients. Ascites should be examined by cytology. Splenomegaly is mainly due to portal hypertension. Weight loss and muscle wasting are common, particularly with rapidly growing or large tumors. Fever is found in 10–50% of patients, of unclear cause. The signs of chronic liver disease may often be present, including jaundice, dilated abdominal veins, palmar erythema, gynecomastia, testicular atrophy, and peripheral edema. Budd-Chiari syndrome can occur due to HCC invasion of the hepatic veins, with tense ascites and a large tender liver (Chap. 365). Paraneoplastic Syndromes Most paraneoplastic syndromes in HCC are biochemical abnormalities without associated clinical consequences. They include hypoglycemia (also caused by end-stage liver failure), erythrocytosis, hypercalcemia, hypercholesterolemia, dysfibrinogenemia, carcinoid syndrome, increased thyroxin-binding globulin, changes in secondary sex characteristics (gynecomastia, testicular atrophy, and precocious puberty), and porphyria cutanea tarda.
Mild hypoglycemia occurs in rapidly growing HCC as part of the terminal illness, and profound hypoglycemia may occur, although the cause is unclear. Erythrocytosis occurs in 3–12% of patients and hypercholesterolemia in 10–40%. A high percentage of patients have thrombocytopenia or leukopenia associated with their fibrosis and resulting from portal hypertension, not from cancer infiltration of bone marrow as in other tumor types. Furthermore, large HCCs have normal or high platelet levels (thrombocytosis), as in ovarian and other gastrointestinal cancers, probably related to elevated interleukin 6 (IL-6) levels. Multiple clinical staging systems for HCC have been described. A widely used one has been the American Joint Committee on Cancer (AJCC) tumor-node-metastasis (TNM) classification. However, the Cancer of the Liver Italian Program (CLIP) system is now popular because it takes cirrhosis into account, based on the original Okuda system (Table 111-4). Patients with Okuda stage III disease have a dire prognosis because they usually cannot be curatively resected, and the condition of their liver typically precludes chemotherapy. Other staging systems have been proposed, and a consensus is needed. They are all based on combining the prognostic features of liver damage with those of tumor aggressiveness and include the Barcelona Clinic Liver Cancer (BCLC) system from Spain (Fig. 111-1), which is externally validated and incorporates baseline survival estimates; the Chinese University Prognostic Index (CUPI); the important and simple Japan Integrated Staging Score (JIS); and SLiDe, which stands for stage, liver damage, and des-γ-carboxy prothrombin. CLIP and BCLC appear most popular in the West, whereas JIS is favored in Japan. Each system has its champions. The best prognosis is for stage I, solitary tumors less than 2 cm in diameter without vascular invasion. Adverse prognostic features include ascites, jaundice, vascular invasion, and elevated α fetoprotein (AFP). Vascular invasion in particular has profound effects on prognosis and may be microscopic or macroscopic (visible on computed tomography [CT] scans). Most large tumors have microscopic vascular invasion, so full staging can usually be made only after surgical resection. Stage III disease contains a mixture of lymph node–positive and lymph node–negative tumors. Stage III patients with positive lymph node disease have a poor prognosis, and few patients survive 1 year. The prognosis of stage IV is poor after either resection or transplantation, and 1-year survival is rare.

TABLE 111-4 CLIP and Okuda staging systems (point values: 0 / 1 / 2)
i. Tumor number: Single / Multiple / –
   Hepatic replacement by tumor (%)a: <50 / <50 / >50
ii. Child-Pugh score: A / B / C
iii. α Fetoprotein level (ng/mL): <400 / ≥400 / –
iv. Portal vein thrombosis (CT): No / Yes / –
CLIP stages (score = sum of points): CLIP 0, 0 points; CLIP 1, 1 point; CLIP 2, 2 points; CLIP 3, 3 points.
Okuda stages: stage 1, all (−); stage 2, 1 or 2 (+); stage 3, 3 or 4 (+).
aExtent of liver occupied by tumor.
Abbreviation: CLIP, Cancer of the Liver Italian Program.

FIGURE 111-1 Barcelona Clinic Liver Cancer (BCLC) staging classification and treatment schedule. Patients with very early hepatocellular carcinoma (HCC) (stage 0) are optimal candidates for resection. Patients with early HCC (stage A) are candidates for radical therapy (resection, liver transplantation [LT], or local ablation via percutaneous ethanol injection [PEI] or radiofrequency [RF] ablation). Patients with intermediate HCC (stage B) benefit from transcatheter arterial chemoembolization (TACE). Patients with advanced HCC, defined as presence of macroscopic vascular invasion, extrahepatic spread, or cancer-related symptoms (Eastern Cooperative Oncology Group performance status 1 or 2) (stage C), benefit from sorafenib. Patients with end-stage disease (stage D) receive symptomatic treatment. Treatment strategy transitions from one stage to another on treatment failure or contraindications for the procedures. [Figure panels indicate: curative treatments (30% of patients; 5-yr survival 40–70%), randomized controlled trial options (50%; median survival 11–20 months), and symptomatic treatment (20%; survival <3 months).] CLT, cadaveric liver transplantation; LDLT, living donor liver transplantation; PST, performance status test. (Modified from JM Llovet et al: JNCI 100:698, 2008.)

New Directions Consensus is needed on staging. These systems will soon be refined or upended by proteomics. APPROACH TO THE PATIENT: The history is important in evaluating putative predisposing factors, including a history of hepatitis or jaundice, blood transfusion, or use of intravenous drugs. A family history of HCC or hepatitis should be sought, and a detailed social history should be taken, including job descriptions, for industrial exposure to possible carcinogenic drugs as well as contraceptive hormones. Physical examination should include assessing stigmata of underlying liver disease such as jaundice, ascites, peripheral edema, spider nevi, palmar erythema, and weight loss. Evaluation of the abdomen for hepatic size, masses or ascites, hepatic nodularity and tenderness, and splenomegaly is needed, as is assessment of overall performance status and a psychosocial evaluation. AFP is a serum tumor marker for HCC; however, it is increased in only approximately one-half of U.S. patients. The lens culinaris agglutinin-reactive fraction of AFP (AFP-L3) assay is thought to be more specific.
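Stepping back to the staging arithmetic: the CLIP point assignments in Table 111-4 (stage = sum of points across four factors) can be sketched as a small calculator. This is purely illustrative, assuming the point values tabulated above; the function name and inputs are hypothetical, and it is not clinical software:

```python
# Illustrative CLIP score calculator based on the Table 111-4 point values;
# hypothetical helper, not a clinical tool.

def clip_score(tumor_multiple, hepatic_replacement_over_50,
               child_pugh, afp_over_400, portal_vein_thrombosis):
    """Return the CLIP score (sum of the four point assignments)."""
    points = 0
    # i. Tumor morphology: single with <50% hepatic replacement = 0 points,
    #    multiple with <50% replacement = 1, >50% replacement = 2
    if hepatic_replacement_over_50:
        points += 2
    elif tumor_multiple:
        points += 1
    # ii. Child-Pugh class: A = 0, B = 1, C = 2
    points += {"A": 0, "B": 1, "C": 2}[child_pugh]
    # iii. Alpha-fetoprotein: <400 ng/mL = 0, >=400 ng/mL = 1
    points += 1 if afp_over_400 else 0
    # iv. Portal vein thrombosis on CT: absent = 0, present = 1
    points += 1 if portal_vein_thrombosis else 0
    return points

# Example: multiple tumors with <50% hepatic replacement, Child-Pugh B,
# AFP >=400 ng/mL, no portal vein thrombosis -> score 3
print(clip_score(True, False, "B", True, False))
```

As the text notes, the CLIP stage is simply this sum, so the sketch makes explicit how each factor shifts the stage by one or two points.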
The other widely used assay is that for des-γ-carboxy prothrombin (DCP), a protein induced by vitamin K absence (PIVKA-2). This protein is increased in as many as 80% of HCC patients but may also be elevated in patients with vitamin K deficiency; it is always elevated after warfarin use. It may also predict portal vein invasion. Both AFP-L3 and DCP are U.S. Food and Drug Administration (FDA) approved. Many other assays have been developed, such as glypican-3, but none has greater aggregate sensitivity and specificity. In a patient presenting with either a new hepatic mass or other indications of recent hepatic decompensation, carcinoembryonic antigen (CEA), vitamin B12, AFP, ferritin, PIVKA-2, and antimitochondrial antibody should be measured, and standard liver function tests should be performed, including prothrombin time (PT), partial thromboplastin time (PTT), albumin, transaminases, γ-glutamyl transpeptidase, and alkaline phosphatase. γ-Glutamyl transpeptidase and alkaline phosphatase may be particularly important in the 50% of HCC patients who have low AFP levels. Decreases in platelet count and white blood cell count may reflect portal hypertension and associated hypersplenism. Hepatitis A, B, and C serology should be measured. If HBV or HCV serology is positive, quantitative measurements of HBV DNA or HCV RNA are needed. New Directions Newer biomarkers are being evaluated, especially tissue- and serum-based genomic profiling. Newer plasma biomarkers include glypican-3, osteopontin, insulin-like growth factor I, and vascular endothelial growth factor. However, they are still in the process of validation. Furthermore, the commercial availability of kits for isolating circulating tumor cells is permitting the molecular profiling of HCCs without the need for further tissue biopsy. An ultrasound examination of the liver is an excellent screening tool.
The two characteristic vascular abnormalities are hypervascularity of the tumor mass (neovascularization or abnormal tumor-feeding arterial vessels) and thrombosis by tumor invasion of otherwise normal portal veins. To determine tumor size and extent and the presence of portal vein invasion accurately, a helical/triphasic CT scan of the abdomen and pelvis with fast-contrast bolus technique should be performed to detect the vascular lesions typical of HCC. Portal vein invasion is normally detected as an obstruction and expansion of the vessel. A chest CT is used to exclude metastases. Magnetic resonance imaging (MRI) can also provide detailed information, especially with the newer contrast agents. Ethiodol (Lipiodol) is an ethiodized oil emulsion retained by liver tumors that can be delivered by hepatic artery injection (5–15 mL) for CT imaging 1 week later. For small tumors, Ethiodol injection is very helpful before biopsy because the histologic presence of the dye constitutes proof that the needle biopsied the mass under suspicion. A prospective comparison of triphasic CT, gadolinium-enhanced MRI, ultrasound, and fluorodeoxyglucose positron emission tomography (FDG-PET) showed similar results for CT, MRI, and ultrasound; PET imaging appears to be positive in only a subset of HCC patients. Compared with MRI, abdominal CT uses a faster single breath-hold, is less complex, and is less dependent on patient cooperation. MRI requires a longer examination, and ascites can cause artifacts, but MRI is better able to distinguish dysplastic or regenerative nodules from HCC. Imaging criteria that do not require biopsy proof have been developed for HCC, as they have >90% specificity. The criteria include nodules >1 cm with arterial enhancement and portal venous washout and, for small tumors, specified growth rates on two scans performed less than 6 months apart (Organ Procurement and Transplant Network).
Nevertheless, explant pathology after liver transplant for HCC has shown that ∼20% of patients diagnosed without biopsy did not actually have a tumor. New Directions The altered tumor vascularity that is a consequence of molecularly targeted therapies is the basis for newer imaging techniques, including contrast-enhanced ultrasound (CEUS) and dynamic MRI. Histologic proof of the presence of HCC is obtained through a core liver biopsy of the liver mass under ultrasound guidance, as well as random biopsy of the underlying liver. Bleeding risk is increased compared with other cancers because (1) the tumors are hypervascular and (2) patients often have thrombocytopenia and decreased liver-dependent clotting factors. Bleeding risk is further increased in the presence of ascites. Tumor seeding along the needle track is an uncommon problem. Fine-needle aspirates can provide sufficient material for diagnosis of cancer, but core biopsies are preferred: tissue architecture allows the distinction between HCC and adenocarcinoma. Laparoscopic approaches can also be used. For patients suspected of having portal vein involvement, a core biopsy of the portal vein may be performed safely. If positive, this is regarded as an exclusion criterion for transplantation for HCC. New Directions Immunohistochemistry has become mainstream. Prognostic subgroupings are being defined based on growth signaling pathway proteins and genotyping strategies, including a prognostically significant five-gene profile score. Furthermore, molecular profiling of the underlying liver has provided evidence for a “field effect” of cirrhosis in generating recurrent or new HCCs after primary resection. In addition, characteristics of HCC stem cells have been identified and include EpCAM, CD44, and CD90 expression, which may form the basis of stem cell therapeutic targeting strategies. Screening has two goals, both in patients at increased risk for developing HCC, such as those with cirrhosis.
The first goal is to detect smaller tumors that are potentially curable by ablation. The second goal is to enhance survival compared with patients who were not diagnosed by surveillance. Evidence from Taiwan has shown a survival advantage to population screening in HBV-positive patients, and other evidence has shown its efficacy in diagnosis for HCV. Prospective studies in high-risk populations showed that ultrasound was more sensitive than AFP elevations alone, although most practitioners request both tests at 6-month intervals for HBV and HCV carriers, especially in the presence of cirrhosis or worsening of liver function tests. However, an Italian study in patients with cirrhosis identified a yearly HCC incidence of 3% but showed no increase in the rate of detection of potentially curable tumors with aggressive screening. Prevention strategies, including universal vaccination against hepatitis, are more likely to be effective than screening efforts. Despite the absence of formal guidelines, most practitioners obtain 6-month AFP and ultrasound (cheap and ubiquitous, even in poor countries) or CT (more sensitive, especially in overweight patients, but more costly) studies when following high-risk patients (HBV carriers, HCV cirrhosis, family history of HCC). Current Directions Cost-benefit analysis is not yet convincing, even though screening is intuitively sound. However, studies from areas with high HBV carrier rates have shown a survival benefit for screening as a result of earlier stage at diagnosis. A definitive clinical trial on screening is unlikely, due to difficulties in obtaining informed consent for patients who are not to be screened. γ-Glutamyl transpeptidase appears useful for detecting small tumors. Prevention strategies can be planned only when the causes of a cancer are known or strongly suspected.
This is true of few human cancers, with significant exceptions being smoking and lung cancer, papillomavirus and cancer of the cervix uteri, and, for HCC, cirrhosis of any cause or dietary contamination by aflatoxin B1. Aflatoxin B1 is one of the most potent known chemical carcinogens and is a product of the Aspergillus mold that grows on peanuts and rice stored in hot and humid climates. The obvious strategy is to refrigerate these foodstuffs during storage and to conduct surveillance programs for elevated aflatoxin B1 levels, as happens in the United States, but not usually in Asia. HBV is commonly transmitted from mother to fetus in Asia (except Japan), and neonatal HBV vaccination programs have resulted in a marked decrease in adolescent HBV infection and, thus, in predicted HCC rates. There are millions of HBV and HCV carriers (4 million with HCV in the United States) who are already infected. Nucleoside analogue–based chemoprevention (entecavir) of HBV-mediated HCC in Japan resulted in a fivefold decrease in HCC incidence over 5 years in cirrhotic but not in noncirrhotic HBV patients. More powerful and effective HCV therapies promise the possibility of prevention of HCV-based HCC in the future. Most HCC patients have two liver diseases, cirrhosis and HCC, each of which is an independent cause of death. The presence of cirrhosis usually places constraints on resection surgery, ablative therapies, and chemotherapy. Thus, patient assessment and treatment planning have to take the severity of the nonmalignant liver disease into account. The clinical management choices for HCC can be complex (Fig. 111-2, Tables 111-5 and 111-6). The natural history of HCC is highly variable. Patients presenting with advanced tumors (vascular invasion, symptoms, extrahepatic spread) have a median survival of ∼4 months, with or without treatment. Treatment results from the literature are difficult to interpret.
Survival is not always a measure of the efficacy of therapy because of the adverse effects on survival of the underlying liver disease. A multidisciplinary team, including a hepatologist, interventional radiologist, surgical oncologist, resection surgeon, transplant surgeon, and medical oncologist, is important for the comprehensive management of HCC patients. Early-stage tumors are successfully treated using various techniques, including surgical resection, local ablation (thermal ablation via radiofrequency [RFA] or microwave [MWA]), and local injection therapies (Table 111-6). Because the majority of patients with HCC suffer from a field defect in the cirrhotic liver, they are at risk for subsequent multiple primary liver tumors. Many will also have significant underlying liver disease and may not tolerate major surgical loss of hepatic parenchyma, and they may be eligible for orthotopic liver transplant (OLTX). Living related donor transplants have increased in popularity, eliminating the wait for a cadaveric donor organ. An important principle in treating early-stage HCC in the nontransplant setting is to use liver-sparing treatments and to focus on treatment of both the tumor and the cirrhosis.

FIGURE 111-2 Hepatocellular carcinoma (HCC) treatment algorithm. The initial clinical evaluation is aimed at assessing the extent of the tumor and the underlying functional compromise of the liver by cirrhosis. Patients are classified as having resectable disease or unresectable disease or as being candidates for transplantation. [Algorithm branch labels include: non-cirrhotic or Child's A, single lesion, no metastases → resection/RFA; single lesion <5 cm, Child's A/B → PEI/RFA/MWA, with transplant on tumor progression; multifocal or >5 cm, Child's A/B → TACE/90Yttrium/new agent trials; transplant candidates (1 lesion <5 cm or 3 lesions all less than 3 cm; Child's A/B/C; AFP <1000 ng/mL; no gross vascular invasion) → transplant evaluation, neoadjuvant bridge therapy (RFA/TACE/90Yttrium), living donor transplant if a suitable donor and UNOS criteria are met, or cadaver donor wait list; ≥4 lesions, gross vascular invasion, LN (+) or metastasis, or Child's C → sorafenib, palliative care, or new agent clinical trials.] AFP, α fetoprotein; LN, lymph node; MWA, microwave ablation; OLTX, orthotopic liver transplantation; PEI, percutaneous ethanol injection; RFA, radiofrequency ablation; TACE, transcatheter arterial chemoembolization; UNOS, United Network for Organ Sharing. Child's A/B/C refers to the Child-Pugh classification of liver failure.

Table 111-5 lists treatment options, including regional therapies delivered by hepatic artery transcatheter techniques (transarterial chemotherapy, transarterial embolization, transarterial chemoembolization, transarterial drug-eluting beads, and transarterial radiotherapies) and molecularly targeted therapies (sorafenib, etc.).

Surgical Excision The risk of major hepatectomy is high (5–10% mortality rate) due to the underlying liver disease and the potential for liver failure, but acceptable in selected cases and highly dependent on surgical experience. The risk is lower in high-volume centers. Preoperative portal vein occlusion can sometimes be performed to cause atrophy of the HCC-involved lobe and compensatory hypertrophy of the noninvolved liver, permitting safer resection. Intraoperative ultrasound is useful for planning the surgical approach and can image the proximity of major vascular structures that may be encountered during the dissection. In cirrhotic patients, any major liver surgery can result in liver failure. The Child-Pugh classification of liver failure remains a reliable prognosticator for tolerance of hepatic surgery, and only Child A patients should be considered for surgical resection.
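Because the Child-Pugh class gates resection here, a brief sketch of the standard Child-Pugh arithmetic may help. The cutoffs below come from the score's well-known published definition (five parameters, each worth 1–3 points; totals of 5–6, 7–9, and 10–15 map to classes A, B, and C), not from this chapter, and the helper function is hypothetical:

```python
# Sketch of the standard Child-Pugh score (general clinical knowledge,
# not reproduced from this chapter); hypothetical helper, not clinical software.

def child_pugh_class(bilirubin_mg_dl, albumin_g_dl, inr,
                     ascites, encephalopathy):
    """ascites/encephalopathy take 'none', 'mild', or 'severe'."""
    def band(value, low, high):
        # 1 point below `low`, 2 points from `low` through `high`, 3 above
        return 1 if value < low else (2 if value <= high else 3)

    score = 0
    score += band(bilirubin_mg_dl, 2.0, 3.0)   # total bilirubin: <2 / 2-3 / >3 mg/dL
    # Albumin scores in the opposite direction (higher albumin is better):
    # >3.5 g/dL = 1 point, 2.8-3.5 = 2, <2.8 = 3
    score += {1: 3, 2: 2, 3: 1}[band(albumin_g_dl, 2.8, 3.5)]
    score += band(inr, 1.7, 2.3)               # INR: <1.7 / 1.7-2.3 / >2.3
    grade_points = {"none": 1, "mild": 2, "severe": 3}
    score += grade_points[ascites]
    score += grade_points[encephalopathy]
    return "A" if score <= 6 else ("B" if score <= 9 else "C")

# Example: normal labs, no ascites or encephalopathy -> class A
print(child_pugh_class(1.0, 4.0, 1.2, "none", "none"))
```

The class boundaries (A ≤6, B 7–9, C ≥10) are what make the "only Child A patients should be considered for surgical resection" rule operational at the bedside.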
Child B and C patients with stage I and II HCC should be referred for OLTX if appropriate, as should patients with ascites or a recent history of variceal bleeding. Although open surgical excision is the most reliable approach, the patient may be better served with a laparoscopic approach to resection, using RFA, MWA, or percutaneous ethanol injection (PEI). No adequate comparisons of these different techniques have been undertaken, and the choice of treatment is usually based on physician skill. However, RFA has been shown to be superior to PEI in necrosis induction for tumors <3 cm in diameter and is thought to be equivalent to open resection; thus, it is the treatment of first choice for these small tumors. As tumors grow larger than 3 cm, especially ≥5 cm, the effectiveness of RFA-induced necrosis diminishes. The combination of transcatheter arterial chemoembolization (TACE) with RFA has shown superior results to TACE alone in a prospective, randomized trial. Although vascular invasion is a preeminent negative prognostic factor, microvascular invasion in small tumors appears not to be a negative factor. Local Ablation Strategies RFA uses heat to ablate tumors. The maximum size of the probe arrays allows for a 7-cm zone of necrosis, which would be adequate for a 3- to 4-cm tumor. The heat reliably kills cells within the zone of necrosis. Treatment of tumors close to the main portal pedicles can lead to bile duct injury and obstruction, limiting the location of tumors that are anatomically suited for this technique. RFA can be performed percutaneously with CT or ultrasound guidance, or at the time of laparoscopy with ultrasound guidance. Local Injection Therapy Numerous agents have been used for local injection into tumors, most commonly ethanol (PEI). The relatively soft HCC within the hard background cirrhotic liver allows for injection of large volumes of ethanol into the tumor without diffusion into the hepatic parenchyma or leakage out of the liver.
PEI causes direct destruction of cancer cells, but it is not selective for cancer and will destroy normal cells in the vicinity. It usually requires multiple injections (average three), in contrast to one for RFA. The maximum size of tumor reliably treated is 3 cm, even with multiple injections. Current Directions Resection and RFA obtain similar results. However, a distinction has been made between the causes and prevention strategies needed to prevent early versus late tumor recurrences after resection. Early recurrence has been linked to tumor invasion factors, especially microvascular tumor invasion with elevated transaminases, whereas late recurrence has been associated with cirrhosis and viral hepatitis factors and, thus, the development of new tumors. See the section on virus-directed adjuvant therapy below. Liver Transplantation (OLTX) A viable option for stage I and II tumors in the setting of cirrhosis is OLTX, with survival approaching that for noncancer cases. OLTX for patients with a single lesion ≤5 cm or three or fewer nodules, each ≤3 cm (Milan criteria), has resulted in excellent outcomes (Table 111-6 summarizes some randomized clinical trials involving transhepatic artery chemoembolization [TACE] for hepatocellular carcinoma). Neoadjuvant bridge therapy (RFA, TACE, or 90Yttrium) is often used to keep tumors from growing in the months until a donor liver becomes available. What remains unclear, however, is whether this translates into prolonged survival after transplant. Further, it is not known whether patients who have had their tumor(s) treated preoperatively follow the recurrence pattern predicted by their tumor status at the time of transplant (i.e., post–local ablative therapy), or whether they follow the course set by their tumor parameters present before such treatment. The United Network for Organ Sharing (UNOS) point system for priority scoring of OLTX recipients now includes additional points for patients with HCC.
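The Milan criteria just described amount to a simple decision rule, which can be sketched as follows (a hypothetical helper encoding the thresholds as stated in the text; lesion diameters in cm):

```python
# Illustrative encoding of the Milan transplant eligibility criteria
# as described in the text: single lesion <=5 cm, or up to three
# nodules each <=3 cm. Hypothetical helper, not a clinical tool.

def within_milan(lesion_diameters_cm):
    """Return True if the tumor burden falls within the Milan criteria."""
    if len(lesion_diameters_cm) == 1:
        return lesion_diameters_cm[0] <= 5.0
    return (len(lesion_diameters_cm) <= 3
            and all(d <= 3.0 for d in lesion_diameters_cm))

# A solitary 4.5-cm lesion qualifies; a solitary 6-cm lesion does not.
print(within_milan([4.5]), within_milan([6.0]))
```

Note that the rule is disjunctive: a solitary 5-cm tumor passes, while the same total burden split into four small nodules does not.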
The success of living related donor liver transplantation programs has also led to patients receiving transplantation earlier for HCC, often with greater than minimal tumors. Current Directions Expanded criteria for larger HCCs beyond the Milan criteria (one lesion ≤5 cm or three lesions, each ≤3 cm), such as the University of California, San Francisco (UCSF) criteria (single lesion ≤6.5 cm, or two to three lesions, each ≤4.5 cm, with a total tumor diameter ≤8 cm; 1- and 5-year survival rates of 90% and 75%, respectively), are being increasingly accepted by various UNOS areas for OLTX, with satisfactory longer-term survival comparable to Milan criteria results. Furthermore, downstaging of HCCs that are too large for the Milan criteria by medical therapy (TACE) is increasingly recognized as acceptable treatment before OLTX, with outcomes equivalent to those of patients who originally were within the Milan criteria. Within-criteria patients with AFP levels >1000 ng/mL have exceptionally high post-OLTX recurrence rates. Also, the use of “salvage” OLTX for HCC recurrence after resection has produced conflicting outcomes. Shortages of organs combined with advances in resection safety have led to increasing use of resection for patients with good liver function. Adjuvant Therapy The role of adjuvant chemotherapy for patients after resection or OLTX remains unclear. Both adjuvant and neoadjuvant approaches have been studied, but no clear advantage in disease-free or overall survival has been found, although a meta-analysis of several trials revealed a significant improvement in disease-free and overall survival. Although analysis of postoperative adjuvant systemic chemotherapy trials demonstrated no disease-free or overall survival advantage, single studies of TACE and neoadjuvant 131I-Ethiodol showed enhanced survival after resection. Antiviral therapy, instead of anticancer therapy, has been successful in decreasing tumor recurrences in the postresection adjuvant setting.
Nucleoside analogues in HBV-based HCC and peg-interferon plus ribavirin in HCV-based HCC have both been effective in reducing recurrence rates. Current Directions A large adjuvant trial examining resection and transplantation, with or without sorafenib (see below), is in progress. The success of viral therapies in decreasing HCC recurrence after resection is part of a broader focus on the tumor microenvironment (stroma, blood vessels, inflammatory cells, and cytokines) as a mediator of HCC progression and a target for new therapies. Fewer surgical options exist for stage III tumors involving major vascular structures. In patients without cirrhosis, a major hepatectomy is feasible, although prognosis is poor. Patients with Child A cirrhosis may be resected, but a lobectomy is associated with significant morbidity and mortality rates, and long-term prognosis is poor. Nevertheless, a small percentage of patients will achieve long-term survival, justifying an attempt at resection when feasible. Because of the advanced nature of these tumors, even successful resection can be followed by rapid recurrence. These patients are not considered candidates for transplantation because of the high tumor recurrence rates, unless their tumors can first be downstaged with neoadjuvant therapy. Decreasing the size of the primary tumor allows for less extensive surgery, and the delay in surgery allows extrahepatic disease to manifest on imaging studies, thereby avoiding OLTX that would not be beneficial. The prognosis is poor for stage IV tumors, and no surgical treatment is recommended. Systemic Chemotherapy A large number of controlled and uncontrolled clinical studies have been performed with most of the major classes of cancer chemotherapy. No single agent or combination of agents given systemically reproducibly leads to even a 25% response rate or has any effect on survival.
Regional Chemotherapy In contrast to the dismal results of systemic chemotherapy, a variety of agents given via the hepatic artery have activity against HCC confined to the liver (Table 111-6). Two randomized controlled trials have shown a survival advantage for TACE in a selected subset of patients: one used doxorubicin, and the other used cisplatin. Although increased hepatic extraction of chemotherapy has been shown for very few drugs, some drugs, such as cisplatin, doxorubicin, mitomycin C, and possibly neocarzinostatin, produce substantial objective responses when administered regionally. Few data are available on continuous hepatic arterial infusion for HCC, although pilot studies with cisplatin have shown encouraging responses. Because the reports have not usually stratified responses or survival based on TNM staging, it is difficult to know long-term prognosis in relation to tumor extent. Most of the studies on regional hepatic arterial chemotherapy also use an embolizing agent such as Ethiodol, gelatin sponge particles (Gelfoam), starch (Spherex), or microspheres. Two products, Embospheres (Biospheres) and Contour SE, are composed of microspheres of defined size ranges, with particles of 40–120, 100–300, 300–500, and 500–1000 μm. The optimal particle diameter for TACE has yet to be defined. Consistently higher objective response rates are reported for arterial administration of drugs together with some form of hepatic artery occlusion than for any form of systemic chemotherapy to date. The widespread use of some form of embolization in addition to chemotherapy has added to its toxicities, which include a frequent but transient fever, abdominal pain, and anorexia (all in >60% of patients). In addition, >20% of patients have increased ascites or transient elevation of transaminases. Cystic artery spasm and cholecystitis are also not uncommon. However, higher response rates have also been obtained.
The hepatic toxicities associated with embolization may be ameliorated by the use of degradable starch microspheres, with 50–60% response rates. A major problem that TACE trials have had in showing a survival advantage is that many HCC patients die of their underlying cirrhosis, not the tumor. Nevertheless, two randomized controlled trials, one using doxorubicin and the other using cisplatin, showed a survival advantage for TACE versus placebo (Table 111-6). In addition, it is not clear that formal oncologic CT response criteria are adequate for HCC: a loss of vascularity on CT without size change may be an index of loss of viability and thus of response to TACE. Improving quality of life is also a legitimate goal of regional therapy. Drug-eluting beads using doxorubicin (DEB-TACE) have been claimed to produce equivalent survival with less toxicity, but this strategy has not been tested in a randomized trial. Kinase Inhibitors A survival advantage has been observed for the oral multikinase inhibitor sorafenib (Nexavar) versus placebo in two randomized trials. It targets both the Raf mitogenic pathway and the vascular endothelial growth factor receptor (VEGFR) endothelial vasculogenesis pathway. However, tumor responses were negligible, and survival in the treatment arm of the Asian trial was shorter than in the placebo arm of the Western trial (Table 111-7). Sorafenib has considerable toxicity, with 30–40% of patients requiring "drug holidays," dose reductions, or cessation of therapy. The most common toxicities include fatigue, hypertension, diarrhea, mucositis, and skin changes, such as the painful hand-foot syndrome, hair loss, and itching, each in 20–40% of patients. Several "look-alike" new agents that also target angiogenesis have proved to be either inferior or more toxic. These include sunitinib, brivanib, linifanib, everolimus, and bevacizumab (Table 111-8).
The idea of angiogenesis alone as a major HCC therapeutic target may need revision. New Therapies Although prolonged survival has been reported in phase II trials using newer agents, such as bevacizumab plus erlotinib, the data from a phase III trial were disappointing.
Table 111-7 (fragment): Sorafenib vs placebo (targets: Raf, VEGFR, PDGFR), 10.7 vs 7.9; sorafenib vs placebo in Asians (targets: Raf, VEGFR, PDGFR), 6.5 vs 4.2. Abbreviations: PDGFR, platelet-derived growth factor receptor; Raf, rapidly accelerated fibrosarcoma; VEGFR, vascular endothelial growth factor receptor.
Several forms of radiation therapy have been used in the treatment of HCC, including external-beam radiation and conformal radiation therapy. Radiation hepatitis remains a dose-limiting problem. The pure beta emitter 90Yttrium, attached to either glass (TheraSphere) or resin (SIR-Spheres) microspheres and injected into a major branch hepatic artery, has been assessed in phase II trials of HCC and has shown encouraging tumor control and survival effects with minimal toxicities. Randomized phase III trials comparing it to TACE have yet to be completed. The main attraction of 90Yttrium therapy is its safety in the presence of major branch portal vein thrombosis, for which TACE is dangerous or contraindicated. Furthermore, external-beam radiation has been reported to be safe and useful in the control of major branch portal or hepatic vein invasion (thrombosis) by tumors. The studies have all been small. Vitamin K has been assessed in clinical trials at high dosage for its HCC-inhibitory actions. This idea is based on the characteristic biochemical defect in HCC of elevated plasma levels of immature prothrombin (DCP or PIVKA-2), due to a defect in the activity of prothrombin carboxylase, a vitamin K–dependent enzyme. Two vitamin K randomized controlled trials from Japan showed decreased tumor occurrence, but a major phase III trial aimed at limiting postresection recurrence was not successful.
Current Directions A number of new kinase inhibitors are being evaluated for HCC (Tables 111-9 and 111-10). These include biologicals, such as Raf kinase and vascular endothelial growth factor (VEGF) inhibitors, and agents that target various steps of the cell growth pathway. Current hopes focus particularly on the Met pathway inhibitors, such as tivantinib, and several IGF receptor antagonists. 90Yttrium looks promising and is without chemotherapy toxicities. It is particularly attractive because, unlike TACE, it seems safe in the presence of portal vein thrombosis, a pathognomonic feature of HCC aggressiveness. The bottleneck of liver donors for OLTX is at last widening with the increasing use of living donors, and criteria for OLTX for larger HCCs are slowly expanding. Patient participation in clinical trials assessing new therapies is encouraged (www.clinicaltrials.gov). The main effort now is the evaluation of combinations of the compounds listed in Tables 111-7 to 111-9 that target different pathways, as well as the combination of any of these targeted therapies, especially sorafenib, with TACE or 90Yttrium radioembolization. Combining TACE with sorafenib appears to be safe in phase II studies, with promising survival data, but randomized studies are still in progress. The same is true for intra-arterial 90Yttrium plus sorafenib as therapy for HCC and as bridge-to-transplant therapy.
[Table fragment] Ubiquitin-proteasome: bortezomib. Abbreviations: EGF, epidermal growth factor; FGF1, fibroblast growth factor 1; IGF-I, insulin-like growth factor I; PDGF, platelet-derived growth factor; VEGF, vascular endothelial growth factor.
Tumor growth or spread is considered a poor prognostic sign and evidence of treatment failure. By contrast, patients receiving chemotherapy are judged to have a response if there is shrinkage of tumor size. Lack of response/size decrease has been thought of as treatment failure.
Tumors of the Liver and Biliary Tree 552
[Table fragment] Agents and modalities under evaluation for HCC: EGF receptor antagonists (erlotinib, gefitinib, lapatinib, cetuximab, brivanib); multikinase antagonists (sorafenib, sunitinib); VEGF antagonist (bevacizumab); VEGFR antagonist (ABT-869, linifanib); mTOR antagonists (sirolimus, temsirolimus, everolimus); proteasome inhibitors (bortezomib); vitamin K; 131I–Ethiodol (lipiodol); 131I–Ferritin; 90Yttrium microspheres (TheraSphere, SIR-Spheres); 166Holmium; 188Rhenium; three-dimensional conformal radiation; proton beam high-dose radiotherapy; Gamma Knife; CyberKnife; new targets: inhibitors of cyclin-dependent kinases (Cdk), TRAIL-induction caspases, and stem cells. Abbreviations: EGF, epidermal growth factor; mTOR, mammalian target of rapamycin; VEGF, vascular endothelial growth factor; VEGFR, vascular endothelial growth factor receptor.
Three considerations in HCC management have completely changed the views concerning nonshrinkage after therapy. First, the correlation between response to chemotherapy and survival is poor in various tumors; in some tumors, such as ovarian cancer and small-cell lung cancer, substantial tumor shrinkage on chemotherapy is followed by rapid tumor regrowth. Second, the Sorafenib HCC Assessment Randomized Protocol (SHARP) phase III trial of sorafenib versus placebo for unresectable HCC showed that survival could be significantly enhanced in the treatment arm with only 2% of the patients having a tumor response but 70% of patients having disease stabilization. This observation has led to a reconsideration of the usefulness of response and the significance of disease stability. Third, HCC is a typically highly vascular tumor, and the vascularity is considered to be a measure of tumor viability. As a result, the Response Evaluation Criteria in Solid Tumors (RECIST) have been modified to mRECIST, which requires measurement of vascular/viable tumor on the CT or MRI scan.
A partial response is defined as a 30% decrease in the sum of diameters of viable (arterially enhancing) target tumors. The need for semiquantitation of tumor vascularity on scans has led to the introduction of diffusion-weighted MRI. Tissue-specific imaging agents such as gadoxetic acid (Primovist or Eovist) and the move to functional and genetic imaging mark a shift in approaches. Furthermore, plasma AFP response may be a biologic marker of radiologic response. Long-term survival is associated with resection, ablation, or transplantation, all of which can yield >70% 5-year survival. Liver transplant is the only therapy that can treat the tumor and the underlying liver disease simultaneously and may be the most important advance in HCC therapy in 50 years. Unfortunately, it benefits only patients with limited-size tumors without macrovascular portal vein invasion. Untreated patients with multinodular asymptomatic tumors without vascular invasion or extrahepatic spread have a median survival of approximately 16 months. Chemoembolization (TACE) improves their median survival to 19–20 months and is considered standard therapy for these patients, who represent the majority of HCC patients, although 90Yttrium therapy may provide similar results with less toxicity. Patients with advanced-stage disease, vascular invasion, or metastases have a median survival of around 6 months. Among this group, outcomes may vary according to the underlying liver disease. It is at this group that kinase inhibitors are directed.
The Most Common Modes of Patient Presentation
1. A patient with a known history of hepatitis, jaundice, or cirrhosis, with an abnormality on ultrasound or CT scan, or rising AFP or DCP (PIVKA-2)
2. A patient with an abnormal liver function test as part of a routine examination
3.
4. Symptoms of HCC, including cachexia, abdominal pain, or fever
Physical examination:
1. Clinical jaundice, asthenia, itching (scratches), tremors, or disorientation
2. Hepatomegaly, splenomegaly, ascites, peripheral edema, skin signs of liver failure
Evaluation:
1. Blood tests: full blood count (splenomegaly), liver function tests, ammonia levels, electrolytes, AFP and DCP (PIVKA-2), Ca2+ and Mg2+; hepatitis B, C, and D serology (and quantitative HBV DNA or HCV RNA, if either is positive); neurotensin (specific for fibrolamellar HCC)
2. Triphasic dynamic helical (spiral) CT scan of liver (if inadequate, then follow with an MRI); chest CT scan; upper and lower gastrointestinal endoscopy (for varices, bleeding, ulcers); and brain scan (only if symptoms suggest)
3. Core biopsy of the tumor and separate biopsy of the underlying liver
Treatment options:
1. HCC <2 cm: RFA, PEI, or resection
2. HCC >2 cm, no vascular invasion: liver resection, RFA, or OLTX
3. Multiple unilobar tumors or tumor with vascular invasion: TACE or sorafenib
4. Bilobar tumors, no vascular invasion: TACE with OLTX for patients with tumor response
5. Extrahepatic HCC or elevated bilirubin: sorafenib or bevacizumab plus erlotinib (combination agent trials are in progress)
Fibrolamellar HCC (FL-HCC) This rarer variant of HCC has a quite different biology than adult-type HCC. None of the known HCC causative factors seem important here. It is typically a disease of younger adults, often teenagers, and predominantly females. It is AFP-negative, but patients typically have elevated blood neurotensin levels, normal liver function tests, and no cirrhosis. Radiology is similar to that of HCC, except that the characteristic adult-type portal vein invasion is less common. Although it is often multifocal in the liver, and therefore not resectable, and metastases are common, especially to lungs and locoregional lymph nodes, survival is often much better than with adult-type HCC. Resectable tumors are associated with 5-year survival ≥50%. Patients often present with a huge liver or unexplained weight loss, fever, or elevated liver function tests on routine evaluations. These huge masses suggest quite slow growth for many tumors.
Surgical resection is the best management option, even for metastases, as these tumors respond much less well to chemotherapy than adult-type HCC. Although several series of OLTX for FL-HCC have been reported, the patients seem to die from tumor recurrences, with a 2- to 5-year lag compared with OLTX for adult-type HCC. Anecdotal responses to gemcitabine plus cisplatin-TACE are reported. Epithelioid Hemangioendothelioma (EHE) This rare vascular tumor of adults is also usually multifocal and can also be associated with prolonged survival, even in the presence of metastases, which are commonly in the lung. There is usually no underlying cirrhosis. Histologically, these tumors are usually of borderline malignancy and express factor VIII, confirming their endothelial origin. OLTX may produce prolonged survival. Cholangiocarcinoma (CCC) CCC typically refers to mucin-producing adenocarcinomas (different from HCC) that arise from the biliary tract and have features of cholangiocyte differentiation. They are grouped by their anatomic site of origin as intrahepatic (IHC), perihilar (central, ∼65% of CCCs), and peripheral (or distal, ∼30% of CCCs). IHC is the second most common primary liver tumor. Depending on the site of origin, they have different features and require different treatments. They arise on the basis of cirrhosis less frequently than HCC; however, cirrhosis, primary biliary cirrhosis, and HCV infection all predispose to IHC. Nodular tumors arising at the bifurcation of the common bile duct are called Klatskin tumors and are often associated with a collapsed gallbladder, a finding that mandates visualization of the entire biliary tree. The approach to management of central and peripheral CCC is quite different. Incidence is increasing. Although most CCCs have no obvious cause, a number of predisposing factors have been identified.
Predisposing diseases include primary sclerosing cholangitis (PSC), an autoimmune disease in which CCC develops in 10–20% of patients, and liver fluke infestation in Asia, especially with Opisthorchis viverrini and Clonorchis sinensis. CCC also seems to be associated with any cause of chronic biliary inflammation and injury, including alcoholic liver disease, choledocholithiasis, choledochal cysts (10%), and Caroli's disease (a rare inherited form of bile duct ectasia). CCC most typically presents as painless jaundice, often with pruritus or weight loss. Diagnosis is made by biopsy, percutaneously for peripheral liver lesions or, more commonly, via endoscopic retrograde cholangiopancreatography (ERCP) under direct vision for central lesions. The tumors often stain positively for cytokeratins 7, 8, and 19 and negatively for cytokeratin 20. However, histology alone cannot usually distinguish CCC from metastases from colon or pancreas primary tumors. Serologic tumor markers appear to be nonspecific, but CEA, CA 19-9, and CA-125 are often elevated in CCC patients and are useful for following response to therapy. Radiologic evaluation typically starts with ultrasound, which is very useful in visualizing dilated bile ducts, and then proceeds with either MRI or magnetic resonance cholangiopancreatography (MRCP) or helical CT scans. Invasive cholangiopancreatography (ERCP) is then needed to define the biliary tree and obtain a biopsy, or therapeutically to decompress an obstructed biliary tree with internal stent placement. If that fails, percutaneous biliary drainage will be needed, with the biliary drainage flowing into an external bag. Central tumors often invade the porta hepatis, and locoregional lymph node involvement by tumor is frequent. Incidence has been increasing in recent decades; few patients survive 5 years. The usual treatment is surgical, but combination systemic chemotherapy may be effective.
After complete surgical resection for IHC, 5-year survival is 25–30%. Combination radiation therapy with liver transplant has produced a 5-year recurrence-free survival rate of 65%. Hilar CCC is resectable in ∼30% of patients and usually involves bile duct resection and lymphadenectomy for prognostication. Typical survival is approximately 24 months, with recurrences being mainly in the operative bed but with ∼30% in the lungs and liver. Distal CCC, which involves the main ducts, is normally treated by resection of the extrahepatic bile ducts, often with pancreaticoduodenectomy. Survival is similar. Due to the high rates of locoregional recurrence or positive surgical margins, many patients receive postoperative adjuvant radiotherapy; its effect on survival has not been assessed. Intraluminal brachyradiotherapy has also shown some promise. Photodynamic therapy enhanced survival in one study; in this technique, sodium porfimer is injected intravenously and then subjected to intraluminal red light laser photoactivation. OLTX has been assessed for treatment of unresectable CCC. Five-year survival was ∼20%, so enthusiasm waned. However, neoadjuvant radiotherapy with sensitizing chemotherapy has shown better survival rates for CCC treated by OLTX and is currently used by UNOS for perihilar CCC <3 cm with neither intrahepatic nor extrahepatic metastases. A 12-center data collection study of 287 patients with perihilar CCC confirmed the benefit of this approach in a subset of patients, with a 53% 5-year survival rate but with 10% patient dropout before transplantation. The patients had neoadjuvant external radiation with radiosensitizing therapy. Patients with tumors >3 cm had significantly shorter survival. Multiple chemotherapeutic agents have been assessed for activity and survival in unresectable CCC. Most have been inactive. However, both systemic and hepatic arterial gemcitabine have shown promising results.
The combination of cisplatin plus gemcitabine produced a survival advantage compared with gemcitabine alone in a 410-patient randomized controlled phase III trial for patients with locally advanced or metastatic CCC and is now considered standard therapy for unresectable CCC. Median overall survival in the combination arm was 11.7 months versus 8.1 months for gemcitabine alone. Significant responses were seen mainly in patients with IHC and gallbladder cancer. However, neither surgery for lymph node–positive disease nor regional chemotherapy in nonsurgical patients has shown any survival advantage thus far. Several case series have shown safety and some responses for hepatic arterial chemotherapy with gemcitabine, drug-eluting beads, and 90Yttrium microspheres, but no convincing clinical trials are available. Clinical trials are under way with targeted therapies. Bevacizumab plus erlotinib gave a 10% partial response rate with a median overall survival of 9.9 months. A sorafenib trial yielded an overall survival of 4.4 months, but 50% of the patients had received previous chemotherapy. Patients with unresectable tumors should be treated in clinical trials. Gallbladder (GB) cancer has an even worse prognosis than CCC, with a typical survival of ∼6 months or less. Women are affected much more commonly than men (4:1), unlike in HCC or CCC, and GB cancer occurs more frequently than CCC. Most patients have a history of antecedent gallstones, but very few patients with gallstones develop GB cancer (∼0.2%). GB cancer presents similarly to CCC and is often diagnosed unexpectedly during gallstone or cholecystitis surgery. Presentation is typically that of chronic cholecystitis, chronic right upper quadrant pain, and weight loss. Useful but nonspecific serum markers include CEA and CA 19-9. CT scans or MRCP typically reveal a GB mass. The mainstay of treatment is surgical, either simple or radical cholecystectomy for stage I or II disease, respectively.
Survival rates are near 100% at 5 years for stage I and range from 60–90% at 5 years for stage II. More advanced GB cancer has worse survival, and many patients are unresectable. Adjuvant radiotherapy, used in the presence of local lymph node disease, has not been shown to enhance survival. Chemotherapy is not useful in advanced or metastatic GB cancer. Carcinoma of the ampulla of Vater arises within 2 cm of the distal end of the common bile duct and is mainly (90%) an adenocarcinoma. Locoregional lymph nodes are commonly involved (50%), and the liver is the most frequent site for metastases. The most common clinical presentation is jaundice, and many patients also have pruritus, weight loss, and epigastric pain. Initial evaluation is performed with an abdominal ultrasound to assess vascular involvement, biliary dilation, and liver lesions. This is followed by a CT scan or MRI, and especially MRCP. The most effective therapy is resection by pylorus-sparing pancreaticoduodenectomy, an aggressive procedure resulting in better survival rates than local resection. Survival rates are ∼25% at 5 years in operable patients with involved lymph nodes and ∼50% in patients without involved nodes. Unlike CCC, approximately 80% of these tumors are thought to be resectable at diagnosis. Adjuvant chemotherapy or radiotherapy has not been shown to enhance survival. For metastatic tumors, chemotherapy is currently experimental. Metastatic tumors of the liver are predominantly from colon, pancreas, and breast primary tumors but can originate from any organ. Ocular melanomas are prone to liver metastasis. Tumor spread to the liver normally carries a poor prognosis for that tumor type. Colorectal and breast hepatic metastases were previously treated with continuous hepatic arterial infusion chemotherapy. However, more effective systemic drugs for each of these two cancers, especially the addition of oxaliplatin to colorectal cancer regimens, have reduced the use of hepatic artery infusion therapy.
In a large randomized study of systemic versus infusional plus systemic chemotherapy for resected colorectal metastases to the liver, the patients receiving infusional therapy had no survival advantage, mainly due to extrahepatic tumor spread. 90Yttrium resin beads are approved in the United States for treatment of colorectal hepatic metastases. The role of this modality, either alone or in combination with chemotherapy, is being evaluated in many centers. Palliation may be obtained from chemoembolization, PEI, or RFA. Three common benign liver tumors occur, and all are found predominantly in women: hemangiomas, adenomas, and focal nodular hyperplasia (FNH). FNH is typically benign, and usually no treatment is needed. Hemangiomas are the most common and are entirely benign; treatment is unnecessary unless their expansion causes symptoms. Adenomas are associated with contraceptive hormone use. They can cause pain and can bleed or rupture, causing acute problems. Their main interest for the physician is a low potential for malignant change and a 30% risk of bleeding. For this reason, considerable effort has gone into differentiating these three entities radiologically. On discovery of a liver mass, patients are usually advised to stop taking sex steroids, because adenoma regression may then occasionally occur. Adenomas can often be large masses, ranging from 8 to 15 cm. Due to their size and their definite, but low, malignant potential and potential for bleeding, adenomas are typically resected. The most useful diagnostic differentiating tool is a triphasic CT scan performed with an HCC fast bolus protocol for arterial-phase imaging, together with subsequent delayed venous-phase imaging. Adenomas usually do not appear on the basis of cirrhosis, although both adenomas and HCCs are intensely vascular on the CT arterial phase and both can exhibit hemorrhage (40% of adenomas).
However, adenomas have smooth, well-defined edges and enhance homogeneously, especially in the portal venous phase on delayed images, when HCCs no longer enhance. FNHs exhibit a characteristic central scar that is hypovascular on the arterial-phase and hypervascular on the delayed-phase CT images. MRI is even more sensitive in depicting the characteristic central scar of FNH.
Pancreatic Cancer
Elizabeth Smyth, David Cunningham
Pancreatic cancer is the fourth leading cause of cancer death in the United States and is associated with a poor prognosis. Endocrine tumors affecting the pancreas are discussed in Chap. 113. Infiltrating ductal adenocarcinomas, the subject of this chapter, account for the vast majority of cases and arise most frequently in the head of the pancreas. At the time of diagnosis, 85–90% of patients have inoperable or metastatic disease, which is reflected in the 5-year survival rate of only 6% for all stages combined. An improved 5-year survival of up to 24% may be achieved when the tumor is detected at an early stage and when complete surgical resection is accomplished. Pancreatic cancer represents 3% of all newly diagnosed malignancies in the United States. The most common age group at diagnosis is 65–84 years for both sexes. Pancreatic cancer was estimated to have been diagnosed in approximately 45,220 patients and to have accounted for approximately 38,460 deaths in 2013. Although survival rates have almost doubled over the past 35 years for this disease, overall survival remains low. An estimated 278,684 cases of pancreatic cancer occur annually worldwide (the thirteenth most common cancer globally), with up to 60% of these cases diagnosed in more developed countries. It remains the eighth most common cause of cancer death in men and the ninth most common in women. The incidence is highest in the United States and western Europe and lowest in parts of Africa and South Central Asia.
However, increasing rates of obesity, diabetes, and tobacco use, together with improved access to diagnostic radiology in the developing world, are likely to increase incidence rates in these countries. In this situation, consideration of the cost implications of adopting current treatment paradigms in resource-constrained environments will be necessary. Primary prevention, such as limiting tobacco use and avoiding obesity, may be more cost effective than improvements in the treatment of preexisting disease. Cigarette smoking may be the cause of up to 20–25% of all pancreatic cancers and is the most common environmental risk factor for this disease. A long-standing history of type 1 or type 2 diabetes also appears to be a risk factor; however, diabetes may also occur in association with pancreatic cancer, possibly confounding this interpretation. Other risk factors may include obesity, chronic pancreatitis, and ABO blood group status. Alcohol does not appear to be a risk factor unless excess consumption gives rise to chronic pancreatitis. Pancreatic cancer is associated with a number of well-defined molecular hallmarks. The four genes most commonly mutated or inactivated in pancreatic cancer are KRAS (predominantly codon 12, in 60–75% of pancreatic cancers), the tumor-suppressor genes p16 (deleted in 95% of tumors), p53 (inactivated or mutated in 50–70% of tumors), and SMAD4 (deleted in 55% of tumors). The pancreatic cancer precursor lesion pancreatic intraepithelial neoplasia (PanIN) acquires these genetic abnormalities in a progressive manner associated with increasing dysplasia; initial KRAS mutations are followed by p16 loss and finally by p53 and SMAD4 alterations. SMAD4 gene inactivation is associated with a pattern of widespread metastatic disease in advanced-stage patients and poorer survival in patients with surgically resected pancreatic adenocarcinoma. Up to 16% of pancreatic cancers may be inherited.
Germline mutations in the following genes are associated with a significantly increased risk of pancreatic cancer and other cancers: (1) STK11 gene (Peutz-Jeghers syndrome), which carries a 132-fold increased lifetime risk of pancreatic cancer above the general population; (2) BRCA2 (increased risk of breast, ovarian, and pancreatic cancer); (3) p16/CDKN2A (familial atypical multiple mole melanoma), which carries an increased risk of melanoma and pancreatic cancer; (4) PALB2, which confers an increased risk of breast and pancreatic cancer; (5) hMLH1 and MSH2 (Lynch syndrome), which carries an increased risk of colon and pancreatic cancer; and (6) ATM (ataxia-telangiectasia), which carries an increased risk of breast cancer, lymphoma, and pancreatic cancer. Familial pancreatitis and an increased risk of pancreatic cancer are associated with mutations of the PRSS1 (serine protease 1) gene. However, for most familial pancreatic syndromes, the underlying genetic cause remains unexplained. The absolute number of affected first-degree relatives is also correlated with increased cancer risk, and patients with at least two first-degree relatives with pancreatic cancer should be considered to have familial pancreatic cancer until proven otherwise. The desmoplastic stroma surrounding pancreatic adenocarcinoma functions as a mechanical barrier to chemotherapy and secretes compounds essential for tumor progression and metastasis. Key mediators of these functions include the activated pancreatic stellate cell and the glycoprotein SPARC (secreted protein acidic and rich in cysteine), which is expressed in 80% of pancreatic ductal adenocarcinomas. Targeting this extracellular environment has become increasingly important in the treatment of advanced disease. 
Screening is not routinely recommended because the incidence of pancreatic cancer in the general population is low (lifetime risk 1.3%), putative tumor markers such as carbohydrate antigen 19-9 (CA19-9) and carcinoembryonic antigen (CEA) have insufficient sensitivity, and computed tomography (CT) has inadequate resolution to detect pancreatic dysplasia. Endoscopic ultrasound (EUS) is a more promising screening tool, and preclinical efforts are focused on identifying biomarkers that may detect pancreatic cancer at an early stage. Consensus practice recommendations based largely on expert opinion have chosen a threshold of greater than fivefold increased risk for developing pancreatic cancer to select individuals who may benefit from screening. This includes people with two or more first-degree relatives with pancreatic cancer, patients with Peutz-Jeghers syndrome, and BRCA2, p16, and hereditary nonpolyposis colorectal cancer (HNPCC) mutation carriers with one or more affected first-degree relatives. PanIN represents a spectrum of small (<5 mm) neoplastic but noninvasive precursor lesions of the pancreatic ductal epithelium demonstrating mild, moderate, or severe dysplasia (PanIN 1–3, respectively); however, not all PanIN lesions will progress to frank invasive malignancy. Cystic pancreatic tumors such as intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs) are increasingly detected radiologically and are frequently asymptomatic. Main duct IPMNs are more likely to occur in older persons and have higher malignant potential than branched duct IPMNs (invasive cancer in 45% vs 18% of resected lesions, respectively). In contrast, MCNs are solitary lesions of the distal pancreas that do not communicate with the duct system. MCNs occur almost exclusively in women (95%). The rate of invasive cancer in resected MCNs is lower (<18%), with increased rates associated with larger tumors or the presence of nodules.
CLINICAL FEATURES Clinical Presentation Obstructive jaundice occurs frequently when the cancer is located in the head of the pancreas. This may be accompanied by symptoms of abdominal discomfort, pruritus, lethargy, and weight loss. Less common presenting features include epigastric pain, backache, new-onset diabetes mellitus, and acute pancreatitis caused by pressure effects on the pancreatic duct. Nausea and vomiting, resulting from gastroduodenal obstruction, may also be a symptom of this disease. Physical Signs Patients can present with jaundice and cachexia, and scratch marks may be present. Of patients with operable tumors, 25% have a palpable gallbladder (Courvoisier’s sign). Physical signs related to the development of distant metastases include hepatomegaly, ascites, left supraclavicular lymphadenopathy (Virchow’s node), and periumbilical nodules (Sister Mary Joseph’s nodes). DIAGNOSIS Diagnostic Imaging Patients who present with clinical features suggestive of pancreatic cancer undergo imaging to confirm the presence of a tumor and to establish whether the mass is likely to be inflammatory or malignant in nature. Other imaging objectives include the local and distant staging of the tumor, which will determine resectability and provide prognostic information. Dual-phase, contrast-enhanced spiral CT is the imaging modality of choice (Fig. 112-1). It provides accurate visualization of surrounding viscera, vessels, and lymph nodes, thus determining tumor resectability. Intestinal infiltration and liver and lung metastases are also reliably depicted on CT. There is no advantage of magnetic resonance imaging (MRI) over CT in predicting tumor resectability, but selected cases may benefit from MRI to characterize the nature of small indeterminate liver lesions and to evaluate the cause of biliary dilatation when no obvious mass is seen on CT. 
Endoscopic retrograde cholangiopancreatography (ERCP) is useful for revealing small pancreatic lesions, identifying stricture or obstruction in the pancreatic or common bile ducts, and facilitating stent placement; however, it is associated with a risk of pancreatitis (Fig. 112-2). Magnetic resonance cholangiopancreatography (MRCP) is a noninvasive method for accurately depicting the level and degree of bile and pancreatic duct dilatation. EUS is highly sensitive in detecting lesions less than 3 cm in size (more sensitive than CT for lesions <2 cm) and is useful as a local staging tool for assessing vascular invasion and lymph node involvement. Fluorodeoxyglucose positron emission tomography (FDG-PET) should be considered before surgery or radical chemoradiotherapy (CRT) because it is superior to conventional imaging in detecting distant metastases.

Tissue Diagnosis and Cytology
Preoperative confirmation of malignancy is not always necessary in patients with radiologic appearances consistent with operable pancreatic cancer. However, EUS-guided fine-needle aspiration is the technique of choice when there is any doubt and also for use in patients who require neoadjuvant treatment. It has an accuracy of approximately 90% and carries a smaller risk of intraperitoneal dissemination than the percutaneous route. Percutaneous biopsy of the pancreatic primary or liver metastases is acceptable only in patients with inoperable or metastatic disease. ERCP is a useful method for obtaining ductal brushings, but the sensitivity of ERCP for diagnosis ranges from 35 to 70%.

FIGURE 112-1 Coronal computed tomography showing pancreatic cancer and dilated intrahepatic and pancreatic ducts (arrows).

FIGURE 112-2 Endoscopic retrograde cholangiopancreatography image showing contrast in dilated pancreatic duct (arrows).
Serum Markers
Tumor-associated CA19-9 is elevated in approximately 70–80% of patients with pancreatic carcinoma but is not recommended as a routine diagnostic or screening test because its sensitivity and specificity are inadequate for accurate diagnosis. Preoperative CA19-9 levels correlate with tumor stage, and the postresection CA19-9 level has prognostic value. It is an indicator of asymptomatic recurrence in patients with completely resected tumors and is used as a biomarker of response in patients with advanced disease undergoing chemotherapy. A number of studies have established a high pretreatment CA19-9 level as an independent prognostic factor.

The American Joint Committee on Cancer (AJCC) tumor-node-metastasis (TNM) staging of pancreatic cancer takes into account the location and size of the tumor, the involvement of lymph nodes, and distant metastasis. This information is then combined to assign a stage (Fig. 112-3). From a practical standpoint, patients are grouped according to whether the cancer is resectable, locally advanced (unresectable, but without distant spread), or metastatic. Approximately 10% of patients present with localized nonmetastatic disease that is potentially suitable for surgical resection. Approximately 30% of patients have an R1 resection (microscopic residual disease) following surgery. Those who undergo R0 resection (no microscopic or macroscopic residual tumor) and who receive adjuvant treatment have the best chance of cure, with an estimated median survival of 20–23 months and a 5-year survival of approximately 20%. Outcomes are more favorable in patients with small (<3 cm), well-differentiated tumors and lymph node–negative disease. Patients should have surgery in dedicated pancreatic centers, which have lower postoperative morbidity and mortality rates. The standard surgical procedure for patients with tumors of the pancreatic head or uncinate process is a pylorus-preserving pancreaticoduodenectomy (modified Whipple's procedure).
The procedure of choice for tumors of the pancreatic body and tail is a distal pancreatectomy, which routinely includes splenectomy. Postoperative treatment improves long-term outcomes in this group of patients. Adjuvant chemotherapy comprising six cycles of gemcitabine is common practice worldwide based on data from three randomized controlled trials (Table 112-1).

TABLE 112-1
Study | Comparator Arm | No. of Patients | PFS/DFS (months) | Median Survival (months)
ESPAC-1, Neoptolemos et al: N Engl J Med 350:1200, 2004 | Chemotherapy (folinic acid + bolus 5-FU) vs no chemotherapy | 289 | PFS 15.3 vs 9.4 (p = .02) | 20.1 vs 15.5 (HR 0.71; 95% CI 0.55–0.92; p = .009)
CONKO 001, Oettle et al: JAMA 297:267, 2007 | Gemcitabine vs observation | 368 | Median DFS 13.4 vs 6.9 (p <.001) | 22.1 vs 20.2 (p = .06)
ESPAC-3, Neoptolemos et al: JAMA 304:1073, 2010 | 5-FU/LV vs gemcitabine | 1088 | — | 23 vs 23.6 (HR 0.94; 95% CI 0.81–1.08; p = .39)
Abbreviations: CI, confidence interval; CONKO, Charité Onkologie; DFS, disease-free survival; ESPAC, European Study Group for Pancreatic Cancer; 5-FU, 5-fluorouracil; HR, hazard ratio; LV, leucovorin; PFS, progression-free survival.

FIGURE 112-3 Staging of pancreatic cancer, and survival according to stage. AJCC, American Joint Committee on Cancer. (Illustration by Stephen Millward.)

The Charité Onkologie trial (CONKO 001) found that the use of gemcitabine after complete resection significantly delayed the development of recurrent disease compared with surgery alone. The European Study Group for Pancreatic Cancer 3 (ESPAC-3) trial, which investigated the benefit of adjuvant 5-fluorouracil/folinic acid (5-FU/FA) versus gemcitabine, revealed no survival difference between the two drugs. However, the toxicity profile of adjuvant gemcitabine was superior to that of 5-FU/FA by virtue of a lower incidence of stomatitis and diarrhea. Adjuvant radiotherapy is not commonly used in Europe based on the negative results of the ESPAC-1 study. Adjuvant 5-FU–based CRT with gemcitabine before and after radiotherapy, as used in the Radiation Therapy Oncology Group (RTOG) 97-04 trial, is preferred in the United States. This approach may be most beneficial in patients with bulky tumors involving the pancreatic head.

Approximately 30% of patients present with locally advanced, unresectable, but nonmetastatic pancreatic carcinoma. The median survival with gemcitabine is 9 months. Patients who respond to chemotherapy or who achieve stable disease after 3–6 months of gemcitabine have frequently been offered consolidation radiotherapy. However, a large, phase III, randomized controlled trial, LAP-07, did not demonstrate any improvement in survival for patients treated with CRT after 4 months of disease control on either gemcitabine or a gemcitabine/erlotinib combination.

Approximately 60% of patients with pancreatic cancer present with metastatic disease. Patients with poor performance status do not usually benefit from chemotherapy. Gemcitabine was the standard treatment, with a median survival of 6 months and a 1-year survival rate of only 20%. The addition of nab-paclitaxel (an albumin-bound nanoparticle formulation of paclitaxel) to gemcitabine results in significantly improved 1-year survival compared to gemcitabine alone (35% vs 22%, p <.001). Capecitabine, an oral fluoropyrimidine, has also been combined with gemcitabine (GEM-CAP) in a phase III trial that showed an improvement in response rate and progression-free survival over single-agent gemcitabine, but no overall survival benefit. However, pooling of two other randomized controlled trials with this trial in a meta-analysis resulted in a survival advantage with GEM-CAP. Addition of erlotinib, a small-molecule epidermal growth factor receptor inhibitor, produced a statistically significant but clinically marginal benefit when added to gemcitabine in the advanced disease setting. A phase III trial limited to good performance status patients with metastatic pancreatic cancer showed improved survival with the combination of 5-FU/FA, irinotecan, and oxaliplatin (FOLFIRINOX) compared with gemcitabine, but with increased toxicity (Table 112-2).

TABLE 112-2
Study | Comparator Arm | No. of Patients | PFS (months) | Median Survival (months)
Moore et al: J Clin Oncol 26:1960, 2007 | Gemcitabine vs gemcitabine + erlotinib | 569 | 3.55 vs 3.75 (HR 0.77; 95% CI 0.64–0.92; p = .004) | 5.91 vs 6.24 (HR 0.82; 95% CI 0.69–0.99; p = .038)
Cunningham et al: J Clin Oncol 27:5513, 2009 | Gemcitabine vs gemcitabine + capecitabine (GEM-CAP) | 533 | 3.8 vs 5.3 (HR 0.78; 95% CI 0.66–0.93; p = .004) | 6.2 vs 7.1 (HR 0.86; 95% CI 0.72–1.02; p = .08)
Von Hoff et al: N Engl J Med 369:1691, 2013 | Gemcitabine vs gemcitabine + nab-paclitaxel | 861 | 3.7 vs 5.5 (HR 0.69; 95% CI 0.58–0.82; p <.001) | 6.7 vs 8.5 (HR 0.72; 95% CI 0.62–0.83; p <.001)
Conroy et al: N Engl J Med 364:1817, 2011 | Gemcitabine vs FOLFIRINOX | 342 | 3.3 vs 6.4 (HR 0.47; 95% CI 0.37–0.59; p <.001) | 6.8 vs 11.1 (HR 0.57; 95% CI 0.45–0.73; p <.001)

The early detection and future treatment of pancreatic cancer rely on an improved understanding of the molecular pathways involved in the development of this disease. This will ultimately lead to the discovery of novel agents and the identification of patient groups who are likely to benefit most from targeted therapy. Dr. Irene Chong is acknowledged for her work on this chapter in the 18th edition.

Chapter 113 Endocrine Tumors of the Gastrointestinal Tract and Pancreas
Robert T. Jensen

GENERAL FEATURES OF GASTROINTESTINAL NEUROENDOCRINE TUMORS

Gastrointestinal (GI) neuroendocrine tumors (NETs) are tumors derived from the diffuse neuroendocrine system of the GI tract; that system is composed of amine- and acid-producing cells with different hormonal profiles, depending on the site of origin. The tumors historically are divided into GI-NETs (in the GI tract) (also frequently called carcinoid tumors) and pancreatic neuroendocrine tumors (pNETs), although newer pathologic classifications have proposed that they all be classified as GI-NETs. The term GI-NET has been proposed to replace the term carcinoid; however, the term carcinoid is widely used, and many are not familiar with this change. Accordingly, this chapter will use the term GI-NETs (carcinoids).

These tumors originally were classified as APUDomas (for amine precursor uptake and decarboxylation), as were pheochromocytomas, melanomas, and medullary thyroid carcinomas, because they share certain cytochemical features as well as various pathologic, biologic, and molecular features (Table 113-1). It was originally proposed that APUDomas had a similar embryonic origin from neural crest cells, but it is now known that the peptide-secreting cells are not of neuroectodermal origin. Nevertheless, the concept of APUDomas is useful because these tumors have important similarities as well as some differences (Table 113-1). In this section, the areas of similarity between pNETs and GI-NETs (carcinoids) will be discussed together, and areas in which there are important differences will be discussed separately.

NETs generally are composed of monotonous sheets of small round cells with uniform nuclei, and mitoses are uncommon. They can be identified tentatively on routine histology; however, these tumors are now recognized principally by their histologic staining patterns due to shared cellular proteins.

TABLE 113-1 General Characteristics of Gastrointestinal Neuroendocrine Tumors (GI-NETs [Carcinoids], Pancreatic Neuroendocrine Tumors [pNETs])
A. Share general neuroendocrine cell markers (identification used for diagnosis)
1. Chromogranins (A, B, C) are acidic monomeric soluble proteins found in the large secretory granules. Chromogranin A is the most widely used.
2. Neuron-specific enolase (NSE) is the γ-γ dimer of the enzyme enolase and is a cytosolic marker of neuroendocrine differentiation.
3. Synaptophysin is an integral membrane glycoprotein of 38,000 molecular weight found in small vesicles of neurons and neuroendocrine tumors.
B. Pathologic similarities
1. All are APUDomas showing amine precursor uptake and decarboxylation.
2. Ultrastructurally, they have dense-core secretory granules (>80 nm).
3. Histologically, they generally appear similar with few mitoses and uniform nuclei.
4. Frequently synthesize multiple peptides/amines, which can be detected immunocytochemically but may not be secreted.
5. Presence or absence of clinical syndrome or type cannot be predicted by immunocytochemical studies.
6. Histologic classifications (grading, TNM classification) have prognostic significance. Only invasion or metastases establish malignancy.
C. Similarities of biologic behavior
1. Generally slow growing, but some are aggressive.
2. Most are well-differentiated tumors having low proliferative indices.
3. Secrete biologically active peptides/amines, which can cause clinical symptoms.
4. Generally have high densities of somatostatin receptors, which are used for both localization and treatment.
5. Most (>70%) secrete chromogranin A, which is frequently used as a tumor marker.
D. Similarities/differences in molecular abnormalities
1. Similarities
a. Uncommon—mutations in common oncogenes (ras, jun, fos, etc.).
b. Uncommon—mutations in common tumor-suppressor genes (p53, retinoblastoma).
c. Alterations at MEN 1 locus (11q13) (frequently foregut, less commonly mid/hindgut NETs) and p16INK4a (9p21) occur in a proportion (10–45%).
d. Methylation of various genes occurs in 40–87% (ras-associated domain family I, p14, p16, O6-methylguanine methyltransferase, retinoic acid receptor β).
2. Differences
a. pNETs—loss of 1p (21%), 3p (8–47%), 3q (8–41%), 11q (21–62%), 6q (18–68%), Y (45%); gains at 17q (10–55%), 7q (16–68%), 4q (33%), 18 (up to 45%).
b. GI-NETs (carcinoids)—loss of 18q (38–88%) > 18p (33–43%) > 9p, 16q21 (21–23%); gains at 17q, 19p (57%), 4q (33%), 14q (20%), 5 (up to 36%).
c. pNETs: ATRX/DAXX mutations in 43%, MEN 1 mutations in 44%, mTOR mutations (14%); uncommon in midgut GI-NETs (0–2%).
Abbreviations: ATRX, alpha-thalassemia X-linked mental retardation protein; DAXX, death domain–associated protein; MEN 1, multiple endocrine neoplasia type 1; TNM, tumor, node, metastasis.

Historically, silver staining was used, and tumors were classified as showing an argentaffin reaction if they took up and reduced silver or as being argyrophilic if they did not reduce it. Currently, immunocytochemical localization of chromogranins (A, B, C), neuron-specific enolase, and synaptophysin, which are all neuroendocrine cell markers, is used (Table 113-1). Chromogranin A is the most widely used. Ultrastructurally, these tumors possess electron-dense neurosecretory granules and frequently contain small clear vesicles that correspond to the synaptic vesicles of neurons. NETs synthesize numerous peptides, growth factors, and bioactive amines that may be ectopically secreted, giving rise to a specific clinical syndrome (Table 113-2). The diagnosis of the specific syndrome requires the clinical features of the disease (Table 113-2) and cannot be made from the immunocytochemistry results alone. The presence or absence of a specific clinical syndrome also cannot be predicted from the immunocytochemistry alone (Table 113-1). Furthermore, pathologists cannot distinguish between benign and malignant NETs unless metastasis or invasion is present.

GI-NETs (carcinoids) frequently are classified according to their anatomic area of origin (i.e., foregut, midgut, hindgut) because tumors with similar areas of origin share functional manifestations, histochemistry, and secretory products (Table 113-3).
Foregut tumors generally have a low serotonin (5-HT) content; are argentaffin-negative but argyrophilic; occasionally secrete adrenocorticotropic hormone (ACTH) or 5-hydroxytryptophan (5-HTP), causing an atypical carcinoid syndrome (Fig. 113-1); are often multihormonal; and may metastasize to bone. They uncommonly produce a clinical syndrome due to the secreted products. Midgut carcinoids are argentaffin-positive, have a high serotonin content, most frequently cause the typical carcinoid syndrome when they metastasize (Table 113-3, Fig. 113-1), release serotonin and tachykinins (substance P, neuropeptide K, substance K), rarely secrete 5-HTP or ACTH, and less commonly metastasize to bone. Hindgut carcinoids (rectum, transverse and descending colon) are argentaffin-negative, are often argyrophilic, rarely contain serotonin or cause the carcinoid syndrome (Fig. 113-1, Table 113-3), rarely secrete 5-HTP or ACTH, contain numerous peptides, and may metastasize to bone. pNETs can be classified into nine well-established specific functional syndromes (Table 113-2), six additional very rare specific functional syndromes (less than five cases described), five possible specific functional syndromes (pNETs secreting calcitonin, neurotensin, pancreatic polypeptide, ghrelin) (Table 113-2), and nonfunctional pNETs. Other functional hormonal syndromes due to nonpancreatic tumors (usually intraabdominal in location) have been described only rarely and are not included in Table 113-2. These include secretion by intestinal and ovarian tumors of peptide tyrosine tyrosine (PYY), which results in altered motility and constipation, and ovarian tumors secreting renin or aldosterone (causing alterations in blood pressure) or somatostatin (causing diabetes or reactive hypoglycemia). Each of the functional syndromes listed in Table 113-2 is associated with symptoms due to the specific hormone released.
In contrast, nonfunctional pNETs release no products that cause a specific clinical syndrome. "Nonfunctional" is a misnomer in the strict sense because those tumors frequently ectopically secrete a number of peptides (pancreatic polypeptide [PP], chromogranin A, ghrelin, neurotensin, α subunits of human chorionic gonadotropin, and neuron-specific enolase); however, they cause no specific clinical syndrome. The symptoms caused by nonfunctional pNETs are entirely due to the tumor per se. pNETs frequently ectopically secrete PP (60–85%), neurotensin (30–67%), calcitonin (30–42%), and, to a lesser degree, ghrelin (5–65%). Whereas a few studies have proposed that their secretion can cause a specific functional syndrome, most studies support the conclusion that their ectopic secretion is not associated with a specific clinical syndrome, and thus they are listed in Table 113-2 as possible clinical syndromes.

Abbreviations for Table 113-2: ACTH, adrenocorticotropic hormone; GRFoma, growth hormone–releasing factor secreting pancreatic endocrine tumor; IGF-II, insulin-like growth factor II; MEN, multiple endocrine neoplasia; pNET, pancreatic neuroendocrine tumor; PPoma, tumor secreting pancreatic polypeptide; PTHrP, parathyroid hormone–related peptide; VIPoma, tumor secreting vasoactive intestinal peptide; WDHA, watery diarrhea, hypokalemia, and achlorhydria syndrome. aPancreatic polypeptide–secreting tumors (PPomas) are listed in two places because most authorities classify these as not associated with a specific hormonal syndrome (nonfunctional); however, rare cases of watery diarrhea proposed to be due to PPomas have been reported.

Because a large proportion of nonfunctional pNETs (60–90%) secrete PP, these tumors are often referred to as PPomas (Table 113-2).
GI-NETs (carcinoids) can occur in almost any GI tissue (Table 113-3); however, at present, most (70%) have their origin in one of three sites: bronchus, jejunoileum, or colon/rectum. In the past, GI-NETs (carcinoids) most frequently were reported in the appendix (i.e., 40%); however, the bronchus/lung, rectum, and small intestine are now the most common sites. Overall, the GI tract is the most common site for these tumors, accounting for 64%, with the respiratory tract a distant second at 28%. Both race and sex can affect the frequency as well as the distribution of GI-NETs (carcinoids). African Americans have a higher incidence of carcinoids. Race is particularly important for rectal carcinoids, which are found in 41% of Asians/Pacific Islanders with NETs compared to 32% of American Indians/Alaskan Natives, 26% of African Americans, and 12% of white Americans. Females have a lower incidence of small intestinal and pancreatic carcinoids. The term pancreatic neuroendocrine or endocrine tumor, although widely used and therefore retained here, is also a misnomer, strictly speaking, because these tumors can occur either almost entirely in the pancreas (insulinomas, glucagonomas, nonfunctional pNETs, pNETs causing hypercalcemia) or at both pancreatic and extrapancreatic sites (gastrinomas, VIPomas [vasoactive intestinal peptide], somatostatinomas, GRFomas [growth hormone–releasing factor]). pNETs are also called islet cell tumors; however, the use of this term is discouraged because it is not established that they originate from the islets, and many can occur at extrapancreatic sites. Although the classification of GI neuroendocrine tumors into foregut, midgut, or hindgut is widely used and generally useful because the NETs within these areas share many similarities, the tumors also show marked differences, particularly in biologic behavior, and the classification has not proved useful for prognostic purposes.
More general classifications have been developed that allow NETs with similar features in different locations to be compared, have proven prognostic value, and are widely used. New classification systems have been developed for both GI-NETs (carcinoids) and pNETs by the World Health Organization (WHO), European Neuroendocrine Tumor Society (ENETS), and the American Joint Committee on Cancer/International Union Against Cancer (AJCC/UICC). Although there are some differences between these different classification systems, each uses similar information, and it is now recommended that the basic data underlying the classification be included in all standard pathology reports. These classification systems divide NETs from all sites into those that are well differentiated (low grade [G1] or intermediate grade [G2]) and those that are poorly differentiated (high grade [G3] divided into either small-cell carcinoma or large-cell neuroendocrine carcinoma). In these classification systems, both pNETs and GI-NETs (carcinoids) are classified as neuroendocrine tumors, and the old term of carcinoid is equivalent to well-differentiated neuroendocrine tumors of the GI tract. These classification systems are based on not only the differentiation of the NET, but also a grading system assessing proliferative indices (Ki-67 and the mitotic count). NETs are considered low grade (ENETS G1) if the Ki-67 is <3% and the mitotic count is <2 mitoses/high-power field (HPF), intermediate grade (ENETS G2) if the Ki-67 is 3–20% and the mitotic count is 2–20 mitoses/HPF, and high grade (ENETS G3) if the Ki-67 is >20% and the mitotic count is >20 mitoses/HPF. In addition to the grading system, a TNM classification has been proposed that is based on the level of tumor invasion, tumor size, and tumor extent (see Table 113-4 for an example with pNETs and appendiceal GI-NETs [carcinoids]). 
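The ENETS cutoffs above amount to a small decision rule. The sketch below assumes the Ki-67 and mitotic-count thresholds as stated in the text, and additionally assumes (a common convention, though not spelled out here) that when the two markers fall into different grade bands, the higher grade is assigned. The function name is hypothetical.

```python
# Minimal sketch of the ENETS grading rule described above.
# Thresholds from the text: G1 Ki-67 <3% and <2 mitoses/HPF;
# G2 Ki-67 3-20% and 2-20 mitoses/HPF; G3 Ki-67 >20% and >20 mitoses/HPF.
# Assumption (not stated in the text): when the two markers disagree,
# the higher of the two implied grades is assigned.
def enets_grade(ki67_percent: float, mitoses_per_hpf: float) -> str:
    def grade_from_ki67(k: float) -> int:
        if k < 3:
            return 1
        if k <= 20:
            return 2
        return 3

    def grade_from_mitoses(m: float) -> int:
        if m < 2:
            return 1
        if m <= 20:
            return 2
        return 3

    grade = max(grade_from_ki67(ki67_percent),
                grade_from_mitoses(mitoses_per_hpf))
    return f"G{grade}"
```

For example, a tumor with Ki-67 of 10% and 5 mitoses/HPF falls in the intermediate band by both markers and would be labeled G2 under these assumptions.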
Because of the proven prognostic value of these classification and grading systems, as well as the fact that NETs with different classifications/grades respond differently to treatments, the systems are now essential for the management of all NETs. In addition to these classification/grading systems, a number of other factors have been identified that provide important prognostic information that can guide treatment (Table 113-5).

The exact incidence of GI-NETs (carcinoids) or pNETs varies according to whether only symptomatic tumors or all tumors are considered. The incidence of clinically significant carcinoids is 7–13 cases/million population per year, whereas any malignant carcinoids at autopsy are reported in 21–84 cases/million population per year. The incidence of GI-NETs (carcinoids) is approximately 25–50 cases per million in the United States, which makes them less common than adenocarcinomas of the GI tract. However, their incidence has increased sixfold in the last 30 years. In an analysis of 35,825 GI-NETs (carcinoids) (2004) from the U.S. Surveillance, Epidemiology, and End Results (SEER) database, their incidence was 5.25/100,000 per year, and the 29-year prevalence was 35/100,000. Clinically significant pNETs have a prevalence of 10 cases/million population, with insulinomas, gastrinomas, and nonfunctional pNETs having an incidence of 0.5–2 cases/million population per year (Table 113-2). pNETs account for 1–10% of all tumors arising in the pancreas and 1.3% of tumors in the SEER database, which consists primarily of malignant tumors. VIPomas are 2–8 times less common, glucagonomas are 17–30 times less common, and somatostatinomas are the least common. In autopsy studies, 0.5–1.5% of all cases have a pNET; however, in less than 1 in 1000 cases was a functional tumor thought to occur. Both GI-NETs (carcinoids) and pNETs commonly show malignant behavior (Tables 113-2 and 113-3).

TABLE 113-3 GI-NET (Carcinoid) Location, Frequency of Metastases, and Association with the Carcinoid Syndrome
Location | Location (% of Total) | Incidence of Metastases (%) | Incidence of Carcinoid Syndrome (%)
Esophagus | <0.1 | — | —
Stomach | 4.6 | 10 | 9.5
Duodenum | 2.0 | — | 3.4
Pancreas | 0.7 | 71.9 | 20
Gallbladder | 0.3 | 17.8 | 5
Bronchus, lung, trachea | 27.9 | 5.7 | 13
Jejunum | 1.8 | 58.4 (jejunoileum combined) | 9
Ileum | 14.9 | 58.4 (jejunoileum combined) | 9
Meckel's diverticulum | 0.5 | — | 13
Appendix | 4.8 | 38.8 | <1
Colon | 8.6 | 51 | 5
Liver | 0.4 | 32.2 | —
Ovary | 1.0 | 32 | 50
Testis | <0.1 | — | 50
Rectum | 13.6 | 3.9 | —
Abbreviation: GI-NET, gastrointestinal neuroendocrine tumor.
Source: Location is from the PAN-SEER data (1973–1999), and incidence of metastases is from the SEER data (1992–1999), reported by IM Modlin et al: Cancer 97:934, 2003. Incidence of carcinoid syndrome is from 4349 cases studied from 1950–1971, reported by JD Godwin: Cancer 36:560, 1975.

FIGURE 113-1 Synthesis, secretion, and metabolism of serotonin (5-HT) in patients with typical and atypical carcinoid syndromes. 5-HIAA, 5-hydroxyindolacetic acid.

TABLE 113-5 Prognostic Factors in Neuroendocrine Tumors
I. Both GI-NETs (carcinoids) and pNETs
Symptomatic presentation (p <.05)
Presence of liver metastases (p <.001)
Extent of liver metastases (p <.001)
Presence of lymph node metastases (p <.001)
Development of bone or extrahepatic metastases (p <.01)
Depth of invasion (p <.001)
Rapid rate of tumor growth
Elevated serum alkaline phosphatase levels (p = .003)
Primary tumor site (p <.001)
Primary tumor size (p <.005)
High serum chromogranin A level (p <.01)
Presence of one or more circulating tumor cells (p <.001)
Various histologic features
  Tumor differentiation (p <.001)
  High growth indices (high Ki-67 index, PCNA expression)
  High mitotic counts (p <.001)
  Necrosis present
  Presence of cytokeratin 19 (p <.02)
  Vascular or perineural invasion
  Vessel density (low microvessel density, increased lymphatic density)
  High CD10 metalloproteinase expression (in series with all grades of NETs)
  Flow cytometric features (i.e., aneuploidy)
  High VEGF expression (in low-grade or well-differentiated NETs only)
WHO, ENETS, AJCC/UICC, and grading classification
Presence of a pNET rather than GI-NET associated with poorer prognosis (p = .0001)
Older age (p <.01)
II. GI-NETs (Carcinoids)
Location of primary: appendix < lung, rectum < small intestine < pancreas
Presence of carcinoid syndrome
Laboratory results (urinary 5-HIAA levels [p <.01], plasma neuropeptide K [p <.05], serum chromogranin A [p <.01])
Presence of a second malignancy
Male sex (p <.001)
Molecular findings (TGF-α expression [p <.05], chr 16q LOH or gain chr 4p [p <.05])
WHO, ENETS, AJCC/UICC, and grading classification
Molecular findings (gain in chr 14, loss of 3p13 [ileal carcinoid], upregulation …)
III. pNETs
Location of primary: duodenal (gastrinoma) better than pancreatic
Ha-ras oncogene or p53 overexpression
Female gender
MEN 1 syndrome absent
Presence of nonfunctional tumor (some studies, not all)
WHO, ENETS, AJCC/UICC, and grading classification
Various histologic features: IHC positivity for c-KIT, low cyclin B1 expression (p <.01), loss of PTEN or of tuberous sclerosis-2 IHC, expression of fibroblast growth factor-13
Molecular findings (increased HER2/neu expression [p = .032]; chr 1q, 3p, 3q, or 6q LOH [p = .0004]; EGF receptor overexpression [p = .034]; gains in chr 7q, 17q, 17p, 20q; alterations in the VHL gene [deletion, methylation]; presence of FGFR4-G388R single-nucleotide polymorphism)
Abbreviations: 5-HIAA, 5-hydroxyindoleacetic acid; AJCC, American Joint Committee on Cancer; chr, chromosome; EGF, epidermal growth factor; FGFR, fibroblast growth factor receptor; GI-NET, gastrointestinal neuroendocrine tumor; IHC, immunohistochemistry; Ki-67, proliferation-associated nuclear antigen recognized by Ki-67 monoclonal antibody; LOH, loss of heterozygosity; MEN, multiple endocrine neoplasia; NET, neuroendocrine tumors; PCNA, proliferating cell nuclear antigen; pNET, pancreatic neuroendocrine tumor; PTEN, phosphatase and tensin homologue deleted from chromosome 10; TGF-α, transforming growth factor α; TNM, tumor, node, metastasis; UICC, International Union Against Cancer; VEGF, vascular endothelial growth factor; WHO, World Health Organization.

TABLE 113-4 Comparison of the Criteria for the Tumor (T) Category in the ENETS and Seventh Edition AJCC TNM Classifications of Pancreatic and Appendiceal NETs
pNETs (AJCC):
T1 Confined to pancreas, <2 cm
T2 Confined to pancreas, >2 cm
T3 Peripancreatic spread, but without major vascular invasion (truncus coeliacus, superior mesenteric artery)
Appendiceal NETs:
T1 ≤1 cm; invasion of muscularis propria (ENETS); T1a ≤1 cm, T1b >1–2 cm (AJCC)
T2 >2–4 cm or invasion of subserosa/mesoappendix
T3 >2 cm or >3 mm invasion of subserosa/mesoappendix (ENETS); >4 cm or invasion of ileum (AJCC)
T4 Invasion of peritoneum/other organs
Abbreviations: AJCC, American Joint Committee on Cancer; ENETS, European Neuroendocrine Tumor Society; NET, neuroendocrine tumor; pNET, pancreatic neuroendocrine tumor; TNM, tumor, node, metastasis; UICC, International Union Against Cancer.
Source: Modified from DS Klimstra: Semin Oncol 40:23, 2013, and G Kloppel et al: Virchows Arch 456:595, 2010.
With pNETs, except for insulinomas in which <10% are malignant, 50–100% in different series are malignant. With GI-NETs (carcinoids), the percentage showing malignant behavior varies in different locations (Table 113-3). For the three most common sites of occurrence, the incidence of metastases varies greatly from the jejunoileum (58%), lung/bronchus (6%), and rectum (4%) (Table 113-3). With both GI-NETs (carcinoids) and pNETs, a number of factors (Table 113-5) are important prognostic factors in determining survival and the aggressiveness of the tumor. Patients with pNETs (excluding insulinomas) generally have a poorer prognosis than do patients with GI-NETs (carcinoids). The presence of liver metastases is the single most important prognostic factor in single and multivariate analyses for both GI-NETs (carcinoids) and pNETs. Particularly important in the development of liver metastases is the size of the primary tumor. For example, with small intestinal carcinoids, which are the most common cause of the carcinoid syndrome due to metastatic disease in the liver (Table 113-2), metastases occur in 15–25% if the tumor is <1 cm in diameter, 58–80% if it is 1–2 cm in diameter, and >75% if it is >2 cm in diameter. Similar data exist for gastrinomas and other pNETs; the size of the primary tumor is an independent predictor of the development of liver metastases. 
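The size-to-risk relationship quoted above for small intestinal carcinoids can be expressed as a simple lookup over the reported ranges. This is an illustrative banding of those figures only, not a validated predictive model; the function name is hypothetical.

```python
# Illustrative banding of the liver-metastasis frequencies quoted above for
# small intestinal carcinoids: 15-25% if <1 cm, 58-80% if 1-2 cm, >75% if
# >2 cm. Not a validated prediction model; for illustration only.
def liver_metastasis_risk_band(tumor_diameter_cm: float) -> str:
    """Return the reported metastasis-frequency range for a primary tumor size."""
    if tumor_diameter_cm < 1.0:
        return "15-25%"
    if tumor_diameter_cm <= 2.0:
        return "58-80%"
    return ">75%"
```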
The presence of lymph node metastases or extrahepatic metastases; the depth of invasion; the rapid rate of growth; various histologic features (differentiation, mitotic rates, growth indices, vessel density, vascular endothelial growth factor [VEGF], and CD10 metalloproteinase expression); necrosis; presence of cytokeratin; elevated serum alkaline phosphatase levels; older age; presence of circulating tumor cells; and flow cytometric results, such as the presence of aneuploidy, are all important prognostic factors for the development of metastatic disease (Table 113-5). For patients with GI-NETs (carcinoids), additional associations with a worse prognosis include the development of the carcinoid syndrome (especially the development of carcinoid heart disease), male sex, the presence of a symptomatic tumor or greater increases in a number of tumor markers (5-hydroxyindolacetic acid [5-HIAA], neuropeptide K, chromogranin A), and the presence of various molecular features. With pNETs or gastrinomas, a worse prognosis is associated with female sex, overexpression of the Ha-ras oncogene or p53, the absence of multiple endocrine neoplasia type 1 (MEN 1), higher levels of various tumor markers (i.e., chromogranin A, gastrin), and presence of various histologic features (immunohistochemistry for c-KIT, low cyclin B1, loss of PTEN/TSC-2, expression of fibroblast growth factor-13) and various molecular features (Table 113-5). The TNM classification systems and the grading systems (G1–G3) have important prognostic value. A number of diseases due to various genetic disorders are associated with an increased incidence of NETs (Table 113-6). Each one is caused by a loss of a possible tumor-suppressor gene. The most important is MEN 1, which is an autosomal dominant disorder due to a defect in a 10-exon gene on 11q13, which encodes for a 610-amino-acid nuclear protein, menin (Chap. 408).
Patients with MEN 1 develop hyperparathyroidism due to parathyroid hyperplasia in 95–100% of cases, pNETs in 80–100%, pituitary adenomas in 54–80%, adrenal adenomas in 27–36%, bronchial carcinoids in 8%, thymic carcinoids in 8%, gastric carcinoids in 13–30% of patients with Zollinger-Ellison syndrome, skin tumors (angiofibromas [88%], collagenomas [72%]), central nervous system (CNS) tumors (meningiomas [<8%]), and smooth-muscle tumors (leiomyomas, leiomyosarcomas [1–7%]). Among patients with MEN 1, 80–100% develop nonfunctional pNETs (most are microscopic, with 0–13% large/symptomatic), and functional pNETs occur in 20–80% in different series, with a mean of 54% developing Zollinger-Ellison syndrome, 18% insulinomas, 3% glucagonomas, 3% VIPomas, and <1% GRFomas or somatostatinomas. MEN 1 is present in 20–25% of all patients with Zollinger-Ellison syndrome, 4% of patients with insulinomas, and a low percentage (<5%) of patients with other pNETs. Three phacomatoses associated with NETs are von Hippel–Lindau disease (VHL), von Recklinghausen's disease (neurofibromatosis type 1 [NF-1]), and tuberous sclerosis (Bourneville's disease) (Table 113-6).
Table 113-6 (excerpt) Genetic Syndromes Associated with an Increased Incidence of NETs
Syndrome | Location of Gene Mutation and Gene Product | NETs Seen/Frequency
Multiple endocrine neoplasia type 1 (MEN 1) | 11q13 (encodes 610-amino-acid protein, menin) | 80–100% develop pNETs (microscopic), 20–80% (clinical): nonfunctional > gastrinoma > insulinoma; GI-NETs (carcinoids): gastric (13–30%), bronchial/thymic (8%)
von Recklinghausen's disease (neurofibromatosis 1 [NF-1]) | 17q11.2 (encodes 2845-amino-acid protein, neurofibromin) | 0–10% develop pNETs, primarily duodenal somatostatinomas (usually nonfunctional); rarely insulinoma, gastrinoma
Tuberous sclerosis | 9q34 (TSC1) (encodes 1164-amino-acid protein, hamartin), 16p13 (TSC2) (encodes 1807-amino-acid protein, tuberin) | Uncommonly develop pNETs (nonfunctional and functional [insulinoma, gastrinoma])
VHL is an autosomal dominant disorder due to defects in a gene on chromosome 3p25 that encodes a 213-amino-acid protein, which interacts with the elongin family of proteins as a transcriptional regulator (Chaps. 118, 339, 407, and 408). In addition to cerebellar hemangioblastomas, renal cancer, and pheochromocytomas, 10–17% of these patients develop a pNET. Most are nonfunctional, although insulinomas and VIPomas have been reported. Patients with NF-1 (von Recklinghausen's disease) have defects in a gene on chromosome 17q11.2 that encodes a 2845-amino-acid protein, neurofibromin, which functions in normal cells as a suppressor of the ras signaling cascade (Chap. 118). Up to 10% of these patients develop an upper GI-NET (carcinoid), characteristically in the periampullary region (54%). Many are classified as somatostatinomas because they contain somatostatin immunocytochemically; however, they uncommonly secrete somatostatin and rarely produce a clinical somatostatinoma syndrome. NF-1 has rarely been associated with insulinomas and Zollinger-Ellison syndrome. NF-1 accounts for 48% of all duodenal somatostatinomas and 23% of all ampullary GI-NETs (carcinoids). Tuberous sclerosis is caused by mutations that alter either the 1164-amino-acid protein hamartin (TSC1) or the 1807-amino-acid protein tuberin (TSC2) (Chap. 118). Both hamartin and tuberin interact in a pathway related to the phosphatidylinositol 3-kinase and mammalian target of rapamycin (mTOR) signaling cascades. A few cases of pNETs, both nonfunctional and functional (insulinomas and gastrinomas), have been reported in these patients (Table 113-6). Mahvash disease, which is due to a homozygous P86S mutation of the human glucagon receptor, is associated with α-cell hyperplasia, hyperglucagonemia, and the development of nonfunctional pNETs.
Mutations in common oncogenes (ras, myc, fos, src, jun) or common tumor-suppressor genes (p53, retinoblastoma susceptibility gene) are not commonly found in either pNETs or GI-NETs (carcinoids) (Table 113-1). However, frequent (70%) gene amplifications in MDM2, MDM4, and WIP1, which inactivate the p53 pathway, are noted in well-differentiated pNETs, and the retinoblastoma pathway is altered in the majority of pNETs. In addition to these genes, additional alterations that may be important in their pathogenesis include changes in the MEN1 gene, p16/MTS1 tumor-suppressor gene, and DPC4/Smad4 gene; amplification of the HER-2/neu protooncogene; alterations in transcription factors (Hoxc6 [GI carcinoids]), growth factors, and their receptors; methylation of a number of genes that probably results in their inactivation; and deletions of unknown tumor-suppressor genes as well as gains in other unknown genes (Table 113-1). The clinical antitumor activity of everolimus, an mTOR inhibitor, and sunitinib, a tyrosine kinase inhibitor (PDGFR, VEGFR1, VEGFR2, c-KIT, FLT-3), supports the importance of the mTOR-AKT pathway and tyrosine kinase receptors in mediating growth of malignant NETs (especially pNETs). The importance of the mTOR pathway in pNET growth is further supported by the finding that a single-nucleotide polymorphism (FGFR4-G388R, in fibroblast growth factor receptor 4) affects sensitivity to the mTOR inhibitor and can result in significantly higher risk of advanced pNET stage and liver metastases (Table 113-5). Comparative genomic hybridization, genome-wide allelotyping studies, and genome-wide single-nucleotide polymorphism analyses have shown that chromosomal losses and gains are common in pNETs and GI-NETs (carcinoids), but they differ between these two NET types, and some have prognostic significance (Table 113-5). Mutations in the MEN1 gene are probably particularly important.
Loss of heterozygosity at the MEN 1 locus on chromosome 11q13 is noted in 93% of sporadic pNETs (i.e., in patients without MEN 1) and in 26–75% of sporadic GI-NETs (carcinoids). Mutations in the MEN1 gene are reported in 31–34% of sporadic gastrinomas. Exomic sequencing of sporadic pNETs found that the most frequently altered gene was MEN1, occurring in 44% of patients, followed by mutations in 43% of patients in genes encoding two subunits of a transcription/chromatin remodeling complex consisting of DAXX (death-domain-associated protein) and ATRX (α-thalassemia/mental retardation syndrome X-linked) and in 15% of patients in genes of the mTOR pathway. The presence of a number of these molecular alterations in pNETs or GI-NETs (carcinoids) correlates with tumor growth, tumor size, and disease extent or invasiveness and may have prognostic significance (Table 113-5).
CHARACTERISTICS OF THE MOST COMMON GI-NETs (CARCINOIDS)
Appendiceal NETs (Carcinoids) Appendiceal NETs (carcinoids) occur in 1 in every 200–300 appendectomies, usually in the appendiceal tip; have an incidence of 0.15/100,000 per year; comprise 2–5% of all GI-NETs (carcinoids); and comprise 32–80% of all appendiceal tumors. Most (i.e., >90%) are <1 cm in diameter without metastases in older studies, but more recently, 2–35% have had metastases (Table 113-3). In the SEER data of 1570 appendiceal carcinoids, 62% were localized, 27% had regional metastases, and 8% had distant metastases. The risk of metastases increases with size, with those <1 cm having a 0 to <10% risk of metastases and those >2 cm having a 25–44% risk. Besides tumor size, other important prognostic factors for metastases include basal location, invasion of the mesoappendix, poor differentiation, advanced stage or WHO/ENETS classification, older age, and positive resection margins.
The 5-year survival is 88–100% for patients with localized disease, 78–100% for patients with regional involvement, and 12–28% for patients with distant metastases. In patients with tumors <1 cm in diameter, the 5-year survival is 95–100%, whereas it is 29% if tumors are >2 cm in diameter. Most tumors are well-differentiated G1 tumors (87%) (Table 113-4), with the remainder primarily well-differentiated G2 tumors (13%); poorly differentiated G3 tumors are uncommon (<1%). The percentage of the total number of carcinoids represented by appendiceal tumors decreased from 43.9% (1950–1969) to 2.4% (1992–1999). Appendiceal goblet cell (GC) NETs (carcinoids)/carcinomas are a rare subtype (<5%) that are mixed adeno-neuroendocrine carcinomas. They are malignant and are thought to comprise a distinct entity; they frequently present with advanced disease, and it is recommended that they be treated as adenocarcinomas, not carcinoid tumors. Small Intestinal NETs (Carcinoids) Small intestinal (SI) NETs (carcinoids) have a reported incidence of 0.67/100,000 in the United States, 0.32/100,000 in England, and 1.12/100,000 in Sweden and comprise >50% of all SI tumors. There is a male predominance (1.5:1), and race affects frequency, with a lower frequency in Asians and a greater frequency in African Americans. The mean age of presentation is 52–63 years, with a wide range (1–93 years). Familial SI carcinoid families exist but are very uncommon. These tumors are frequently multiple; 9–18% occur in the jejunum, 70–80% in the ileum, and 70% within 6 cm (2.4 in.) of the ileocecal valve. Forty percent are <1 cm in diameter, 32% are 1–2 cm, and 29% are >2 cm. They are characteristically well differentiated; however, they are generally invasive, with 1.2% being intramucosal in location, 27% penetrating the submucosa, and 20% invading the muscularis propria. Metastases occur in a mean of 47–58% (range 20–100%): to the liver in 38%, to lymph nodes in 37%, and to more distant sites in 20–25%.
They characteristically cause a marked fibrotic reaction, which can lead to intestinal obstruction. Tumor size is an important variable in the frequency of metastases. However, even small NETs (carcinoids) of the small intestine (<1 cm) have metastases in 15–25% of cases, whereas the proportion increases to 58–100% for tumors 1–2 cm in diameter. Carcinoids also occur in the duodenum, with 31% having metastases. Duodenal tumors <1 cm virtually never metastasize, whereas 33% of those >2 cm have metastases. SI NETs (carcinoids) are the most common cause (60–87%) of the carcinoid syndrome and are discussed in a later section (Table 113-7). Important prognostic factors are listed in Table 113-5; particularly important are tumor extent, proliferative index by grading, and stage (Table 113-4). The overall survival at 5 years is 55–75%; however, it varies markedly with disease extent, being 65–90% with localized disease, 66–72% with regional involvement, and 36–43% with distant disease. Rectal NETs (Carcinoids) Rectal NETs (carcinoids) comprise 27% of all GI-NETs (carcinoids) and 16% of all NETs and are increasing in frequency. In the U.S. SEER data, they currently have an incidence of 0.86/100,000 per year (up from 0.2/100,000 per year in 1973) and represent 1–2% of all rectal tumors. They are found in approximately 1 in every 1500–2500 proctoscopies/colonoscopies, or in 0.05–0.07% of individuals undergoing these procedures. Nearly all occur between 4 and 13 cm above the dentate line. Most are small, with 66–80% being <1 cm in diameter, and rarely metastasize (5%). Tumors between 1 and 2 cm can metastasize in 5–30%, and those >2 cm, which are uncommon, in >70%. Most invade only to the submucosa (75%), with 2.1% confined to the mucosa, 10% invading the muscular layer, and 5% invading adjacent structures. Histologically, most are well differentiated (98%), with 72% ENETS/WHO grade G1 and 28% grade G2 (Table 113-4).
Overall survival is 88%; however, it is very much dependent on the stage, with 5-year survival of 91% for localized disease, 36–49% for regional disease, and 20–32% for distant disease. Risk factors are listed in Table 113-5 and particularly include tumor size, depth of invasion, presence of metastases, differentiation, and recent TNM classification and grade. Bronchial NETs (Carcinoids) Bronchial NETs (carcinoids) comprise 25–33% of all well-differentiated NETs and 90% of all the poorly differentiated NETs found, likely due to a strong association with smoking. Their incidence ranges from 0.2 to 2/100,000 per year in the United States and European countries and is increasing at a rate of 6% per year. They are slightly more frequent in females and in whites compared with those of Hispanic/Asian/African descent, and are most commonly seen in the sixth decade of life, with a younger age of presentation for typical carcinoids (45 years) compared to atypical carcinoids (55 years). A number of different classifications of bronchial NETs (carcinoids) have been proposed. In some studies, they are classified into four categories: typical carcinoid (also called bronchial carcinoid tumor, Kulchitsky cell carcinoma I [KCC-I]), atypical carcinoid (also called well-differentiated neuroendocrine carcinoma [KCC-II]), intermediate small-cell neuroendocrine carcinoma, and small-cell neuroendocrine carcinoma (KCC-III). Another proposed classification includes three categories of lung NETs: benign or low-grade malignant (typical carcinoid), low-grade malignant (atypical carcinoid), and high-grade malignant (poorly differentiated carcinoma of the large-cell or small-cell type). The WHO classification includes four general categories: typical carcinoid, atypical carcinoid, large-cell neuroendocrine carcinoma, and small-cell carcinoma.
The ratio of typical to atypical carcinoids is 8–10:1, with typical carcinoids comprising 1–2% of lung tumors, atypical carcinoids 0.1–0.2%, large-cell neuroendocrine tumors 0.3%, and small-cell lung cancer 9.8% of all lung tumors. These different categories of lung NETs have different prognoses, varying from excellent for typical carcinoid to poor for small-cell neuroendocrine carcinomas. The occurrence of large-cell and small-cell lung carcinoids, but not typical or atypical lung carcinoids, is related to tobacco use. The 5-year survival is very much influenced by the classification of the tumor, with survival of 92–100% for patients with a typical carcinoid, 61–88% with an atypical carcinoid, 13–57% with a large-cell neuroendocrine tumor, and 5% with a small-cell lung cancer. Gastric NETs (Carcinoids) Gastric NETs (carcinoids) account for 3 of every 1000 gastric neoplasms and 1.3–2% of all carcinoids, and their relative frequency has increased three- to fourfold over the last five decades (2.2% in 1950 to 9.6% in 2000–2007, SEER data). At present, it is unclear whether this increase is due to better detection with the increased use of upper GI endoscopy or to a true increase in incidence. Gastric NETs (carcinoids) are classified into three different categories, and this classification has important implications for pathogenesis, prognosis, and treatment. Each originates from gastric enterochromaffin-like (ECL) cells, one of the six types of gastric neuroendocrine cells, in the gastric mucosa. Two subtypes are associated with hypergastrinemic states, either chronic atrophic gastritis (type I) (80% of all gastric NETs [carcinoids]) or Zollinger-Ellison syndrome, which is almost always a part of the MEN 1 syndrome (type II) (6% of all cases). These tumors generally pursue a benign course, with type I uncommonly (<10%) associated with metastases, whereas type II tumors are slightly more aggressive, with 10–30% associated with metastases.
They are usually multiple and small and infiltrate only to the submucosa. The third subtype of gastric NETs (carcinoids) (type III) (sporadic) occurs without hypergastrinemia (14–25% of all gastric carcinoids) and pursues an aggressive course, with 54–66% developing metastases. Sporadic carcinoids are usually single, large tumors; 50% have atypical histology, and they can be a cause of the carcinoid syndrome. Five-year survival is 99–100% in patients with type I, 60–90% in patients with type II, and 50% in patients with type III gastric NETs (carcinoids).
CLINICAL PRESENTATION OF NETs (CARCINOIDS)
GI/Lung NETs (Carcinoids) Without the Carcinoid Syndrome The age of patients at diagnosis ranges from 10 to 93 years, with a mean age of 63 years for the small intestine and 66 years for the rectum. The presentation is diverse and is related to the site of origin and the extent of malignant spread. In the appendix, NETs (carcinoids) usually are found incidentally during surgery for suspected appendicitis. SI NETs (carcinoids) in the jejunoileum present with periodic abdominal pain (51%), intestinal obstruction with ileus/invagination (31%), an abdominal tumor (17%), or GI bleeding (11%). Because of the vagueness of the symptoms, the diagnosis usually is delayed approximately 2 years from onset of the symptoms, with a range up to 20 years. Duodenal, gastric, and rectal NETs (carcinoids) are most frequently found by chance at endoscopy. The most common symptoms of rectal carcinoids are melena/bleeding (39%), constipation (17%), and diarrhea (12%). Bronchial NETs (carcinoids) frequently are discovered as a lesion on a chest radiograph, and 31% of the patients are asymptomatic. Thymic NETs (carcinoids) present as anterior mediastinal masses, usually on chest radiograph or computed tomography (CT) scan. Ovarian and testicular NETs (carcinoids) usually present as masses discovered on physical examination or ultrasound.
Metastatic NETs (carcinoids) in the liver frequently present as hepatomegaly in a patient who may have minimal symptoms and nearly normal liver function test results. GI/lung NETs (carcinoids) immunocytochemically can contain numerous GI peptides: gastrin, insulin, somatostatin, motilin, neurotensin, tachykinins (substance K, substance P, neuropeptide K), glucagon, gastrin-releasing peptide, vasoactive intestinal peptide (VIP), PP, ghrelin, other biologically active peptides (ACTH, calcitonin, growth hormone), prostaglandins, and bioactive amines (serotonin). These substances may or may not be released in sufficient amounts to cause symptoms. In various studies of patients with GI-NETs (carcinoids), elevated serum levels of PP were found in 43%, motilin in 14%, gastrin in 15%, and VIP in 6%. Foregut NETs (carcinoids) are more likely to produce various GI peptides than are midgut NETs (carcinoids). Ectopic ACTH production causing Cushing's syndrome is seen increasingly with foregut carcinoids (primarily of the respiratory tract) and, in some series, has been the most common cause of the ectopic ACTH syndrome, accounting for 64% of all cases. Acromegaly due to release of growth hormone–releasing factor occurs with foregut NETs (carcinoids), as does the somatostatinoma syndrome, although the latter rarely occurs with duodenal NETs (carcinoids). The most common systemic syndrome with GI-NETs (carcinoids) is the carcinoid syndrome, which is discussed in detail in the next section.
CARCINOID SYNDROME
Clinical Features The cardinal features at presentation and during the disease course, from a number of series, are shown in Table 113-7. Flushing and diarrhea are the two most common symptoms, occurring in a mean of 69–70% of patients initially and in up to 78% of patients during the course of the disease.
The characteristic flush is of sudden onset; it is a deep red or violaceous erythema of the upper body, especially the neck and face, often associated with a feeling of warmth and occasionally associated with pruritus, lacrimation, diarrhea, or facial edema. Flushes may be precipitated by stress; alcohol; exercise; certain foods, such as cheese; or certain agents, such as catecholamines, pentagastrin, and serotonin reuptake inhibitors. Flushing episodes may be brief, lasting 2–5 min, especially initially, or may last hours, especially later in the disease course. Flushing usually is associated with metastatic midgut NETs (carcinoids) but can also occur with foregut NETs (carcinoids). With bronchial NETs (carcinoids), the flushes frequently are prolonged for hours to days, reddish in color, and associated with salivation, lacrimation, diaphoresis, diarrhea, and hypotension. The flush associated with gastric NETs (carcinoids) can also be reddish in color, but with a patchy distribution over the face and neck, although the classic flush seen with midgut NETs (carcinoids) can also be seen with gastric NETs (carcinoids). It may be provoked by food and have accompanying pruritus. Diarrhea usually occurs with flushing (85% of cases). The diarrhea usually is described as watery, with 60% of patients having <1 L/d of diarrhea. Steatorrhea is present in 67%, and in 46%, it is >15 g/d (normal <7 g). Abdominal pain may be present with the diarrhea or independently in 10–34% of cases. Cardiac manifestations occur initially in 11–40% (mean 26%) of patients with carcinoid syndrome and in 14–41% (mean 30%) at some time in the disease course. The cardiac disease is due to the formation of fibrotic plaques (composed of smooth-muscle cells, myofibroblasts, and elastic tissue) involving the endocardium, primarily on the right side, although lesions on the left side also occur occasionally, especially if a patent foramen ovale exists. 
The dense fibrous deposits are most commonly on the ventricular aspect of the tricuspid valve and less commonly on the pulmonary valve cusps. They can result in constriction of the valves; with the pulmonary valve, stenosis usually predominates, whereas the tricuspid valve is often fixed open, so that regurgitation predominates. Overall, in patients with carcinoid heart disease, 90–100% have tricuspid insufficiency, 43–59% have tricuspid stenosis, 50–81% have pulmonary insufficiency, 25–59% have pulmonary stenosis, and 11% (range 0–25%) have left-sided lesions. Up to 80% of patients with cardiac lesions develop evidence of heart failure. Lesions on the left side are much less extensive, occur in 30% at autopsy, and most frequently affect the mitral valve. At diagnosis in various series, 27–43% of patients are in New York Heart Association class I, 30–40% are in class II, 13–31% are in class III, and 3–12% are in class IV. At present, carcinoid heart disease is reported to be decreasing in frequency and severity, with a mean occurrence in 20% of patients and an occurrence in as few as 3–4% in some reports. Whether this decrease is due to the widespread use of somatostatin analogues, which control the release of the bioactive agents thought to be involved in mediating the heart disease, is unclear. Other clinical manifestations include wheezing or asthma-like symptoms (8–18%), pellagra-like skin lesions (2–25%), and impaired cognitive function. A variety of noncardiac problems due to increased fibrous tissue have been reported, including retroperitoneal fibrosis causing ureteral obstruction, Peyronie's disease of the penis, intraabdominal fibrosis, and occlusion of the mesenteric arteries or veins.
Pathobiology Carcinoid syndrome occurred in 8% of 8876 patients with GI-NETs (carcinoids), with a rate of 1.7–18.4% in different studies.
It occurs only when sufficient concentrations of products secreted by the tumor reach the systemic circulation. In 91–100% of cases, this occurs after distant metastases to the liver. Rarely, primary GI-NETs (carcinoids) with nodal metastases and extensive retroperitoneal invasion, pNETs with retroperitoneal lymph node metastases, or NETs (carcinoids) of the lung or ovary with direct access to the systemic circulation can cause the carcinoid syndrome without hepatic metastases. Not all GI-NETs (carcinoids) have the same propensity to metastasize and cause the carcinoid syndrome (Table 113-3). Midgut NETs (carcinoids) account for 57–67% of cases of carcinoid syndrome, foregut NETs (carcinoids) for 0–33%, hindgut NETs for 0–8%, and an unknown primary location for 2–26% (Tables 113-3 and 113-7). One of the main secretory products of GI-NETs (carcinoids) involved in the carcinoid syndrome is serotonin (5-HT) (Fig. 113-1), which is synthesized from tryptophan. Up to 50% of dietary tryptophan can be used in this synthetic pathway by tumor cells, and this can result in inadequate supplies for conversion to niacin; hence, some patients (2.5%) develop pellagra-like lesions. Serotonin has numerous biologic effects, including stimulating intestinal secretion with inhibition of absorption, stimulating increases in intestinal motility, and stimulating fibrogenesis. In various studies, 56–88% of all GI-NETs (carcinoids) were associated with serotonin overproduction; however, 12–26% of these patients did not have the carcinoid syndrome. In one study, platelet serotonin was elevated in 96% of patients with midgut NETs (carcinoids), 43% with foregut tumors, and 0% with hindgut tumors. In 90–100% of patients with the carcinoid syndrome, there is evidence of serotonin overproduction. Serotonin is thought to be predominantly responsible for the diarrhea.
Patients with the carcinoid syndrome have increased colonic motility with a shortened transit time and possibly a secretory/absorptive alteration that is compatible with the known actions of serotonin in the gut, mediated primarily through 5-HT3 and, to a lesser degree, 5-HT4 receptors. Serotonin receptor antagonists (especially 5-HT3 antagonists) relieve the diarrhea in many, but not all, patients. A tryptophan 5-hydroxylase inhibitor, LX-1031, which inhibits serotonin synthesis in peripheral tissues, is reported to cause a 44% decrease in bowel movement frequency and a 20% improvement in stool form in patients with the carcinoid syndrome. Additional studies suggest that tachykinins may be important mediators of the diarrhea in some patients. In one study, plasma tachykinin levels correlated with symptoms of diarrhea. Serotonin does not appear to be involved in the flushing, because serotonin receptor antagonists do not relieve flushing. In patients with gastric carcinoids, the characteristic red, patchy, pruritic flush is thought to be due to histamine release, because H1 and H2 receptor antagonists can prevent it. Numerous studies have shown that tachykinins (substance P, neuropeptide K) are stored in GI-NETs (carcinoids) and released during flushing. However, some studies have demonstrated that octreotide can relieve the flushing induced by pentagastrin in these patients without altering the stimulated increase in plasma substance P, suggesting that other mediators must be involved in the flushing. A correlation between plasma tachykinin levels (but not substance P levels) and flushing has been reported. Prostaglandin release could be involved in mediating either the diarrhea or the flush, but conflicting data exist. Both histamine and serotonin may be responsible for the wheezing as well as for the fibrotic reactions involving the heart and those causing Peyronie's disease and intraabdominal fibrosis.
The exact mechanism of the heart disease remains unclear, although increasing evidence supports a central role for serotonin. Patients with heart disease have higher plasma levels of neurokinin A, substance P, atrial natriuretic peptide (ANP), pro-brain natriuretic peptide, chromogranin A, and activin A, as well as higher urinary 5-HIAA excretion. The valvular heart disease caused by the appetite-suppressant drug dexfenfluramine is histologically indistinguishable from that observed in carcinoid disease. Furthermore, ergot-containing dopamine receptor agonists used for Parkinson's disease (pergolide, cabergoline) cause valvular heart disease that closely resembles that seen in the carcinoid syndrome. In animal studies, the formation of valvular plaques/fibrosis occurs after prolonged treatment with serotonin, as well as in animals with a deficiency of the 5-HT (serotonin) transporter gene, which results in an inability to inactivate serotonin. Metabolites of fenfluramine, as well as the dopamine receptor agonists, have high affinity for the serotonin receptor subtype 5-HT2B, whose activation is known to cause fibroblast mitogenesis. Serotonin receptor subtypes 5-HT1B, 5-HT1D, 5-HT2A, and 5-HT2B normally are expressed in human heart valve interstitial cells. High levels of 5-HT2B receptors are known to occur in heart valves and are also found in cardiac fibroblasts and cardiomyocytes. Studies of cultured interstitial cells from human cardiac valves have demonstrated that these valvulopathic drugs induce mitogenesis by activating 5-HT2B receptors and stimulating upregulation of transforming growth factor β and collagen biosynthesis. These observations support the conclusion that serotonin overproduction by GI-NETs (carcinoids) is important in mediating the valvular changes, possibly by activating 5-HT2B receptors in the endocardium.
Both the magnitude of serotonin overproduction and prior chemotherapy are important predictors of progression of the heart disease, and patients with high plasma levels of ANP have a worse prognosis. Plasma connective tissue growth factor levels are elevated in many fibrotic conditions; elevated levels occur in patients with carcinoid heart disease and correlate with the presence of right ventricular dysfunction and the extent of valvular regurgitation in patients with GI-NETs (carcinoids). Patients may develop either a typical or, rarely, an atypical carcinoid syndrome (Fig. 113-1). In patients with the typical form, which characteristically is caused by midgut NETs (carcinoids), the conversion of tryptophan to 5-HTP is the rate-limiting step (Fig. 113-1). Once 5-HTP is formed, it is rapidly converted to 5-HT and stored in secretory granules of the tumor or in platelets. A small amount remains in plasma and is converted to 5-HIAA, which appears in large amounts in the urine. These patients have an expanded serotonin pool size, increased blood and platelet serotonin, and increased urinary 5-HIAA. Some GI-NETs (carcinoids) cause an atypical carcinoid syndrome that is thought to be due to a deficiency in the enzyme dopa decarboxylase; thus, 5-HTP cannot be converted to 5-HT (serotonin), and 5-HTP is secreted into the bloodstream (Fig. 113-1). In these patients, plasma serotonin levels are normal but urinary levels may be increased because some 5-HTP is converted to 5-HT in the kidney. Characteristically, urinary 5-HTP and 5-HT are increased, but urinary 5-HIAA levels are only slightly elevated. Foregut carcinoids are the most likely to cause an atypical carcinoid syndrome; however, they also can cause a typical carcinoid syndrome. One of the most immediate life-threatening complications of the carcinoid syndrome is the development of a carcinoid crisis.
Carcinoid crisis is more common in patients who have intense symptoms or greatly increased urinary 5-HIAA levels (i.e., >200 mg/d). The crisis may occur spontaneously; however, it is usually provoked by procedures such as anesthesia, chemotherapy, surgery, biopsy, endoscopy, or interventional radiologic procedures such as hepatic artery embolization and vessel catheterization. It can also be provoked by stress or by procedures as mild as repeated palpation of the tumor during physical examination. Patients develop intense flushing, diarrhea, abdominal pain, cardiac abnormalities including tachycardia, hypertension, or hypotension, and confusion or stupor. If not adequately treated, this can be a terminal event.
DIAGNOSIS OF THE CARCINOID SYNDROME AND GI-NETs (CARCINOIDS)
The diagnosis of carcinoid syndrome relies on measurement of serotonin or its metabolites in plasma or urine. The measurement of urinary 5-HIAA is used most frequently. False-positive elevations may occur if the patient is eating serotonin-rich foods such as bananas, pineapples, walnuts, pecans, avocados, or hickory nuts or is taking certain medications (cough syrup containing guaifenesin, acetaminophen, salicylates, serotonin reuptake inhibitors, or l-dopa). The normal range for daily urinary 5-HIAA excretion is 2–8 mg/d. Serotonin overproduction was noted in 92% of patients with carcinoid syndrome in one study, and in another study, 5-HIAA had 73% sensitivity and 100% specificity for carcinoid syndrome. Serotonin overproduction is not synonymous with the presence of clinical carcinoid syndrome, because 12–26% of patients with serotonin overproduction do not have clinical evidence of the carcinoid syndrome. Most physicians use only the urinary 5-HIAA excretion rate; however, plasma and platelet serotonin levels, if available, may provide additional information.
Platelet serotonin levels are more sensitive than urinary 5-HIAA but are not generally available. A single plasma 5-HIAA determination was found to correlate with the 24-h urinary values, raising the possibility that it could replace the standard urinary collection because of its greater convenience and avoidance of incomplete or improper collections. Because patients with foregut NETs (carcinoids) may produce an atypical carcinoid syndrome, if this syndrome is suspected and the urinary 5-HIAA is minimally elevated or normal, other urinary metabolites of tryptophan, such as 5-HTP and 5-HT, should be measured (Fig. 113-1).

Flushing occurs in a number of other diseases, including systemic mastocytosis, chronic myeloid leukemia with increased histamine release, menopause, reactions to alcohol or glutamate, and side effects of chlorpropamide, calcium channel blockers, and nicotinic acid. None of these conditions causes increased urinary 5-HIAA.

The diagnosis of carcinoid tumor can be suggested by the carcinoid syndrome, by recurrent abdominal symptoms in a healthy-appearing individual, or by the discovery of hepatomegaly or hepatic metastases associated with minimal symptoms. Ileal NETs (carcinoids), which make up 25% of all clinically detected carcinoids, should be suspected in patients with bowel obstruction, abdominal pain, flushing, or diarrhea.

Serum chromogranin A levels are elevated in 56–100% of patients with GI-NETs (carcinoids), and the level correlates with tumor bulk. Serum chromogranin A levels are not specific for GI-NETs (carcinoids) because they are also elevated in patients with pNETs and other NETs. Furthermore, a major problem is caused by potent acid antisecretory drugs such as proton pump inhibitors (omeprazole and related drugs) because they almost invariably cause elevation of plasma chromogranin A levels; the elevation occurs rapidly (3–5 days) with continued use, and the elevated levels overlap with those seen in many patients with NETs.
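The typical and atypical metabolite patterns described above (Fig. 113-1) can be summarized as a small decision sketch. This is an illustrative simplification with invented function and label names; the inputs are boolean flags, whereas real interpretation rests on quantitative assays and clinical context.

```python
def classify_carcinoid_biochemistry(urinary_5hiaa_elevated: bool,
                                    urinary_5htp_elevated: bool,
                                    urinary_5ht_elevated: bool) -> str:
    """Sketch of the typical vs atypical urinary metabolite patterns.

    Per the text: the typical syndrome shows markedly increased urinary
    5-HIAA; the atypical syndrome (dopa decarboxylase deficiency) shows
    increased urinary 5-HTP and 5-HT with only slightly elevated 5-HIAA.
    Illustrative sketch only.
    """
    if urinary_5hiaa_elevated and not urinary_5htp_elevated:
        return "pattern of typical carcinoid syndrome"
    if urinary_5htp_elevated and urinary_5ht_elevated:
        return "pattern of atypical carcinoid syndrome"
    return "indeterminate pattern"
```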
Plasma neuron-specific enolase levels are also used as a marker of GI-NETs (carcinoids) but are less sensitive than chromogranin A, being increased in only 17–47% of patients. Newer markers have been proposed, including pancreastatin (a chromogranin A breakdown product) and activin A. The former is not affected by proton pump inhibitors; however, its sensitivity and specificity are not established. Plasma activin elevations are reported to correlate with the presence of cardiac disease with a sensitivity of 87% and specificity of 57%.

Treatment includes avoiding conditions that precipitate flushing, dietary supplementation with nicotinamide, treatment of heart failure with diuretics, treatment of wheezing with oral bronchodilators, and control of the diarrhea with antidiarrheal agents such as loperamide and diphenoxylate. If patients still have symptoms, serotonin receptor antagonists or somatostatin analogues (Fig. 113-2) are the drugs of choice. There are 14 subclasses of serotonin receptors, and antagonists for many are not available. The 5-HT1 and 5-HT2 receptor antagonists methysergide, cyproheptadine, and ketanserin have all been used to control the diarrhea but usually do not decrease flushing. The use of methysergide is limited because it can cause or enhance retroperitoneal fibrosis. Ketanserin diminishes diarrhea in 30–100% of patients. 5-HT3 receptor antagonists (ondansetron, tropisetron, alosetron) can control diarrhea and nausea in up to 100% of patients and occasionally ameliorate the flushing. A combination of histamine H1 and H2 receptor antagonists (i.e., diphenhydramine and cimetidine or ranitidine) may control flushing in patients with foregut carcinoids. The tryptophan 5-hydroxylase inhibitor telotristat etiprate decreased bowel movement frequency in 44% of patients and improved stool consistency in 20%.
[Figure 113-2 depicts the structure of somatostatin and of the synthetic analogues used for diagnostic or therapeutic indications: 111In-[DTPA-D-Phe1,Tyr3]-octreotide (111In-pentetreotide; OctreoScan), 90Y-[DOTA0-D-Phe1,Tyr3]-octreotide, and 177Lu-[DOTA0-D-Phe1,Tyr3]-octreotate.]

Synthetic analogues of somatostatin (octreotide, lanreotide) are now the most widely used agents to control the symptoms of patients with carcinoid syndrome (Fig. 113-2). These drugs are effective at relieving symptoms and decreasing urinary 5-HIAA levels in patients with this syndrome. Octreotide-LAR and lanreotide-SR/autogel (Somatuline) (sustained-release formulations allowing monthly injections) control symptoms in 74% and 68% of patients, respectively, with carcinoid syndrome and show a biochemical response in 51% and 64%, respectively. Patients with mild to moderate symptoms usually are treated initially with octreotide 100 µg SC every 8 h and then begun on the long-acting monthly depot forms (octreotide-LAR or lanreotide-autogel). Forty percent of patients escape control after a median time of 4 months, and the depot dosage may have to be increased as well as supplemented with the shorter-acting formulation, SC octreotide. Pasireotide (SOM230) is a somatostatin analogue with broader receptor selectivity (high affinity for somatostatin receptors sst1, sst2, sst3, and sst5) than octreotide/lanreotide (sst2, sst5). In a phase II study of patients with refractory carcinoid syndrome, pasireotide controlled symptoms in 27%.

Carcinoid heart disease is associated with a decreased mean survival (3.8 years), and therefore, it should be sought and carefully assessed in all patients with carcinoid syndrome.
Transthoracic echocardiography remains a key element in establishing the diagnosis of carcinoid heart disease and determining the extent and type of cardiac abnormalities. Treatment with diuretics and somatostatin analogues can reduce the negative hemodynamic effects and secondary heart failure. It remains unclear whether long-term treatment with these drugs will decrease the progression of carcinoid heart disease. Balloon valvuloplasty for stenotic valves or cardiac valve surgery may be required.

In patients with carcinoid crises, somatostatin analogues are effective both at treating the crisis and at preventing its development during known precipitating events such as surgery, anesthesia, chemotherapy, and stress. It is recommended that octreotide 150–250 µg SC every 6–8 h be started 24–48 h before anesthesia and then continued throughout the procedure.

Currently, sustained-release preparations of both octreotide (octreotide-LAR [long-acting release], 10, 20, 30 mg) and lanreotide (lanreotide-PR [prolonged release, lanreotide-autogel], 60, 90, 120 mg) are available and widely used because they greatly facilitate long-term treatment. Octreotide-LAR (30 mg/month) gives a plasma level ≥1 ng/mL for 25 days, whereas achieving this level with the non-sustained-release form requires three to six injections a day. Lanreotide-autogel (Somatuline) is given every 4–6 weeks. Short-term side effects occur in up to one-half of patients. Pain at the injection site and side effects related to the GI tract (discomfort in 59%, nausea in 15%, diarrhea) are the most common. They are usually short-lived and do not interrupt treatment. Important long-term side effects include gallstone formation, steatorrhea, and deterioration in glucose tolerance. The overall incidence of gallstones/biliary sludge in one study was 52%, with 7% having symptomatic disease that required surgical treatment.
Interferon α is reported to be effective in controlling symptoms of the carcinoid syndrome either alone or combined with hepatic artery embolization. With interferon α alone, the clinical response rate is 30–70%; with interferon α plus hepatic artery embolization, diarrhea was controlled for 1 year in 43% and flushing in 86%. Side effects develop in almost all patients, the most frequent being a flu-like syndrome (80–100%), followed by anorexia and fatigue, although these frequently improve with continued treatment. Other more severe side effects include bone marrow toxicity, hepatotoxicity, autoimmune disorders, and rarely CNS side effects (depression, mental disorders, visual problems).

Hepatic artery embolization alone or with chemotherapy (chemoembolization) has been used to control the symptoms of carcinoid syndrome. Embolization alone is reported to control symptoms in up to 76% of patients, and chemoembolization (5-fluorouracil, doxorubicin, cisplatin, mitomycin) controls symptoms in 60–75% of patients. Hepatic artery embolization can have major side effects, including nausea, vomiting, pain, and fever. In two studies, 5–7% of patients died from complications of hepatic artery occlusion.

Other drugs have been used successfully in small numbers of patients to control the symptoms of carcinoid syndrome. Parachlorophenylalanine can inhibit tryptophan hydroxylase and therefore the conversion of tryptophan to 5-HTP. However, its severe side effects, including psychiatric disturbances, make it intolerable for long-term use. α-Methyldopa inhibits the conversion of 5-HTP to 5-HT, but its effects are only partial.

Peptide radioreceptor therapy (radiotherapy with radiolabeled somatostatin analogues), the use of radiolabeled microspheres, and other methods for treatment of advanced metastatic disease may facilitate control of the carcinoid syndrome and are discussed in a later section dealing with treatment of advanced disease.
Surgery is the only potentially curative therapy. Because with most GI-NETs (carcinoids) the probability of metastatic disease increases with increasing size, the extent of surgical resection is determined accordingly. With appendiceal NETs (carcinoids) <1 cm, simple appendectomy was curative in 103 patients followed for up to 35 years. With rectal NETs (carcinoids) <1 cm, local resection is curative. With SI NETs (carcinoids) <1 cm, there is no complete agreement: because, in different studies, 15–69% of SI NETs (carcinoids) of this size have metastases, some recommend a wide resection with en bloc resection of the adjacent lymph-bearing mesentery. If the tumor is >2 cm for rectal, appendiceal, or SI NETs (carcinoids), a full cancer operation should be done. This includes a right hemicolectomy for appendiceal NETs (carcinoids), an abdominoperineal resection or low anterior resection for rectal NETs (carcinoids), and an en bloc resection of adjacent lymph nodes for SI NETs (carcinoids). For appendiceal NETs (carcinoids) 1–2 cm in diameter, a simple appendectomy is proposed by some, whereas others favor a formal right hemicolectomy. For 1–2 cm rectal NETs (carcinoids), it is recommended that a wide, local, full-thickness excision be performed. With type I or II gastric NETs (carcinoids), which are usually <1 cm, endoscopic removal is recommended. In type I or II gastric carcinoids, if the tumor is >2 cm or if there is local invasion, some recommend total gastrectomy, whereas others recommend antrectomy in type I to reduce the hypergastrinemia, which has led to regression of the carcinoids in a number of studies. For type I and II gastric NETs (carcinoids) of 1–2 cm, there is no agreement, with some recommending endoscopic treatment followed by chronic somatostatin treatment and careful follow-up and others recommending surgical treatment. With type III gastric NETs (carcinoids) >2 cm, excision and regional lymph node clearance are recommended.
Most tumors <1 cm are treated endoscopically. Resection of isolated or limited hepatic metastases may be beneficial and is discussed in a later section on treatment of advanced disease.

Functional pNETs usually present clinically with symptoms due to the hormone-excess state (Table 113-2). Only late in the course of the disease does the tumor per se cause prominent symptoms such as abdominal pain. In contrast, all the symptoms of nonfunctional pNETs are due to the tumor per se. The overall result is that some functional pNETs may present with severe symptoms from a small or undetectable primary tumor, whereas nonfunctional tumors usually present late in the disease course with large tumors, which are frequently metastatic. The mean delay between onset of continuous symptoms and diagnosis of a functional pNET syndrome is 4–7 years. Therefore, the diagnoses frequently are missed for extended periods.

Treatment of pNETs requires two different strategies. First, treatment must be directed at the hormone-excess state, such as the gastric acid hypersecretion in gastrinomas or the hypoglycemia in insulinomas. Ectopic hormone secretion usually causes the presenting symptoms and can cause life-threatening complications. Second, with all the tumors except insulinomas, >50% are malignant (Table 113-2); therefore, treatment must also be directed against the tumor per se. Because in many patients these tumors are not surgically curable due to the presence of advanced disease at diagnosis, surgical resection for cure, which addresses both treatment aspects, is often not possible.

A gastrinoma is an NET that secretes gastrin; the resultant hypergastrinemia causes gastric acid hypersecretion (Zollinger-Ellison syndrome [ZES]).
The chronic hypergastrinemia results in marked gastric acid hypersecretion and growth of the gastric mucosa with increased numbers of parietal cells and proliferation of gastric ECL cells. The gastric acid hypersecretion characteristically causes peptic ulcer disease (PUD), often refractory and severe, as well as diarrhea. The most common presenting symptoms are abdominal pain (70–100%), diarrhea (37–73%), and gastroesophageal reflux disease (GERD) (30–35%); 10–20% of patients have diarrhea only. Although peptic ulcers may occur in unusual locations, most patients have a typical duodenal ulcer. Important observations that should suggest this diagnosis include PUD with diarrhea; PUD in an unusual location or with multiple ulcers; PUD refractory to treatment or persistent; PUD associated with prominent gastric folds; PUD associated with findings suggestive of MEN 1 (endocrinopathy, family history of ulcer or endocrinopathy, nephrolithiasis); and PUD without Helicobacter pylori present. H. pylori is present in >90% of idiopathic peptic ulcers but in <50% of patients with gastrinomas. Chronic unexplained diarrhea also should suggest ZES.

Approximately 20–25% of patients with ZES have MEN 1 (MEN 1/ZES), and in most cases, hyperparathyroidism is present before the ZES develops. These patients are treated differently from those without MEN 1 (sporadic ZES); therefore, MEN 1 should be sought in all patients with ZES by family history and by measuring plasma ionized calcium, prolactin, and hormone levels (parathormone, growth hormone).

Most gastrinomas (50–90%) in sporadic ZES are present in the duodenum, followed by the pancreas (10–40%) and other intraabdominal sites (mesentery, lymph nodes, biliary tract, liver, stomach, ovary). Rarely, the tumor may involve extraabdominal sites (heart, lung). In MEN 1/ZES, the gastrinomas are also usually in the duodenum (70–90%), followed by the pancreas (10–30%), and are almost always multiple.
About 60–90% of gastrinomas are malignant (Table 113-2), with metastatic spread to lymph nodes and liver. Distant metastases to bone occur in 12–30% of patients with liver metastases.

Diagnosis The diagnosis of ZES requires the demonstration of inappropriate fasting hypergastrinemia, usually by demonstrating hypergastrinemia occurring with an increased basal gastric acid output (BAO) (hyperchlorhydria). More than 98% of patients with ZES have fasting hypergastrinemia, although in 40–60% the level may be elevated less than tenfold. Therefore, when the diagnosis is suspected, a fasting gastrin is usually the initial test performed. It is important to remember that potent gastric acid suppressant drugs such as proton pump inhibitors (PPIs) (omeprazole, esomeprazole, pantoprazole, lansoprazole, rabeprazole) can suppress acid secretion sufficiently to cause hypergastrinemia; because of their prolonged duration of action, these drugs have to be tapered or frequently discontinued for a week before the gastrin determination. Withdrawal of PPIs should be performed carefully, because PUD complications can develop rapidly in some patients, and is best done in consultation with GI units with experience in this area. The widespread use of PPIs can confound the diagnosis of ZES both by causing a false-positive diagnosis (hypergastrinemia in a patient being treated for idiopathic PUD, without ZES) and by leading to a false-negative diagnosis, because at the routine doses used to treat idiopathic PUD, PPIs control symptoms in most ZES patients and thus mask the diagnosis. If ZES is suspected and the gastrin level is elevated, it is important to show that it is increased when gastric pH is ≤2.0, because physiologic hypergastrinemia secondary to achlorhydria (atrophic gastritis, pernicious anemia) is one of the most common causes of hypergastrinemia. Nearly all ZES patients have a fasting gastric pH ≤2 when off antisecretory drugs.
If the fasting gastrin is >1000 pg/mL (increased more than tenfold) and the gastric pH is ≤2.0, which occurs in 40–60% of patients with ZES, the diagnosis of ZES is established after the possibility of retained antrum syndrome has been ruled out by history. In patients with hypergastrinemia with fasting gastrins <1000 pg/mL (<10-fold increased) and gastric pH ≤2.0, other conditions, such as H. pylori infections, antral G-cell hyperplasia/hyperfunction, gastric outlet obstruction, and, rarely, renal failure, can masquerade as ZES. To establish the diagnosis in this group, a determination of BAO and a secretin provocative test should be done. In patients with ZES without previous gastric acid–reducing surgery, the BAO is usually (>90%) elevated (i.e., >15 mEq/h). The secretin provocative test is usually positive, with the criterion of a >120-pg/mL increase over the basal level having the highest sensitivity (94%) and specificity (100%).

Unfortunately, the diagnosis of ZES is becoming increasingly difficult. This is due not only to the widespread use of PPIs (leading to false-positive results as well as masking the presentation of ZES) but also to recent studies demonstrating that many of the commercial gastrin kits used by most laboratories to measure fasting serum gastrin levels are not reliable. In one study, 7 of the 12 commercial gastrin kits tested inaccurately assessed the true serum concentration of gastrin, primarily because the antibodies used had inappropriate specificity for the different circulating forms of gastrin and were not adequately validated. Both underestimation and overestimation of fasting serum gastrin levels occurred with these commercial kits. To circumvent this problem, it is necessary either to use one of the five reliable kits identified or to refer the patient to a center with expertise in making the diagnosis; if this is not possible, such a center should be contacted and the gastrin assay it recommends used.
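The stepwise logic above (fasting gastrin >1000 pg/mL with gastric pH ≤2.0 establishes ZES once retained antrum syndrome is excluded; lesser elevations with pH ≤2.0 call for BAO measurement and a secretin test; pH >2.0 points toward physiologic hypergastrinemia) can be summarized as a decision sketch. This is an illustrative sketch only, with an invented function name and labels; it assumes antisecretory drugs have been appropriately withdrawn and a reliable gastrin assay has been used.

```python
def zes_diagnostic_step(fasting_gastrin_pg_ml: float, gastric_ph: float) -> str:
    """Sketch of the ZES diagnostic steps described in the text.

    - pH >2.0: hypergastrinemia is likely physiologic (achlorhydria,
      e.g., atrophic gastritis or pernicious anemia).
    - Gastrin >1000 pg/mL with pH <=2.0: ZES established (after
      retained antrum syndrome is excluded by history).
    - Gastrin elevated but <=1000 pg/mL with pH <=2.0: proceed to BAO
      measurement (>15 mEq/h) and a secretin provocative test
      (criterion: >120-pg/mL rise over basal).
    Illustrative sketch only.
    """
    if gastric_ph > 2.0:
        return "likely physiologic hypergastrinemia (achlorhydria)"
    if fasting_gastrin_pg_ml > 1000:
        return "ZES established (exclude retained antrum syndrome by history)"
    return "indeterminate: measure BAO and perform secretin provocative test"
```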
An accurate gastrin assay is essential for accurate measurement of the fasting serum gastrin level as well as for assessing gastrin levels during the secretin provocative test, and thus, the diagnosis of ZES cannot reliably be made without one.

Gastric acid hypersecretion in patients with ZES can be controlled in almost every case by oral gastric antisecretory drugs. Because of their long duration of action and potency, which allows dosing once or twice a day, the PPIs (H+, K+-ATPase inhibitors) are the drugs of choice. Histamine H2-receptor antagonists are also effective, although more frequent dosing (q 4–8 h) and high doses are required. In patients with MEN 1/ZES with hyperparathyroidism, correction of the hyperparathyroidism increases the sensitivity to gastric antisecretory drugs and decreases the basal acid output. Long-term treatment with PPIs (>15 years) has proved to be safe and effective, without development of tachyphylaxis. Although patients with ZES, especially those with MEN 1/ZES, more frequently develop gastric NETs (carcinoids), no data suggest that the long-term use of PPIs increases this risk in these patients. With long-term PPI use in ZES patients, vitamin B12 deficiency can develop; thus, vitamin B12 levels should be assessed during follow-up. Epidemiologic studies suggest that long-term PPI use may be associated with an increased incidence of bone fractures; however, at present, there is no such report in ZES patients.

With the increased ability to control acid hypersecretion, more than 50% of patients who are not cured (>60% of patients) will die from tumor-related causes. At presentation, careful imaging studies are essential to localize the extent of the tumor and determine the appropriate treatment. A third of patients present with hepatic metastases, and in <15% of those patients, the disease is limited enough that surgical resection may be possible.
Surgical short-term cure is possible in 60% of all patients without MEN 1/ZES or liver metastases (40% of all patients) and in 30% of patients long term. In patients with MEN 1/ZES, long-term surgical cure is rare because the tumors are multiple, frequently with lymph node metastases. Surgical studies demonstrate that successful resection of the gastrinoma not only decreases the chances of developing liver metastases but also increases the disease-related survival rate. Therefore, all patients with gastrinomas without MEN 1/ZES or a medical condition that limits life expectancy should undergo surgery by a surgeon experienced in the treatment of these disorders.

An insulinoma is an NET of the pancreas that is thought to be derived from beta cells that ectopically secrete insulin, which results in hypoglycemia. The average age of occurrence is 40–50 years. The most common clinical symptoms are due to the effect of the hypoglycemia on the CNS (neuroglycemic symptoms) and include confusion, headache, disorientation, visual difficulties, irrational behavior, and even coma. Also, most patients have symptoms due to excess catecholamine release secondary to the hypoglycemia, including sweating, tremor, and palpitations. Characteristically, these attacks are associated with fasting. Insulinomas are generally small (>90% are <2 cm) and usually not multiple (90%); only 5–15% are malignant, and they almost invariably occur only in the pancreas, distributed equally in the pancreatic head, body, and tail. Insulinomas should be suspected in all patients with hypoglycemia, especially when there is a history suggesting that attacks are provoked by fasting, or with a family history of MEN 1. Insulin is synthesized as proinsulin, which consists of a 21-amino-acid α chain and a 30-amino-acid β chain connected by a 33-amino-acid connecting peptide (C peptide).
In insulinomas, in addition to elevated plasma insulin levels, elevated plasma proinsulin levels are found, and C-peptide levels are elevated.

Diagnosis The diagnosis of insulinoma requires the demonstration of an elevated plasma insulin level at the time of hypoglycemia. A number of other conditions may cause fasting hypoglycemia, such as the inadvertent or surreptitious use of insulin or oral hypoglycemic agents, severe liver disease, alcoholism, poor nutrition, and other extrapancreatic tumors. Furthermore, postprandial hypoglycemia can be caused by a number of conditions that confuse the diagnosis of insulinoma. Particularly important here is the increased occurrence of hypoglycemia after gastric bypass surgery for obesity, which is now widely performed. A new entity, insulinomatosis, has been described that can cause hypoglycemia and mimic insulinomas. It occurs in 10% of patients with persistent hyperinsulinemic hypoglycemia and is characterized by the occurrence of multiple macro-/microadenomas expressing insulin; it is not clear how to distinguish this entity from insulinoma preoperatively. The most reliable test to diagnose insulinoma is a fast of up to 72 h with serum glucose, C-peptide, proinsulin, and insulin measurements every 4–8 h. If at any point the patient becomes symptomatic or glucose levels are persistently <2.2 mmol/L (40 mg/dL), the test should be terminated, and repeat samples for the above studies should be obtained before glucose is given. Some 70–80% of patients will develop hypoglycemia during the first 24 h, and 98% by 48 h. In nonobese normal subjects, serum insulin levels should decrease to <43 pmol/L (<6 µU/mL) when blood glucose decreases to <2.2 mmol/L (<40 mg/dL), and the ratio of insulin to glucose is <0.3 (in mg/dL).
In addition to having an insulin level >6 µU/mL when blood glucose is <40 mg/dL, some investigators also require an elevated C-peptide and serum proinsulin level, an insulin/glucose ratio >0.3, and a decreased plasma β-hydroxybutyrate level for the diagnosis of insulinoma. Surreptitious use of insulin or hypoglycemic agents may be difficult to distinguish from insulinoma. The combination of proinsulin levels (normal in exogenous insulin/hypoglycemic agent users), C-peptide levels (low in exogenous insulin users), antibodies to insulin (positive in exogenous insulin users), and measurement of sulfonylurea levels in serum or plasma will allow the correct diagnosis to be made. The diagnosis of insulinoma has been complicated by the introduction of specific insulin assays that do not also interact with proinsulin, as many of the older radioimmunoassays (RIAs) did, and that therefore give lower plasma insulin levels. The increased use of these specific insulin assays has resulted in increased numbers of patients with insulinomas having lower plasma insulin values (<6 µU/mL) than the levels proposed to be characteristic of insulinomas by RIA. In these patients, the assessment of proinsulin and C-peptide levels at the time of hypoglycemia is particularly helpful for establishing the correct diagnosis. An elevated proinsulin level when the fasting glucose level is <45 mg/dL is sensitive and specific.

Only 5–15% of insulinomas are malignant; therefore, after appropriate imaging (see below), surgery should be performed. In different studies, 75–100% of patients are cured by surgery. Before surgery, the hypoglycemia can be controlled by frequent small meals and the use of diazoxide (150–800 mg/d). Diazoxide is a benzothiadiazide whose hyperglycemic effect is attributed to inhibition of insulin release. Its side effects are sodium retention and GI symptoms such as nausea. Approximately 50–60% of patients respond to diazoxide.
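The supervised-fast criteria above (hypoglycemia <40 mg/dL with a non-suppressed insulin level >6 µU/mL, plus, for some investigators, an insulin/glucose ratio >0.3) can be sketched as a small helper. This is an illustrative sketch with an invented function name, not a diagnostic rule: as the text notes, newer specific insulin assays give lower values, so proinsulin and C-peptide at the time of hypoglycemia must also be assessed.

```python
def insulinoma_fast_positive(glucose_mg_dl: float,
                             insulin_uU_ml: float) -> bool:
    """Sketch of the supervised-fast criteria from the text.

    A positive result here requires documented hypoglycemia
    (glucose <40 mg/dL) with a non-suppressed insulin level
    (>6 uU/mL) and an insulin/glucose ratio >0.3 (units: uU/mL
    over mg/dL). Illustrative sketch only.
    """
    if glucose_mg_dl >= 40:
        return False  # no documented hypoglycemia during the fast
    return insulin_uU_ml > 6 and (insulin_uU_ml / glucose_mg_dl) > 0.3
```

For example, glucose 35 mg/dL with insulin 20 µU/mL meets all three conditions, whereas the same glucose with an appropriately suppressed insulin of 5 µU/mL does not.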
Other agents effective in some patients at controlling the hypoglycemia include verapamil and diphenylhydantoin. Long-acting somatostatin analogues such as octreotide and lanreotide are acutely effective in 40% of patients. However, octreotide must be used with care because it inhibits growth hormone secretion and can alter plasma glucagon levels; therefore, in some patients, it can worsen the hypoglycemia. For the 5–15% of patients with malignant insulinomas, these drugs or somatostatin analogues are used initially. In a small number of patients with insulinomas, some with malignant tumors, mammalian target of rapamycin (mTOR) inhibitors (everolimus, rapamycin) are reported to control the hypoglycemia. If they are not effective, various antitumor treatments such as hepatic arterial embolization, chemoembolization, chemotherapy, and peptide receptor radiotherapy have been used (see below). Insulinomas, which are usually benign (>90%) and intrapancreatic in location, are increasingly resected using a laparoscopic approach, which has lower morbidity rates. This approach requires that the insulinoma be localized on preoperative imaging studies.

A glucagonoma is an NET of the pancreas that secretes excessive amounts of glucagon, which causes a distinct syndrome characterized by dermatitis, glucose intolerance or diabetes, and weight loss. Glucagonomas principally occur between 45 and 70 years of age. The tumor is clinically heralded by a characteristic dermatitis (migratory necrolytic erythema) (67–90%), accompanied by glucose intolerance (40–90%), weight loss (66–96%), anemia (33–85%), diarrhea (15–29%), and thromboembolism (11–24%). The characteristic rash usually starts as an annular erythema at intertriginous and periorificial sites, especially in the groin or buttock. It subsequently becomes raised, and bullae form; when the bullae rupture, eroded areas form. The lesions can wax and wane.
The development of a similar rash in patients receiving glucagon therapy suggests that the rash is a direct effect of the hyperglucagonemia. A characteristic laboratory finding is hypoaminoacidemia, which occurs in 26–100% of patients. Glucagonomas are generally large tumors at diagnosis (5–10 cm). Some 50–80% occur in the pancreatic tail. From 50 to 82% have evidence of metastatic spread at presentation, usually to the liver. Glucagonomas are rarely extrapancreatic and usually occur singly.

Two new entities have been described that can also cause hyperglucagonemia and may mimic glucagonomas. Mahvash disease is due to a homozygous P86S mutation of the human glucagon receptor. It is associated with the development of α-cell hyperplasia, hyperglucagonemia, and the development of nonfunctioning pNETs. A second disease, called glucagon cell adenomatosis, can mimic the glucagonoma syndrome clinically and is characterized by the presence of hyperplastic islets staining positive for glucagon instead of a single glucagonoma.

Diagnosis The diagnosis is confirmed by demonstrating an increased plasma glucagon level. Characteristically, plasma glucagon levels exceed 1000 pg/mL (normal is <150 pg/mL) in 90%; 7% are between 500 and 1000 pg/mL, and 3% are <500 pg/mL. A trend toward lower levels at diagnosis has been noted in the last decade. A plasma glucagon level >1000 pg/mL is considered diagnostic of glucagonoma. Other diseases causing increased plasma glucagon levels include cirrhosis, diabetic ketoacidosis, celiac disease, renal insufficiency, acute pancreatitis, hypercorticism, hepatic insufficiency, severe stress, prolonged fasting, familial hyperglucagonemia, and danazol treatment. With the exception of cirrhosis, these disorders do not increase plasma glucagon to >500 pg/mL.
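The glucagon cutoffs above (normal <150 pg/mL; >1000 pg/mL considered diagnostic; other causes except cirrhosis rarely exceeding 500 pg/mL) can be sketched as a small interpreter. This is an illustrative sketch only; the function name and result labels are invented, and the clinical picture (rash, weight loss, diabetes) remains essential to the diagnosis.

```python
def interpret_plasma_glucagon(pg_ml: float) -> str:
    """Sketch of the plasma glucagon cutoffs from the text.

    Normal is <150 pg/mL; >1000 pg/mL is considered diagnostic of
    glucagonoma; other causes of hyperglucagonemia (except cirrhosis)
    rarely raise levels above 500 pg/mL. Illustrative sketch only.
    """
    if pg_ml < 150:
        return "normal"
    if pg_ml > 1000:
        return "considered diagnostic of glucagonoma"
    if pg_ml > 500:
        return "strongly suggestive; few other causes except cirrhosis"
    return "elevated; consider other causes of hyperglucagonemia"
```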
Necrolytic migratory erythema is not pathognomonic for glucagonoma and also occurs in myeloproliferative disorders, hepatitis B infection, malnutrition, short-bowel syndrome, inflammatory bowel disease, zinc deficiency, and malabsorption disorders.

In 50–80% of patients, hepatic metastases are present, so curative surgical resection is not possible. Surgical debulking in patients with advanced disease or other antitumor treatments may be beneficial (see below). Long-acting somatostatin analogues such as octreotide and lanreotide improve the skin rash in 75% of patients and may improve the weight loss, pain, and diarrhea, but usually do not improve the glucose intolerance.

The somatostatinoma syndrome is due to an NET that secretes excessive amounts of somatostatin, which causes a distinct syndrome characterized by diabetes mellitus, gallbladder disease, diarrhea, and steatorrhea. There is no general distinction in the literature between a tumor that contains somatostatin-like immunoreactivity (somatostatinoma) and does (11–45%) or does not (55–90%) produce a clinical syndrome (somatostatinoma syndrome) by secreting somatostatin. In a review of 173 cases of somatostatinomas, only 11% were associated with the somatostatinoma syndrome. The mean age is 51 years. Somatostatinomas occur primarily in the pancreas and small intestine, and the frequency of the symptoms and the occurrence of the somatostatinoma syndrome differ for each. Each of the usual symptoms is more common in pancreatic than in intestinal somatostatinomas: diabetes mellitus (95% vs 21%), gallbladder disease (94% vs 43%), diarrhea (92% vs 38%), steatorrhea (83% vs 12%), hypochlorhydria (86% vs 12%), and weight loss (90% vs 69%). The somatostatinoma syndrome occurs in 30–90% of pancreatic and 0–5% of SI somatostatinomas. In various series, 43% of all duodenal NETs contain somatostatin; however, the somatostatinoma syndrome is rarely present (<2%).
Somatostatinomas occur in the pancreas in 56–74% of cases, with the primary location being the pancreatic head. The tumors are usually solitary (90%) and large (mean size 4.5 cm). Liver metastases are common, being present in 69–84% of patients. Somatostatinomas are rare in patients with MEN 1, occurring in only 0.65%. Somatostatin is a tetradecapeptide that is widely distributed in the CNS and GI tract, where it functions as a neurotransmitter or has paracrine and autocrine actions. It is a potent inhibitor of many processes, including release of almost all hormones, acid secretion, intestinal and pancreatic secretion, and intestinal absorption. Most of the clinical manifestations are directly related to these inhibitory actions. Diagnosis In most cases, somatostatinomas have been found by accident either at the time of cholecystectomy or during endoscopy. The presence of psammoma bodies in a duodenal tumor should particularly raise suspicion. Duodenal somatostatin-containing tumors are increasingly associated with von Recklinghausen’s disease (NF-1) (Table 113-6). Most of these tumors (>98%) do not cause the somatostatinoma syndrome. The diagnosis of the somatostatinoma syndrome requires the demonstration of elevated plasma somatostatin levels. Pancreatic tumors are frequently (70–92%) metastatic at presentation, whereas 30–69% of SI somatostatinomas have metastases. Surgery is the treatment of choice for those without widespread hepatic metastases. Symptoms in patients with the somatostatinoma syndrome are also improved by octreotide treatment. VIPomas are NETs that secrete excessive amounts of vasoactive intestinal peptide (VIP), which causes a distinct syndrome characterized by large-volume diarrhea, hypokalemia, and dehydration. This syndrome also is called Verner-Morrison syndrome, pancreatic cholera, and WDHA syndrome for watery diarrhea, hypokalemia, and achlorhydria, which some patients develop. 
The mean age of patients with this syndrome is 49 years; however, it can occur in children, and when it does, it is usually caused by a ganglioneuroma or ganglioneuroblastoma. The principal symptoms are large-volume diarrhea (100%) severe enough to cause hypokalemia (80–100%), dehydration (83%), hypochlorhydria (54–76%), and flushing (20%). The diarrhea is secretory in nature, persisting during fasting, and is almost always >1 L/d and in 70% is >3 L/d. In a number of studies, the diarrhea was intermittent initially in up to half the patients. Most patients do not have accompanying steatorrhea (16%), and the increased stool volume is due to increased excretion of sodium and potassium, which, with the anions, accounts for the osmolality of the stool. Patients frequently have hyperglycemia (25–50%) and hypercalcemia (25–50%). VIP is a 28-amino-acid peptide that is an important neurotransmitter, ubiquitously present in the CNS and GI tract. Its known actions include stimulation of SI chloride secretion as well as effects on smooth-muscle contractility, inhibition of acid secretion, and vasodilatory effects, which explain most features of the clinical syndrome. In adults, 80–90% of VIPomas are pancreatic in location, with the rest due to VIP-secreting pheochromocytomas, intestinal carcinoids, and rarely ganglioneuromas. These tumors are usually solitary, 50–75% are in the pancreatic tail, and 37–68% have hepatic metastases at diagnosis. In children <10 years old, the syndrome is usually due to ganglioneuromas or ganglioneuroblastomas and is less often malignant (10%). Diagnosis The diagnosis requires the demonstration of an elevated plasma VIP level and the presence of large-volume diarrhea. A stool volume <700 mL/d is proposed to exclude the diagnosis of VIPoma. Having the patient fast can exclude a number of other diseases that cause marked diarrhea, because in those conditions the high volume of diarrhea is not sustained during the fast.
Other diseases that can produce a secretory large-volume diarrhea include gastrinomas, chronic laxative abuse, carcinoid syndrome, systemic mastocytosis, rarely medullary thyroid cancer, diabetic diarrhea, sprue, and AIDS. Among these conditions, only VIPomas cause a marked increase in plasma VIP. Chronic surreptitious use of laxatives/diuretics can be particularly difficult to detect clinically. Hence, in a patient with unexplained chronic diarrhea, screens for laxatives should be performed; they will detect many, but not all, laxative abusers. Elevated plasma levels of VIP should not be the only basis of the diagnosis of VIPomas because they can occur in some diarrheal states, including inflammatory bowel disease, prior small-bowel resection, and radiation enteritis. Furthermore, nesidioblastosis can mimic VIPomas by causing elevated plasma VIP levels, diarrhea, and even false-positive localization in the pancreatic region on somatostatin receptor scintigraphy. The most important initial treatment in these patients is to correct their dehydration, hypokalemia, and electrolyte losses with fluid and electrolyte replacement. These patients may require 5 L/d of fluid and >350 mEq/d of potassium. Because 37–68% of adults with VIPomas have metastatic disease in the liver at presentation, a significant number of patients cannot be cured surgically. In these patients, long-acting somatostatin analogues such as octreotide and lanreotide are the drugs of choice. Octreotide/lanreotide will control the diarrhea short- and long-term in 75–100% of patients. In nonresponsive patients, the combination of glucocorticoids and octreotide/lanreotide has proved helpful in a small number of patients. Other drugs reported to be helpful in small numbers of patients include prednisone (60–100 mg/d), clonidine, indomethacin, phenothiazines, loperamide, lidamidine, lithium, propranolol, and metoclopramide.
Treatment of advanced disease with cytoreductive surgery, embolization, chemoembolization, chemotherapy, radiotherapy, radiofrequency ablation, and peptide receptor radiotherapy may be helpful (see below). NF-pNETs are NETs that originate in the pancreas and either secrete no products or their products do not cause a specific clinical syndrome. Their symptoms are due entirely to the tumor per se. NF-pNETs secrete chromogranin A (90–100%), chromogranin B (90–100%), α-HCG (human chorionic gonadotropin) (40%), neuron-specific enolase (31%), and β-HCG (20%), and because 40–90% secrete PP, they are also often called PPomas. Because the symptoms are due to the tumor mass, patients with NF-pNETs usually present late in the disease course with invasive tumors and hepatic metastases (64–92%), and the tumors are usually large (72% >5 cm). NF-pNETs are usually solitary except in patients with MEN 1, in which case they are multiple. They occur primarily in the pancreatic head. Even though these tumors do not cause a functional syndrome, immunocytochemical studies show that they synthesize numerous peptides and cannot be distinguished from functional pNETs by immunocytochemistry. In MEN 1, 80–100% of patients have microscopic NF-pNETs, but they become large or symptomatic in a minority (0–13%) of cases. In VHL, 12–17% develop NF-pNETs, and in 4%, they are ≥3 cm in diameter. The most common symptoms are abdominal pain (30–80%), jaundice (20–35%), and weight loss, fatigue, or bleeding; 10–35% are found incidentally. The average time from the beginning of symptoms to diagnosis is 5 years. Diagnosis The diagnosis is established by histologic confirmation in a patient without either the clinical symptoms or the elevated plasma hormone levels of one of the established syndromes. The principal difficulty in diagnosis is to distinguish an NF-pNET from a nonendocrine pancreatic tumor, which is more common, as well as from a functional pNET. 
Even though chromogranin A levels are elevated in almost every patient, this finding is not specific for this disease, as it can occur in functional pNETs, GI-NETs (carcinoids), and other neuroendocrine disorders. Plasma PP elevations should strongly suggest the diagnosis in a patient with a pancreatic mass because PP is usually normal in patients with pancreatic adenocarcinomas. Elevated plasma PP is not diagnostic of this tumor because it is elevated in a number of other conditions, such as chronic renal failure, old age, inflammatory conditions, alcohol abuse, pancreatitis, hypoglycemia, the postprandial state, and diabetes. A positive somatostatin receptor scan in a patient with a pancreatic mass should suggest the presence of a pNET/NF-pNET rather than a nonendocrine tumor. Overall survival in patients with sporadic NF-pNETs is 30–63% at 5 years, with a median survival of 6 years. Unfortunately, curative surgical resection can be considered in only a minority of these patients because 64–92% present with diffuse metastatic disease. Treatment needs to be directed against the tumor per se using the various modalities discussed below for advanced disease. The treatment of NF-pNETs in either MEN 1 patients or patients with VHL is controversial. Most recommend surgical resection for any tumor >2–3 cm in diameter; however, there is no consensus on smaller NF-pNETs in these inherited disorders, with most recommending careful surveillance of these patients. The treatment of small sporadic, asymptomatic NF-pNETs (≤2 cm) is also controversial. Most of these are low- or intermediate-grade lesions, and <7% are malignant. Some advocate a nonoperative approach with careful, regular follow-up, whereas others recommend an operative approach, with special consideration of a laparoscopic surgical approach. GRFomas are NETs that secrete excessive amounts of growth hormone–releasing factor (GRF), causing acromegaly.
GRF is a 44-amino-acid peptide, and 25–44% of pNETs have GRF immunoreactivity, although it is uncommonly secreted. GRFomas are lung tumors in 47–54% of cases, pNETs in 29–30%, and SI carcinoids in 8–10%; up to 12% occur at other sites. Patients have a mean age of 38 years, and the symptoms usually are due to either acromegaly or the tumor per se. The acromegaly caused by GRFomas is indistinguishable from classic acromegaly. The pancreatic tumors are usually large (>6 cm), and liver metastases are present in 39%. GRFomas should be suspected in any patient with acromegaly and an abdominal tumor, a patient with MEN 1 and acromegaly, or a patient with acromegaly without a pituitary adenoma or with acromegaly associated with hyperprolactinemia, which occurs in 70% of GRFomas. GRFomas are an uncommon cause of acromegaly. GRFomas occur in <1% of MEN 1 patients. The diagnosis is established by performing plasma assays for GRF and growth hormone. Most GRFomas have a plasma GRF level >300 pg/mL (normal <5 pg/mL in men, <10 pg/mL in women). Patients with GRFomas also have increased plasma levels of insulin-like growth factor type I (IGF-I) similar to those in classic acromegaly. Surgery is the treatment of choice if diffuse metastases are not present. Long-acting somatostatin analogues such as octreotide and lanreotide are the agents of choice, with 75–100% of patients responding. Cushing’s syndrome (ACTHoma) due to a pNET occurs in 4–16% of all ectopic Cushing’s syndrome cases. It occurs in 5% of cases of sporadic gastrinomas, almost invariably in patients with hepatic metastases, and is an independent poor prognostic factor. Paraneoplastic hypercalcemia due to pNETs releasing parathyroid hormone–related peptide (PTHrP), a PTH-like material, or an unknown factor is rarely reported. The tumors are usually large, and liver metastases are usually present. Most (88%) appear to be due to release of PTHrP. pNETs occasionally can cause the carcinoid syndrome.
A number of very rare pNET syndromes involving a few cases (fewer than five) have been described; these include a renin-producing pNET in a patient presenting with hypertension; pNETs secreting luteinizing hormone, resulting in masculinization or decreased libido; a pNET secreting erythropoietin, resulting in polycythemia; pNETs secreting IGF-II, causing hypoglycemia; and pNETs secreting enteroglucagon, causing small intestinal hypertrophy, colonic/SI stasis, and malabsorption (Table 113-2). A number of other possible functional pNETs have been proposed, but most authorities classify these as unclear or as nonfunctional pNETs because in each case numerous patients have been described with similar plasma hormone elevations that do not cause any symptoms. These include pNETs secreting calcitonin, neurotensin (neurotensinoma), PP (PPoma), and ghrelin (Table 113-2). Localization of the primary tumor and knowledge of the extent of the disease are essential to the proper management of all GI-NETs (carcinoids) and pNETs. Without proper localization studies, it is not possible to determine whether the patient is a candidate for surgical resection (curative or cytoreductive) or requires antitumor treatment, to determine whether the patient is responding to antitumor therapies, or to appropriately classify/stage the patient’s disease to assess prognosis. Numerous tumor localization methods are used in both types of NETs, including cross-sectional imaging studies (CT, magnetic resonance imaging [MRI], transabdominal ultrasound), selective angiography, somatostatin receptor scintigraphy (SRS), and positron emission tomography. In pNETs, endoscopic ultrasound (EUS) and functional localization by measuring venous hormonal gradients are also reported to be useful. Bronchial carcinoids are usually detected by standard chest radiography and assessed by CT.
Rectal, duodenal, colonic, and gastric carcinoids are usually detected by GI endoscopy. Because of their wide availability, CT and MRI are generally used initially to determine the location of the primary NET and the extent of disease. NETs are hypervascular tumors, and with both MRI and CT, contrast enhancement is essential for maximal sensitivity; triple-phase scanning is generally recommended. The ability of cross-sectional imaging and, to a lesser extent, SRS to detect NETs is a function of NET size. With CT and MRI, <10% of tumors <1 cm in diameter are detected, 30–40% of tumors 1–3 cm are detected, and >50% of tumors >3 cm are detected. Many primary GI-NETs (carcinoids) are small, as are insulinomas and duodenal gastrinomas, and are frequently not detected by cross-sectional imaging, whereas most other pNETs present late in the course of their disease and are large (>4 cm). Selective angiography is more sensitive, localizing 60–90% of all NETs; however, it is now used infrequently. For detecting liver metastases, CT and MRI are more sensitive than ultrasound, but even with recent improvements, 5–25% of patients with liver metastases will be missed by CT and/or MRI. pNETs, as well as GI-NETs (carcinoids), frequently (>80%) overexpress high-affinity somatostatin receptors in both the primary tumors and the metastases. Of the five types of somatostatin receptors (sst1–5), radiolabeled octreotide binds with high affinity to sst2 and sst5, has a lower affinity for sst3, and has a very low affinity for sst1 and sst4. Between 80 and 100% of GI-NETs (carcinoids) and pNETs possess sst2, and many also have the other four sst subtypes. Interaction with these receptors can be used to treat these tumors as well as to localize NETs by using radiolabeled somatostatin analogues (SRS). In the United States, [111In-DTPA-d-Phe1]octreotide (octreoscan) is generally used, with gamma camera detection using single-photon emission computed tomography (SPECT) imaging.
Numerous studies, primarily in Europe, using gallium-68-labeled somatostatin analogues and positron emission tomography (PET) detection demonstrate even greater sensitivity than SRS with 111In-labeled somatostatin analogues. Although this approach is not yet approved in the United States, a number of centers are starting to use it. Because of its sensitivity and ability to localize tumor throughout the body, SRS is the initial imaging modality of choice for localizing both the primary tumor and metastatic NETs. SRS localizes tumor in 73–95% of patients with GI-NETs (carcinoids) and in 56–100% of patients with pNETs, except insulinomas. Insulinomas are usually small and have low densities of sst receptors, resulting in SRS being positive in only 12–50% of patients with insulinomas. SRS identifies >90–95% of patients with liver metastases due to NETs. Figure 113-3 shows an example of the increased sensitivity of SRS in a patient with a GI-NET (carcinoid) tumor: the CT scan showed a single liver metastasis, whereas the SRS demonstrated three metastases in multiple locations in the liver. Occasional false-positive results with SRS can occur (12% in one study) because numerous other normal tissues as well as diseases can have high densities of sst receptors, including granulomas (sarcoid, tuberculosis, etc.), thyroid diseases (goiter, thyroiditis), and activated lymphocytes (lymphomas, wound infections). If liver metastases are identified by SRS, either CT or MRI (with contrast enhancement) is recommended to assess the size and exact location of the metastases and to plan the proper treatment, because SRS does not provide information on tumor size. For pNETs in the pancreas, EUS is highly sensitive, localizing 77–100% of insulinomas, which occur almost exclusively within the pancreas. Endoscopic ultrasound is less sensitive for extrapancreatic tumors.
It is increasingly used in patients with MEN 1, and to a lesser extent VHL, to detect small pNETs not seen with other modalities or for serial pNET assessments to determine size changes or rapid growth in patients in whom surgery is deferred. EUS with cytologic evaluation also is used frequently to distinguish an NF-pNET from a pancreatic adenocarcinoma or another nonendocrine pancreatic tumor. Not infrequently, patients present with liver metastases due to an NET and the primary site is unclear. Occult small intestinal NETs (carcinoids) are increasingly detected by double-balloon enteroscopy or capsule endoscopy.
FIGURE 113-3 Ability of computed tomography (CT) scanning (top) or somatostatin receptor scintigraphy (SRS) (bottom) to localize metastatic carcinoid in the liver.
Insulinomas frequently overexpress receptors for glucagon-like peptide-1 (GLP-1), and radiolabeled GLP-1 analogues have been developed that can detect occult insulinomas not localized by other imaging modalities. Functional localization by measuring hormonal gradients is now uncommonly used with gastrinomas (after intra-arterial secretin injections) but is still frequently used in insulinoma patients in whom other imaging studies are negative (assessing hepatic vein insulin concentrations after intra-arterial calcium injections). Functional localization measuring hormone gradients in insulinomas or gastrin gradients in gastrinomas is a sensitive method, being positive in 80–100% of patients. The intra-arterial calcium test may also allow differentiation of the cause of the hypoglycemia, indicating whether it is due to an insulinoma or to nesidioblastosis. The latter entity is becoming increasingly important because hypoglycemia after gastric bypass surgery for obesity is increasing in frequency and is primarily due to nesidioblastosis, although it can occasionally be due to an insulinoma. PET and the use of hybrid scanners, such as CT and SRS, may have increased sensitivity.
PET scanning with 18F-fluoro-DOPA in patients with carcinoids or with 11C-5-HTP in patients with pNETs or GI-NETs (carcinoids) has greater sensitivity than cross-sectional imaging studies and may be used increasingly in the future. PET scanning for GI-NETs is not currently approved in the United States. The single most important prognostic factor for survival is the presence of liver metastases (Fig. 113-4).
FIGURE 113-4 Survival (Kaplan-Meier plots) of patients with pancreatic neuroendocrine tumors (pNETs; n = 1072) (A–C) or gastrointestinal neuroendocrine tumors (GI-NETs; carcinoids) (appendix, n = 138; midgut, n = 238) (D–F) stratified according to recently proposed classification and grading systems: ENETS stage (A), UICC/AJCC/WHO 2010 stage (B), ENETS/WHO grade (C), appendiceal NETs (carcinoids) by ENETS pT classification (pT1–2 vs pT3–4, p = 0.0004) (D), appendiceal NETs (carcinoids) by WHO/AJCC pT classification (pT1–2 vs pT3–4, p <0.0001) (E), and midgut NETs (carcinoids) (F). (Panels A–C are drawn from data in G Rindi et al: J Natl Cancer Inst 104:764, 2012; panels D and E are drawn from data in M Volante et al: Am J Surg Pathol 37:606, 2013; and panel F is drawn from data in MS Khan: Br J Cancer 108:1838, 2013.)
For patients with foregut carcinoids without hepatic metastases, the 5-year survival in one study was 95%, and with distant metastases, it was 20% (Fig. 113-4). With gastrinomas, the 5-year survival without liver metastases is 98%; with limited metastases in one hepatic lobe, it is 78%; and with diffuse metastases, 16% (Fig. 113-4). In a large study of 156 patients (67 pNETs, the rest carcinoids), the overall 5-year survival rate was 77%; it was 96% without liver metastases, 73% with liver metastases, and 50% with distant disease.
Another very important prognostic factor is whether the NET is well differentiated (G1/G2) or poorly differentiated (G3; <1% of all NETs). Well-differentiated NETs have a 5-year survival of 50–80%, whereas poorly differentiated NETs have a 5-year survival of only 0–15%. Therefore, treatment for advanced metastatic disease is an important challenge. A number of different modalities are reported to be effective, including cytoreductive surgery (surgically or by radiofrequency ablation [RFA]), treatment with chemotherapy, somatostatin analogues, interferon α, hepatic embolization alone or with chemotherapy (chemoembolization), molecular targeted therapy, radiotherapy with radiolabeled beads/microspheres, peptide radioreceptor therapy (PRRT), and liver transplantation. Cytoreductive surgery is considered if either all of the visible metastatic disease or at least 90% of it is thought resectable; unfortunately, however, this is possible in only the 9–22% of patients who present with limited hepatic metastases. Although no randomized studies have proven that it extends life, results from a number of studies suggest that it may increase survival; therefore, it is recommended, if possible. RFA can be applied to NET liver metastases if they are limited in number (usually fewer than five) and size (usually <3.5 cm in diameter). It can be used at the time of surgery (either open or laparoscopic) or using radiologic guidance. Response rates are >80%, the responses can last up to 3 years, the morbidity rate is low, and this procedure may be particularly helpful in patients with functional pNETs that are difficult to control medically. Although RFA has not been established in a controlled trial, both the European and North American Neuroendocrine Tumor Society guidelines (ENETS, NANETS) state that it can be an effective antitumor treatment for both refractory functional syndromes and palliative treatment.
Chemotherapy plays a different role in the treatment of patients with pNETs and GI-NETs (carcinoids). Chemotherapy continues to be widely used in the treatment of patients with advanced pNETs with moderate success (response rates 20–70%); however, in general, its results in patients with metastatic GI-NETs (carcinoids) have been disappointing, with response rates of 0–30% with various two- and three-drug combinations, and thus, it is infrequently used in these patients. An important distinction in patients with pNETs is whether the tumor is well differentiated (G1/G2) or poorly differentiated (G3). The chemotherapeutic approach is different for these two groups. The current regimen of choice for patients with well-differentiated pNETs is the combination of streptozotocin and doxorubicin with or without 5-fluorouracil. Streptozotocin is a glucosamine nitrosourea compound originally found to have cytotoxic effects on pancreatic islets; in later studies with doxorubicin with or without 5-fluorouracil, it produced response rates of 20–45% in advanced pNETs. Streptozotocin causes considerable morbidity, with 70–100% of patients developing side effects (most prominently nausea/vomiting in 60–100% or leukopenia/thrombocytopenia) and 15–40% of patients developing some degree of renal dysfunction (proteinuria in 40–50%, decreased creatinine clearance). The combination of temozolomide (TMZ) with capecitabine produces partial response rates as high as 70% in patients with advanced pNETs and a 2-year survival of 92%. The use of TMZ or another alkylating agent in advanced pNETs is supported by studies showing low levels of the DNA repair enzyme O6-methylguanine DNA methyltransferase in pNETs, but not in GI-NETs (carcinoids), which increases the sensitivity of pNETs to TMZ.
In poorly differentiated NETs (G3), chemotherapy with a cisplatin-based regimen with etoposide or other agents (vincristine, paclitaxel) is the recommended treatment, with response rates of 40–70%; however, responses are generally short-lived (<12 months). This chemotherapy regimen can be associated with significant toxicity, including GI toxicities (nausea, vomiting), myelosuppression, and renal toxicity. In addition to their effectiveness in controlling the functional hormonal state, long-acting somatostatin analogues such as octreotide and lanreotide are increasingly used for their antiproliferative effects. Whereas somatostatin analogues rarely decrease tumor size (i.e., 0–17%), these drugs have tumoristatic effects, stopping additional growth in 26–95% of patients with NETs. In a randomized, double-blind study in patients with metastatic midgut carcinoids (PROMID study), octreotide-LAR demonstrated a marked lengthening of time to progression (14.3 vs 6 months, p = .000072). This improvement was seen in patients with limited liver involvement. This study did not assess whether such treatment will extend survival. A double-blind, randomized, placebo-controlled, phase III study in patients with well-differentiated, metastatic, inoperable pNETs (45%) or GI-NETs (carcinoids) (55%) (CLARINET study) showed that monthly treatment with lanreotide-autogel reduced tumor progression or death by 53%. Somatostatin analogues can induce apoptosis in GI-NETs (carcinoids), which probably contributes to their tumoristatic effects. Treatment with somatostatin analogues is generally well tolerated, with most side effects being mild and uncommonly leading to stopping the drug. Potential long-term side effects include diabetes/glucose intolerance, steatorrhea, and the development of gallbladder sludge/gallstones (10–80%), although only 1% of patients develop symptomatic gallbladder disease.
Because of these phase III studies, somatostatin analogues are generally recommended as first-line treatment for patients with well-differentiated metastatic NETs. Interferon α, similar to somatostatin analogues, is effective at controlling the hormonal excess symptoms of NETs and has antiproliferative effects in NETs, which primarily result in disease stabilization (30–80%), with a decrease in tumor size in <15% of patients. Interferon can inhibit DNA synthesis, block cell cycle progression in the G1 phase, inhibit protein synthesis, inhibit angiogenesis, and induce apoptosis. Interferon α treatment results in side effects in the majority of patients, with the most frequent being a flu-like syndrome (80–100%), anorexia with weight loss, and fatigue. These side effects frequently decrease in severity with continued treatment, and patients become accommodated to the symptoms. More serious side effects include hepatotoxicity (31%), hyperlipidemia (31%), bone marrow toxicity, thyroid disease (19%), and rarely CNS side effects (depression, mental/visual disorders). The ENETS 2012 guidelines conclude that in patients with well-differentiated NETs that are slowly progressive, interferon α treatment should be considered if the tumor is somatostatin receptor negative or if somatostatin treatment fails. Selective internal radiation therapy (SIRT) using yttrium-90 (90Y) glass or resin microspheres is a relatively new approach being evaluated in patients with unresectable NET liver metastases, with approximately 500 NET patients treated. The treatment requires careful pretreatment evaluation for vascular shunting and an angiogram to evaluate placement of the catheter, and it is generally reserved for patients without extrahepatic metastatic disease and with adequate hepatic reserve.
One of two types of 90Y microspheres is used: either resin microspheres (SIR-Spheres) with a 20- to 60-µm diameter and 50 Bq/sphere or glass microspheres (TheraSpheres) with a 20- to 30-µm diameter and 2500 Bq/sphere. The 90Y-microspheres are delivered to the liver by intra-arterial injection from percutaneously placed catheters. In four studies involving metastatic NETs, the response rate varied from 50 to 61% (partial or complete), tumor stabilization occurred in 22–41%, 60–100% had symptomatic improvement, and overall survival varied from 25 to 70 months. Side effects include postembolization syndrome (pain, fever, nausea/vomiting [frequent]), which is usually mild, although grade 2 (43%) or grade 3 (1%) symptoms can occur; radiation-induced liver disease (<1%); and radiation pneumonitis (<1%). Contraindications to use include excess shunting to the GI tract or lung, inability to isolate the liver arterial supply, and inadequate liver reserve. Because of the limited data available, the ENETS 2012 guidelines consider treatment with SIRT experimental. Molecular targeted medical treatment with either an mTOR inhibitor (everolimus) or a tyrosine kinase inhibitor (sunitinib) is now approved treatment in the United States and Europe for patients with metastatic unresectable pNETs, each supported by a phase III, double-blind, prospective, placebo-controlled trial. mTOR is a serine-threonine kinase that plays an important role in proliferation, cell growth, and apoptosis in both normal and neoplastic cells. Activation of the mTOR cascade is important in mediating NET cell growth, especially in pNETs. A number of mTOR inhibitors have shown promising antitumor activity in NETs, including everolimus and temsirolimus, with the former undergoing a phase III trial (RADIANT-3) involving 410 patients with advanced progressive pNETs.
Everolimus caused significant improvement in progression-free survival (11 vs 4.6 months, p <.001) and increased by a factor of 3.7 the proportion of patients progression-free at 18 months (37% vs 9%). Everolimus treatment was associated with frequent side effects, causing a twofold increase in adverse events, with the most frequent being grade 1 or 2. Grade 3 or 4 side effects included hematologic toxicity, diarrhea, stomatitis, and hypoglycemia, each occurring in 3–7% of patients. Most grade 3 or 4 side effects were controlled by dose reduction or drug interruption. The ENETS 2012 guidelines conclude that everolimus, similar to sunitinib (below), should be considered as a first-line treatment in selected cases of well-differentiated pNETs that are unresectable. NETs, like other normal and neoplastic cells, frequently possess multiple types of the 20 known tyrosine kinase (TK) receptors, which mediate the action of different growth factors. Numerous studies demonstrate that TK receptors in normal and neoplastic tissues as well as NETs are especially important in mediating cell growth, angiogenesis, differentiation, and apoptosis. Whereas a number of TK inhibitors show antiproliferative activity in NETs, only sunitinib has undergone a phase III controlled trial. Sunitinib is an orally active small-molecule inhibitor of TK receptors (PDGFRs, VEGFR-1, VEGFR-2, c-KIT, FLT-3). In a phase III study in which 171 patients with progressive, metastatic, nonresectable pNETs were treated with sunitinib (37.5 mg/d) or placebo, sunitinib treatment caused a doubling of progression-free survival (11.4 vs 4.5 months, p <.001), an increase in objective tumor response rate (9% vs 0%, p = .007), and an increase in overall survival. Sunitinib treatment was associated with an overall threefold increase in side effects, although most were grade 1 or 2.
The most frequent grade 3 or 4 side effects were neutropenia (12%) and hypertension (9.6%), which were controlled by dose reduction or temporary interruption. There is no consensus regarding the order in which sunitinib and everolimus should be used in patients with advanced, well-differentiated, progressive pNETs. PRRT for NETs involves treatment with radiolabeled somatostatin analogues. The success of this approach is based on the finding that somatostatin receptors (sst) are overexpressed or ectopically expressed by 60–100% of all NETs, which allows the targeting of cytotoxic, radiolabeled somatostatin receptor ligands. Three different radionuclides are in clinical studies: high doses of [111In-DTPA-D-Phe1]octreotide, which emits γ-rays, internal conversion electrons, and Auger electrons; 90yttrium, which emits high-energy β-particles, coupled by a DOTA chelating group to octreotide or octreotate; and 177lutetium-coupled analogues, which emit both. At present, the 177lutetium-coupled analogues are the most widely used. 111Indium-, 90yttrium-, and 177lutetium-labeled compounds caused tumor stabilization in 41–81%, 44–88%, and 23–40%, respectively, and a decrease in tumor size in 8–30%, 6–37%, and 38%, respectively, of patients with advanced metastatic NETs. In one large study involving 504 patients with malignant NETs, 177lutetium-labeled analogues produced a reduction of tumor size of >50% in 30% of patients (2% complete) and tumor stabilization in 51% of patients. An effect on survival has not been established. At present, PRRT is not approved for use in either the United States or Europe, but because of the above promising results, a large phase III study is now being conducted in both the United States and Europe. The ENETS 2012, NANETS 2010, Nordic 2010, and European Society for Medical Oncology (ESMO) guidelines list PRRT as an experimental or investigational treatment at present.
The use of liver transplantation has been abandoned for treatment of most metastatic tumors to the liver. However, for metastatic NETs, it is still a consideration. Among 213 European patients with NETs (50% functional NETs) who had liver transplantation from 1982 to 2009, the overall 5-year survival was 52% and disease-free survival was 30%. In various studies, the postoperative mortality rate is 10–14%. These results are similar to the United Network for Organ Sharing data in the United States, in which 150 NET patients had liver transplants and the 5-year survival was 49%. In various studies, important prognostic factors for a poor outcome include a major resection performed concurrently with the liver transplant; poor tumor differentiation; hepatomegaly; age >45 years; a primary NET in the duodenum or pancreas; the presence of extrahepatic metastatic disease or extensive liver involvement (>50%); Ki-67 proliferative index >10%; and abnormal E-cadherin staining. The ENETS 2012 guidelines conclude that liver transplantation should be viewed as providing palliative care, with cure an exception, and recommend it be reserved for patients with life-threatening hormonal disturbances refractory to other treatments or for selected patients with a nonfunctional tumor with diffuse liver metastatic disease refractory to all other treatments.
114 Bladder and Renal Cell Carcinomas
Howard I. Scher, Jonathan E. Rosenberg, Robert J. Motzer
Transitional cell epithelium lines the urinary tract from the renal pelvis to the ureter, urinary bladder, and the proximal two-thirds of the urethra. Cancers can occur at any point: 90% of malignancies develop in the bladder, 8% in the renal pelvis, and 2% in the ureter or urethra. Bladder cancer is the fourth most common cancer in men and the thirteenth in women, with an estimated 72,570 new cases and 15,210 deaths in the United States predicted for the year 2013.
The almost 5:1 ratio of incidence to mortality reflects the higher frequency of the less lethal superficial variants compared to the more lethal invasive and metastatic variants. The incidence is roughly four times higher in men than in women and twofold higher in white men than in black men, with a median age at diagnosis of 65 years. Once diagnosed, urothelial tumors exhibit polychronotropism, the tendency to recur over time in new locations in the urothelial tract. As long as urothelium is present, continuous monitoring is required. Cigarette smoking is believed to contribute to up to 50% of urothelial cancers in men and nearly 40% in women. The risk of developing a urothelial cancer in male smokers is increased two- to fourfold relative to nonsmokers and persists for 10 years or longer after cessation. Other implicated agents include aniline dyes, the drugs phenacetin and chlornaphazine, and external beam radiation. Chronic cyclophosphamide exposure also increases risk, whereas vitamin A supplements appear to be protective. Exposure to Schistosoma haematobium, a parasite found in many developing countries, is associated with an increase in both squamous and transitional cell carcinomas of the bladder. Clinical subtypes are grouped into three categories: 75% are superficial, 20% invade muscle, and 5% are metastatic at presentation. Staging of the tumor within the bladder is based on the pattern of growth and depth of invasion. The revised tumor, node, metastasis (TNM) staging system is illustrated in Fig. 114-1. About half of invasive tumors presented originally as superficial lesions that later progressed. Tumors are also rated by grade. Low-grade (well-differentiated) tumors rarely progress to a higher stage, whereas high-grade tumors do. More than 95% of urothelial tumors in the United States are transitional cell in origin.
Pure squamous cancers with keratinization constitute 3%, adenocarcinomas 2%, and small cell tumors (often with paraneoplastic syndromes) <1%. Adenocarcinomas develop primarily in the urachal remnant in the dome of the bladder or in the periurethral tissues. Paragangliomas, lymphomas, and melanomas are rare. Of the transitional cell tumors, low-grade papillary lesions that grow on a central stalk are most common. These tumors are very friable, have a tendency to bleed, and have a high risk for recurrence, yet they rarely progress to the more lethal invasive variety. In contrast, carcinoma in situ (CIS) is a high-grade tumor that is considered a precursor of the more lethal muscle-invasive disease. The multicentric nature of the disease and its high recurrence rate suggest a field effect in the urothelium that results in a predisposition to develop cancer. Molecular genetic analyses suggest that the superficial and invasive lesions develop along distinct molecular pathways. Low-grade noninvasive papillary tumors harbor constitutive activation of the receptor tyrosine kinase-Ras signal transduction pathway and high frequencies of fibroblast growth factor receptor 3 and phosphoinositide-3-kinase α subunit mutations. In contrast, CIS and invasive tumors have a higher frequency of TP53 and RB gene alterations. Within all clinical stages, including Tis, T1, and T2 or greater lesions, tumors with alterations in p53, p21, and/or RB have a higher probability of recurrence, metastasis, and death from disease.
CLINICAL PRESENTATION, DIAGNOSIS, AND STAGING
Hematuria occurs in 80–90% of patients and often reflects exophytic tumors. The bladder is the most common source of gross hematuria (40%), but benign cystitis (22%) is a more common cause than bladder cancer (15%) (Chap. 61). Microscopic hematuria is more commonly of prostate origin (25%); only 2% of bladder cancers produce microscopic hematuria.
Once hematuria is documented, a urinary cytology, visualization of the urothelial tract by computed tomography (CT) or magnetic resonance urogram or intravenous pyelogram, and cystoscopy are recommended if no other etiology is found. Screening asymptomatic individuals for hematuria increases the diagnosis of tumors at an early stage but has not been shown to prolong life. After hematuria, irritative symptoms are the next most common presentation. Ureteral obstruction may cause flank pain. Symptoms of metastatic disease are rarely the first presenting sign. The endoscopic evaluation includes an examination under anesthesia to determine whether a palpable mass is present. A flexible endoscope is inserted into the bladder, and bladder barbotage for cytology is performed. Visual inspection includes mapping the location, size, and number of lesions, as well as a description of the growth pattern (solid vs papillary). All visible tumors should be resected, and a sample of the muscle underlying the tumor should be obtained to assess the depth of invasion. Normal-appearing areas are biopsied at random to ensure no CIS is present. A notation is made as to whether a tumor was completely or incompletely resected. Selective catheterization and visualization of the upper tracts should be performed if the cytology is positive and no disease is visible in the bladder. Ultrasonography, CT, and/or magnetic resonance imaging (MRI) are used to determine whether a tumor extends to perivesical fat (T3) and to document nodal spread. Distant metastases are assessed by CT of the chest and abdomen, MRI, or radionuclide imaging of the skeleton.
FIGURE 114-1 Bladder staging. TNM, tumor, node, metastasis.
Management depends on whether the tumor invades muscle and whether it has spread to the regional lymph nodes and beyond. The probability of spread increases with increasing T stage. At a minimum, the management is complete endoscopic resection with or without intravesical therapy. The decision to recommend intravesical therapy depends on the histologic subtype, number of lesions, depth of invasion, presence or absence of CIS, and antecedent history. Recurrences develop in up to 50% of cases, of which 5–20% progress to a more advanced stage. In general, solitary papillary lesions are managed by transurethral surgery alone. CIS and recurrent disease are treated by transurethral surgery followed by intravesical therapy. Intravesical therapies are used in two general contexts: as an adjuvant to a complete endoscopic resection to prevent recurrence, or to eliminate disease that cannot be controlled by endoscopic resection alone. Intravesical treatments are advised for patients with diffuse CIS, recurrent disease, >40% involvement of the bladder surface by tumor, or T1 disease. The standard therapy, based on randomized comparisons, is Bacillus Calmette-Guérin (BCG) in six weekly instillations, often followed by maintenance administrations for ≥1 year. Other agents with activity include mitomycin C, interferon, and gemcitabine. The side effects of intravesical therapies include dysuria, urinary frequency, and, depending on the drug, myelosuppression or contact dermatitis. Rarely, intravesical BCG may produce a systemic illness associated with granulomatous infections in multiple sites, requiring antituberculous therapy. Following endoscopic resection, patients are monitored for recurrence at 3-month intervals during the first year. Recurrence may develop anywhere along the urothelial tract, including the renal pelvis, ureter, or urethra.
Persistent disease in the bladder and new tumors are treated with a second course of BCG or intravesical chemotherapy with valrubicin or gemcitabine. In some cases, cystectomy is recommended. Tumors in the ureter or renal pelvis are typically managed by resection during retrograde examination or, in some cases, by instillation through the renal pelvis. Prostatic urethral tumors may require cystoprostatectomy if the tumor cannot be resected completely. The treatment of a tumor that has invaded muscle can be separated into control of the primary tumor and systemic chemotherapy to treat micrometastatic disease. Radical cystectomy is the standard treatment in the United States, although in selected cases, a bladder-sparing approach is used. This approach includes complete endoscopic resection; partial cystectomy; or a combination of resection, systemic chemotherapy, and external beam radiation therapy. In some countries, external beam radiation therapy is considered standard. In the United States, it is generally limited to those patients deemed unfit for cystectomy, those with unresectable local disease, or as part of an experimental bladder-sparing approach. Indications for cystectomy include muscle-invading tumors not suitable for segmental resection; non–muscle-invasive tumors unsuitable for conservative management (e.g., due to multicentric and frequent recurrences resistant to intravesical instillations); high-grade T1 tumors especially if associated with CIS; and bladder symptoms (e.g., frequency or hemorrhage) that impair quality of life. Radical cystectomy is major surgery that requires appropriate preoperative evaluation and management. It involves removal of the bladder and pelvic lymph nodes and creation of a conduit or reservoir for urinary flow. Grossly abnormal lymph nodes are evaluated by frozen section. If metastases are confirmed, the procedure is often aborted. 
In males, radical cystectomy includes the removal of the prostate, seminal vesicles, and proximal urethra. Impotence is universal unless the nerves responsible for erectile function are preserved. In females, the procedure includes removal of the bladder, urethra, uterus, fallopian tubes, ovaries, anterior vaginal wall, and surrounding fascia. Several options are frequently used for urinary diversion. Ileal conduits bring urine directly from the ureter to the abdominal wall. Some patients receive either a continent cutaneous reservoir constructed from detubularized bowel or an orthotopic neobladder. Approximately 25% of men receive a neobladder, leading to 85–90% continence during the day. Cutaneous reservoirs are drained by intermittent catheterization. Contraindications to a neobladder include renal insufficiency, an inability to self-catheterize, or CIS or an exophytic tumor in the urethra. Diffuse CIS in the bladder is a relative contraindication based on the risk of a urethral recurrence. Concurrent ulcerative colitis or Crohn’s disease may hinder the use of bowel. A partial cystectomy may be considered when the disease is limited to the dome of the bladder, a ≥2 cm margin can be achieved, there is no associated CIS, and the bladder capacity is adequate after resection. This occurs in 5–10% of cases. Carcinomas in the ureter or in the renal pelvis are treated with nephroureterectomy with a bladder cuff to remove the tumor. The probability of recurrence following surgery is based on pathologic stage, presence or absence of lymphatic or vascular invasion, and nodal spread. Among those whose cancers recur, the recurrence develops in a median of 1 year. Long-term outcomes vary by pathologic stage and histology (Table 114-1). The number of lymph nodes removed is also prognostic, whether or not the nodes contained tumor. 
Chemotherapy (described below) has been shown to prolong the survival of patients with muscle-invasive disease when combined with definitive treatment of the bladder by radical cystectomy or radiation therapy. Presurgical (or neoadjuvant) chemotherapy has been the most thoroughly explored and increases the cure rate by 5–15%, whereas postsurgical (adjuvant) chemotherapy has not been proven definitively beneficial. For the majority of patients, chemotherapy alone is inadequate to eradicate the disease. Use of neoadjuvant chemotherapy is increasing, although it remains underused. Experimental studies are evaluating bladder preservation strategies by combining chemotherapy and radiation therapy in patients whose tumors were endoscopically removed. The primary goal of metastatic disease treatment is to achieve complete remission with chemotherapy alone or with a combined-modality approach of chemotherapy followed by surgical resection of residual disease. One can define a goal in terms of cure or palliation on the basis of the probability of achieving a complete response to chemotherapy using prognostic factors, such as Karnofsky performance status (KPS) (<80%) and whether the pattern of spread is nodal or visceral (liver, lung, or bone). For those with zero, one, or two risk factors, the probability of complete remission is 38%, 25%, and 5%, respectively, and median survival is 33, 13.4, and 9.3 months, respectively. Patients who have a low KPS or who have visceral disease or bone metastases rarely achieve long-term survival. The toxicities also vary as a function of risk, and treatment-related mortality rates are as high as 3–4% using some combinations in these poor-risk patient groups. For most patients, treatment is palliative, aimed at delaying or relieving cancer-related symptoms, because few patients experience durable complete remissions.
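The risk stratification described above is, in effect, a two-factor lookup. As a purely illustrative sketch, assuming the two adverse factors and outcome figures quoted above (the function name and structure are hypothetical, not a validated clinical tool), it might be written as:

```python
# Illustrative sketch only, not a validated clinical tool. The two adverse
# risk factors are those named in the text: Karnofsky performance status
# <80% and visceral (liver, lung, or bone) spread. Outcome figures are the
# ones quoted above.

def metastatic_bladder_risk(kps_below_80: bool, visceral_spread: bool):
    """Map 0, 1, or 2 adverse risk factors to the quoted outcomes:
    (complete-remission probability in %, median survival in months)."""
    n_factors = int(kps_below_80) + int(visceral_spread)
    outcomes = {
        0: (38, 33.0),
        1: (25, 13.4),
        2: (5, 9.3),
    }
    return outcomes[n_factors]
```

For example, a patient with preserved performance status but visceral spread (one factor) maps to a 25% complete-remission probability and a median survival of 13.4 months.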
A number of chemotherapeutic drugs have activity as single agents; cisplatin, paclitaxel, and gemcitabine are considered most active. Standard therapy consists of two-, three-, or four-drug combinations. Overall response rates of >50% have been reported using combinations such as methotrexate, vinblastine, doxorubicin, and cisplatin (MVAC); gemcitabine and cisplatin (GC); or gemcitabine, paclitaxel, and cisplatin (GPC). MVAC was considered standard, but the toxicities of neutropenia and fever, mucositis, diminished renal and auditory function, and peripheral neuropathy led to the development of alternative regimens. At present, GC is used more commonly than MVAC based on the results of a comparative trial of MVAC versus GC that showed less neutropenia and fever and less mucositis for the GC regimen, with similar response rates and median overall survival. Anemia and thrombocytopenia were more common with GC. GPC is not more effective than GC. Chemotherapy has also been tested in the neoadjuvant and adjuvant settings. In a randomized trial, patients receiving three cycles of neoadjuvant MVAC followed by cystectomy had a significantly better median survival (6.2 years) and 5-year survival (57%) compared to cystectomy alone (median survival 3.8 years; 5-year survival 42%). Similar results were obtained in an international study of three cycles of cisplatin, methotrexate, and vinblastine (CMV) followed by either radical cystectomy or radiation therapy. The decision to administer adjuvant therapy is based on recurrence risk after cystectomy. Studies of adjuvant chemotherapy have been underpowered, and most closed for lack of accrual. One underpowered study using the GPC regimen suggested that adjuvant treatment improved survival, although many patients never received chemotherapy for metastases. Another underpowered study did not show a benefit for GC chemotherapy. Therefore, preoperative chemotherapy is preferred when medically appropriate.
Indications for adjuvant chemotherapy in patients who did not receive neoadjuvant treatment include nodal disease, extravesical tumor extension, or vascular invasion in the resected specimen. The management of bladder cancer is summarized in Table 114-2.
TABLE 114-1 Survival Following Surgery for Bladder Cancer (pathologic stage; 5- and 10-year survival, %)
TABLE 114-2 Management of Bladder Cancer (nature of lesion; management approach)
About 5000 cases of renal pelvis and ureter cancer occur each year; nearly all are transitional cell carcinomas similar to bladder cancer in biology and appearance. This tumor is associated with chronic phenacetin abuse and with aristolochic acid consumption in Chinese herbal preparations; aristolochic acid also seems to be associated with Balkan nephropathy, a chronic interstitial nephritis endemic in Bulgaria, Greece, Bosnia-Herzegovina, and Romania. In addition, upper tract urothelial carcinoma is linked to hereditary nonpolyposis colorectal cancer. The most common symptom is painless gross hematuria, and the disease is usually detected on imaging during the workup for hematuria. Patterns of spread are like bladder cancer. For low-grade disease localized to the renal pelvis and ureter, nephroureterectomy (including excision of the distal ureter with a portion of the bladder) is associated with 5-year survival of 80–90%. More invasive or poorly differentiated tumors are more likely to recur locally and to metastasize. Metastatic disease is treated with the chemotherapy used in bladder cancer, and the outcome is similar to that of metastatic bladder cancer. Renal cell carcinomas account for 90–95% of malignant neoplasms arising from the kidney.
Notable features include resistance to cytotoxic agents, infrequent responses to biologic response modifiers such as interleukin (IL) 2, robust responses to antiangiogenic targeted agents, and a variable clinical course for patients with metastatic disease, including anecdotal reports of spontaneous regression. The incidence of renal cell carcinoma continues to rise and is now nearly 65,000 cases annually in the United States, resulting in 13,700 deaths. The male-to-female ratio is 2:1. Incidence peaks between the ages of 50 and 70 years, although this malignancy may be diagnosed at any age. Many environmental factors have been investigated as possible contributing causes; the strongest association is with cigarette smoking. Risk is also increased for patients who have acquired cystic disease of the kidney associated with end-stage renal disease and for those with tuberous sclerosis. Most cases are sporadic, although familial forms have been reported. One is associated with von Hippel-Lindau (VHL) syndrome, an autosomal dominant disorder. Genetic studies identified the VHL gene on the short arm of chromosome 3. Approximately 35% of individuals with VHL disease develop clear cell renal cell carcinoma. Other associated neoplasms include retinal hemangioma, hemangioblastoma of the spinal cord and cerebellum, pheochromocytoma, neuroendocrine tumors and cysts, and cysts of the epididymis in men and the broad ligament in women. Renal cell neoplasia represents a heterogeneous group of tumors with distinct histopathologic, genetic, and clinical features ranging from benign to high-grade malignant (Table 114-3). They are classified on the basis of morphology and histology. Categories include clear cell carcinoma (60% of cases), papillary tumors (5–15%), chromophobe tumors (5–10%), oncocytomas (5–10%), and collecting or Bellini duct tumors (<1%). Papillary tumors tend to be bilateral and multifocal.
Chromophobe tumors have a more indolent clinical course, and oncocytomas are considered benign neoplasms. In contrast, Bellini duct carcinomas, which are thought to arise from the collecting ducts within the renal medulla, are rare but often very aggressive. Clear cell tumors, the predominant histology, are found in >80% of patients who develop metastases.
TABLE 114-3
Carcinoma Type   Growth Pattern                   Cell of Origin                            Cytogenetics
Clear cell       Acinar or sarcomatoid            Proximal tubule                           3p-, 5q+, 14q-
Papillary        Papillary or sarcomatoid         Proximal tubule                           +7, +17, -Y
Chromophobe      Solid, tubular, or sarcomatoid   Distal tubules/cortical collecting duct   Whole-arm losses (1, 2, 6, 10, 13, 17, and 21)
Clear cell tumors arise from the epithelial cells of the proximal tubules and usually show chromosome 3p deletions. Deletions of 3p21–26 (where the VHL gene maps) are identified in patients with familial as well as sporadic tumors. VHL encodes a tumor suppressor protein that is involved in regulating the transcription of vascular endothelial growth factor (VEGF), platelet-derived growth factor (PDGF), and a number of other hypoxia-inducible proteins. Inactivation of VHL leads to overexpression of these agonists of the VEGF and PDGF receptors, which promote tumor angiogenesis and tumor growth. Agents that inhibit proangiogenic growth factor activity show antitumor effects. Enormous genetic variability has been documented in tumors from individual patients. Although the tumors have a clear clonal origin and often contain VHL mutations in common, different portions of the primary tumor and different metastatic sites may show wide variation in the genetic lesions they contain. This tumor heterogeneity may underlie the emergence of treatment resistance. The presenting signs and symptoms include hematuria, abdominal pain, and a flank or abdominal mass. Other symptoms are fever, weight loss, anemia, and a varicocele. The tumor is most commonly detected as an incidental finding on a radiograph.
Widespread use of radiologic cross-sectional imaging procedures (CT, ultrasound, MRI) contributes to earlier detection, including incidental renal masses detected during evaluation for other medical conditions. The increasing number of incidentally discovered low-stage tumors has contributed to an improved 5-year survival for patients with renal cell carcinoma and increased use of nephron-sparing surgery (partial nephrectomy). A spectrum of paraneoplastic syndromes has been associated with these malignancies, including erythrocytosis, hypercalcemia, nonmetastatic hepatic dysfunction (Stauffer’s syndrome), and acquired dysfibrinogenemia. Erythrocytosis is noted at presentation in only about 3% of patients. Anemia, a sign of advanced disease, is more common. The standard evaluation of patients with suspected renal cell tumors includes a CT scan of the abdomen and pelvis, chest radiograph, urine analysis, and urine cytology. If metastatic disease is suspected from the chest radiograph, a CT of the chest is warranted. MRI is useful in evaluating the inferior vena cava in cases of suspected tumor involvement or invasion by thrombus. In clinical practice, any solid renal mass should be considered malignant until proven otherwise; a definitive diagnosis is required. If no metastases are demonstrated, surgery is indicated, even if the renal vein is invaded. The differential diagnosis of a renal mass includes cysts, benign neoplasms (adenoma, angiomyolipoma, oncocytoma), inflammatory lesions (pyelonephritis or abscesses), and other primary or metastatic cancers. Other malignancies that may involve the kidney include transitional cell carcinoma of the renal pelvis, sarcoma, lymphoma, and Wilms’ tumor. All of these are less common causes of renal masses than is renal cell cancer. Staging is based on the American Joint Committee on Cancer (AJCC) staging system (Fig. 114-2).
Stage I tumors are <7 cm in greatest diameter and confined to the kidney, stage II tumors are ≥7 cm and confined to the kidney, stage III tumors extend through the renal capsule but are confined to Gerota’s fascia (IIIa) or involve a single hilar lymph node (N1), and stage IV disease includes tumors that have invaded adjacent organs (excluding the adrenal gland) or involve multiple lymph nodes or distant metastases. The 5-year survival rate varies by stage: >90% for stage I, 85% for stage II, 60% for stage III, and 10% for stage IV.
FIGURE 114-2 Renal cell carcinoma staging. TNM, tumor, node, metastasis.
The standard management of localized tumors is radical nephrectomy, which involves en bloc removal of Gerota’s fascia and its contents, including the kidney, the ipsilateral adrenal gland in some cases, and adjacent hilar lymph nodes. The role of a regional lymphadenectomy is controversial. Extension into the renal vein or inferior vena cava (stage III disease) does not preclude resection even if cardiopulmonary bypass is required. If the tumor is resected, half of these patients have prolonged survival. Nephron-sparing approaches via open or laparoscopic surgery may be appropriate for patients who have only one kidney, depending on the size and location of the lesion. A nephron-sparing approach can also be used for patients with bilateral tumors. Partial nephrectomy techniques are applied electively to resect small masses for patients with a normal contralateral kidney. Adjuvant therapy following this surgery does not improve outcome, even in cases with a poor prognosis. Surgery has a limited role for patients with metastatic disease. Long-term survival may occur in patients who relapse after nephrectomy in a solitary site that is removed. One indication for nephrectomy with metastases at initial presentation is to alleviate pain or hemorrhage of a primary tumor. Also, a cytoreductive nephrectomy before systemic treatment improves survival for carefully selected patients with stage IV tumors. Metastatic renal cell carcinoma is refractory to chemotherapy.
Cytokine therapy with IL-2 or interferon α (IFN-α) produces regression in 10–20% of patients. IL-2 produces durable complete remission in a small proportion of cases. In general, cytokine therapy is considered unsatisfactory for most patients. The situation changed dramatically when two large-scale randomized trials established a role for antiangiogenic therapy, as predicted by the genetic studies. These trials separately evaluated two orally administered antiangiogenic agents, sorafenib and sunitinib, which inhibit receptor tyrosine kinase signaling through the VEGF and PDGF receptors. Both showed efficacy as second-line treatment following progression during cytokine treatment, resulting in approval by regulatory authorities for the treatment of advanced renal cell carcinoma. A randomized phase III trial comparing sunitinib to IFN-α showed superior efficacy for sunitinib with an acceptable safety profile. The trial resulted in a change in the standard first-line treatment from IFN-α to sunitinib. Sunitinib is usually given orally at a dose of 50 mg/d for 4 out of every 6 weeks. Pazopanib and axitinib are newer agents of the same class. Pazopanib was compared to sunitinib in a randomized first-line phase III trial. Efficacy was similar, and pazopanib produced less fatigue and skin toxicity, resulting in better quality-of-life scores than sunitinib. Temsirolimus and everolimus, inhibitors of the mammalian target of rapamycin (mTOR), show activity in patients with untreated poor-prognosis tumors and in sunitinib/sorafenib-refractory tumors. Patients benefit from the sequential use of axitinib and everolimus following progression on first-line sunitinib or pazopanib. The prognosis of metastatic renal cell carcinoma is variable. In one analysis, no prior nephrectomy, a KPS <80, low hemoglobin, high corrected calcium, and abnormal lactate dehydrogenase were poor prognostic factors.
Patients with zero, one or two, and three or more factors had a median survival of 24, 12, and 5 months, respectively. These tumors may follow an unpredictable and protracted clinical course. It may be best to document progression before considering systemic treatment.
115 Benign and Malignant Diseases of the Prostate
Howard I. Scher, James A. Eastham
Benign and malignant changes in the prostate increase with age. Autopsies of men in the eighth decade of life show hyperplastic changes in >90% and malignant changes in >70% of individuals. The high prevalence of these diseases among the elderly, who often have competing causes of morbidity and mortality, mandates a risk-adapted approach to diagnosis and treatment. This can be achieved by considering these diseases as a series of states. Each state represents a distinct clinical milestone for which therapy(ies) may be recommended based on current symptoms, the risk of developing symptoms, or death from disease in relation to death from other causes within a given time frame. For benign proliferative disorders, symptoms of urinary frequency, infection, and potential for obstruction are weighed against the side effects and complications of medical or surgical intervention. For prostate malignancies, the risks of developing the disease, symptoms, or death from cancer are balanced against the morbidities of the recommended treatments and preexisting comorbidities. The prostate is located in the pelvis and is surrounded by the rectum, the bladder, the periprostatic and dorsal vein complexes and neurovascular bundles that are responsible for erectile function, and the urinary sphincter that is responsible for passive urinary control. The prostate is composed of branching tubuloalveolar glands arranged in lobules surrounded by fibromuscular stroma.
The acinar unit includes an epithelial compartment made up of epithelial, basal, and neuroendocrine cells and separated by a basement membrane, and a stromal compartment that includes fibroblasts and smooth-muscle cells. Prostate-specific antigen (PSA) and prostatic acid phosphatase (PAP) are produced in the epithelial cells. Both prostate epithelial cells and stromal cells express androgen receptors (ARs) and depend on androgens for growth. Testosterone, the major circulating androgen, is converted by the enzyme 5α-reductase to dihydrotestosterone in the gland. The periurethral portion of the gland increases in size during puberty and after the age of 55 years due to the growth of nonmalignant cells in the transition zone of the prostate that surrounds the urethra. Most cancers develop in the peripheral zone, and cancers in this location may be palpated during a digital rectal examination (DRE). In 2013, approximately 238,590 prostate cancer cases were diagnosed, and 29,720 men died from prostate cancer in the United States. The absolute number of prostate cancer deaths has decreased in the past 5 years, which has been attributed by some to the widespread use of PSA-based detection strategies. However, the benefit of screening on survival is unclear. The paradox of management is that although 1 in 6 men will eventually be diagnosed with the disease, and the disease remains the second leading cause of cancer deaths in men, only 1 man in 30 with prostate cancer will die of his disease. Epidemiologic studies show that the risk of being diagnosed with prostate cancer increases by a factor of two if one first-degree relative is affected and by four if two or more are affected. Current estimates are that 40% of early-onset and 5–10% of all prostate cancers are hereditary. Prostate cancer affects ethnic groups differently. 
Matched for age, African-American males have a higher incidence of prostate cancer, larger tumors, and more worrisome histologic features than white males. Polymorphic variants of the AR, the cytochrome P450 C17, and the steroid 5α-reductase type II (SRD5A2) genes have been implicated in the variations in incidence. The prevalence of autopsy-detected cancers is similar around the world, while the incidence of clinical disease varies; thus, environmental and dietary factors may play a role in prostate cancer growth and progression. High consumption of dietary fats, such as α-linolenic acid, or of the polycyclic aromatic hydrocarbons that form when red meats are cooked is believed to increase risk. Similar to breast cancer in Asian women, the risk of prostate cancer in Asian men increases when they move to Western environments. Protective factors include consumption of the isoflavonoid genistein (which inhibits 5α-reductase) found in many legumes, cruciferous vegetables that contain the isothiocyanate sulforaphane, carotenoids such as lycopene found in tomatoes, and inhibitors of cholesterol biosynthesis (e.g., statin drugs). The development of prostate cancer is a multistep process. One early change is hypermethylation of the GSTP1 gene promoter, which leads to loss of function of a gene that detoxifies carcinogens. The finding that many prostate cancers develop adjacent to a lesion termed proliferative inflammatory atrophy (PIA) suggests a role for inflammation. Currently, no drugs or dietary supplements are approved by the U.S. Food and Drug Administration (FDA) for prevention of prostate cancer, nor are any recommended by the major clinical guidelines. Although statins may have some protective effect, the potential risks outweigh the benefits given the small number of men who die of prostate cancer.
The results from several large, double-blind, randomized chemoprevention trials established 5α-reductase inhibitors (5ARIs) as the therapy most likely to reduce the future risk of a prostate cancer diagnosis. The Prostate Cancer Prevention Trial (PCPT), in which men older than age 55 years received placebo or the 5ARI finasteride, which inhibits the type 2 isoform of the enzyme, showed a 25% (95% confidence interval 19–31%) reduction in the period prevalence of prostate cancer across all age groups in favor of finasteride (18.4%) over placebo (24.4%). In the Reduction by Dutasteride of Prostate Cancer Events (REDUCE) trial, a similar 23% reduction in the 4-year period prevalence was observed in favor of dutasteride (p = .001); dutasteride inhibits both the type 1 and type 2 isoforms of 5α-reductase. While both studies met their endpoints, there was concern that most of the cancers prevented were low risk and that there was a slightly higher rate of clinically significant cancers (those with higher Gleason score) in the treatment arm. Neither drug has been FDA-approved for prostate cancer prevention. In comparison, the Selenium and Vitamin E Cancer Prevention Trial (SELECT), which enrolled African-American men age ≥50 years and others age ≥55 years, showed no difference in cancer incidence in patients receiving vitamin E (4.6%) or selenium (4.9%) alone or in combination (4.6%) relative to placebo (4.4%). A similar lack of benefit for vitamin E, vitamin C, and selenium was seen in the Physicians’ Health Study II. The prostate cancer continuum—from the appearance of a preneoplastic and invasive lesion localized to the prostate, to a metastatic lesion that results in symptoms and, ultimately, mortality—can span decades. To facilitate disease management, competing risks are considered in the context of a series of clinical states (Fig. 115-1).
The states are defined operationally on the basis of whether or not a cancer diagnosis has been established and, for those with a diagnosis, whether or not metastases are detectable on imaging studies and the measured level of testosterone in the blood. With this approach, an individual resides in only one state and remains in that state until he has progressed. At each assessment, the decision to offer treatment and the specific form of treatment are based on the risk posed by the cancer relative to competing causes of mortality that may be present in that individual. It follows that the more advanced the disease, the greater is the need for treatment. For those without a cancer diagnosis, the decision to undergo testing to detect a cancer is based on the individual’s estimated life expectancy and, separately, the probability that a clinically significant cancer may be present. For those with a prostate cancer diagnosis, the clinical states model considers the probability of developing symptoms or dying from prostate cancer. Thus, a patient with localized prostate cancer who has had all cancer removed surgically remains in the state of localized disease as long as the PSA remains undetectable. The time within a state becomes a measure of the efficacy of an intervention, although the effect may not be assessable for years. Because many men with active cancer are not at risk for metastases, symptoms, or death, the clinical states model allows a distinction between cure—the elimination of all cancer cells, the primary therapeutic objective when treating most cancers—and cancer control, in which the tempo of the illness is altered and symptoms are controlled until the patient dies of other causes. These can be equivalent therapeutically from a patient standpoint if the patient has not experienced symptoms of the disease or the treatment needed to control it. Even when a recurrence is documented, immediate therapy is not always necessary. 
Rather, as at the time of diagnosis, the need for intervention is based on the tempo of the illness as it unfolds in the individual, relative to the risk-to-benefit ratio of the therapy being considered. SCREENING AND DIAGNOSIS Physical Examination The need to pursue a diagnosis of prostate cancer is based on symptoms, an abnormal DRE, or, more typically, a change in or an elevated serum PSA. The urologic history should focus on symptoms of outlet obstruction, continence, potency, or change in ejaculatory pattern. The DRE focuses on prostate size and consistency and on abnormalities within or beyond the gland. Many cancers occur in the peripheral zone and may be palpated on DRE. Carcinomas are characteristically hard, nodular, and irregular, while induration may also be due to benign prostatic hypertrophy (BPH) or calculi. Overall, 20–25% of men with an abnormal DRE have cancer. FIGURE 115-1 Clinical states of prostate cancer: no cancer diagnosis; clinically localized disease; rising PSA, no visible metastases, noncastrate; rising PSA, no visible metastases, castrate; clinical metastases, noncastrate. In the most advanced states, death from cancer exceeds death from other causes. PSA, prostate-specific antigen. Prostate-Specific Antigen PSA (kallikrein-related peptidase 3; KLK3) is a kallikrein-related serine protease that causes liquefaction of seminal coagulum. It is produced by both nonmalignant and malignant epithelial cells and, as such, is prostate-specific, not prostate cancer–specific. Serum levels may also increase from prostatitis and BPH. Serum levels are not significantly affected by DRE, but the performance of a prostate biopsy can increase PSA levels up to tenfold for 8–10 weeks. PSA circulating in the blood is inactive and mainly occurs as a complex with the protease inhibitor α1-antichymotrypsin and as free (unbound) PSA forms. The formation of complexes between PSA and α2-macroglobulin or other protease inhibitors is less significant.
Free PSA is rapidly eliminated from the blood by glomerular filtration, with an estimated half-life of 12–18 h. Elimination of PSA bound to α1-antichymotrypsin is slow (estimated half-life of 1–2 weeks) because the complex is too large to be cleared by glomerular filtration. PSA levels should be undetectable about 6 weeks after the prostate has been removed. Immunohistochemical staining for PSA can be used to establish a prostate cancer diagnosis. PSA-Based Screening and Early Detection PSA testing was approved by the U.S. FDA in 1994 for early detection of prostate cancer, and the widespread use of the test has played a significant role in the proportion of men diagnosed with early-stage cancers: 70–80% of newly diagnosed cancers are clinically organ-confined. The level of PSA in blood is strongly associated with the risk and outcome of prostate cancer. A single PSA measurement at age 60 is associated (area under the curve [AUC] of 0.90) with the lifetime risk of death from prostate cancer. Most prostate cancer deaths (90%) occur among men with PSA levels in the top quartile (>2 ng/mL), although only a minority of men with PSA >2 ng/mL will develop lethal prostate cancer. Despite this and the mortality rate reductions reported from large randomized prostate cancer screening trials, routine use of the test remains controversial. The U.S. Preventive Services Task Force (USPSTF) reviewed the evidence for screening for prostate cancer and made a clear recommendation against screening. By giving a grade of “D” in the recommendation statement based on this review, the USPSTF concluded that “there is moderate or high certainty that this service has no net benefit or that the harms outweigh the benefits.” Whether the harms of screening, overdiagnosis, and overtreatment are justified by the benefits in terms of reduced prostate cancer mortality remains open to reasonable doubt.
In response to the USPSTF, the American Urological Association (AUA) updated its consensus statement regarding prostate cancer screening. The AUA concluded that the quality of evidence for the benefits of screening was moderate, and evidence for harm was high, for men age 55–69 years. For men outside this age range, evidence for benefit was lacking, but the harms of screening, including overdiagnosis and overtreatment, remained. The AUA recommends shared decision making regarding PSA-based screening for men age 55–69, a target age group for whom the benefits may outweigh the harms. Outside this age range, PSA-based screening as a routine test was not recommended based on the available evidence. The entire guideline is available at www.AUAnet.org/education/guidelines/prostate-cancer-detection.cfm. The PSA criteria used to recommend a diagnostic prostate biopsy have evolved over time. However, based on the commonly used cut point for prostate biopsy (a total PSA ≥4 ng/mL), most men with a PSA elevation do not have histologic evidence of prostate cancer at biopsy. In addition, many men with PSA levels below this cut point harbor cancer cells in their prostate. Information from the PCPT demonstrates that there is no PSA level below which the risk of prostate cancer is zero. Thus, the PSA level establishes the likelihood that a man will harbor cancer if he undergoes a prostate biopsy. The goal is to increase the sensitivity of the test for younger men more likely to die of the disease and to reduce the frequency of detecting cancers of low malignant potential in elderly men more likely to die of other causes. Patients with symptomatic prostatitis should have a course of antibiotics before biopsy. However, the routine use of antibiotics in an asymptomatic man with an elevated PSA level is strongly discouraged. Prostate Biopsy A diagnosis of cancer is established by an image-guided needle biopsy.
Direct visualization by transrectal ultrasound (TRUS) or magnetic resonance imaging (MRI) assures that all areas of the gland are sampled. Contemporary schemas advise an extended-pattern 12-core biopsy that includes sampling from the peripheral zone as well as lesion-directed sampling of a palpable nodule or suspicious imaging finding. Men with an abnormal PSA and negative biopsy are advised to undergo a repeat biopsy. Biopsy Pathology Each core of the biopsy is examined for the presence of cancer, and the amount of cancer is quantified based on the length of the cancer within the core and the percentage of the core involved. Of the cancers identified, >95% are adenocarcinomas; the rest are squamous or transitional cell tumors or, rarely, carcinosarcomas. Metastases to the prostate are rare, but in some cases colon cancers or transitional cell tumors of the bladder invade the gland by direct extension.
TABLE 115-1 TNM Staging a
Tx  Primary tumor cannot be assessed
T0  No evidence of primary tumor
T1  Clinically inapparent tumor, neither palpable nor visible by imaging
T1a  Tumor incidental histologic finding in ≤5% of resected tissue; not palpable
T1b  Tumor incidental histologic finding in >5% of resected tissue
T1c  Tumor identified by needle biopsy (e.g., because of elevated PSA)
T2  Tumor confined within prostate b
T2a  Tumor involves half of one lobe or less
T2b  Tumor involves more than one half of one lobe, not both lobes
T2c  Tumor involves both lobes
T3  Tumor extends through the prostate capsule c
T3a  Extracapsular extension (unilateral or bilateral)
T3b  Tumor invades seminal vesicle(s)
T4  Tumor is fixed or invades adjacent structures other than seminal vesicles, such as the external sphincter, rectum, bladder, levator muscles, and/or pelvic wall
a Revised from SB Edge et al (eds): AJCC Cancer Staging Manual, 7th ed. New York, Springer, 2010. b Tumor found in one or both lobes by needle biopsy, but not palpable or reliably visible by imaging, is classified as T1c. c Invasion into the prostatic apex or into (but not beyond) the prostatic capsule is classified not as T3 but as T2. Abbreviations: PSA, prostate-specific antigen; TNM, tumor, node, metastasis.
When prostate cancer is diagnosed, a measure of histologic aggressiveness is assigned using the Gleason grading system, in which the dominant and secondary glandular histologic patterns are scored from 1 (well-differentiated) to 5 (undifferentiated) and summed to give a total score of 2–10 for each tumor. The most poorly differentiated area of tumor (i.e., the area with the highest histologic grade) often determines biologic behavior. The presence or absence of perineural invasion and extracapsular spread is also recorded. Prostate Cancer Staging The tumor, node, metastasis (TNM) staging system includes categories for cancers identified solely on the basis of an abnormal PSA (T1c), those that are palpable but clinically confined to the gland (T2), and those that have extended outside the gland (T3 and T4) (Table 115-1, Fig. 115-2). DRE alone is inaccurate in determining the extent of disease within the gland, the presence or absence of capsular invasion, involvement of seminal vesicles, and extension of disease to lymph nodes. Because of the inadequacy of DRE for staging, the TNM staging system was modified to include the results of imaging. Unfortunately, no single test has proven to accurately indicate the stage or the presence of organ-confined disease, seminal vesicle involvement, or lymph node spread. TRUS is the imaging technique most frequently used to assess the primary tumor, but its chief use is directing prostate biopsies, not staging. No TRUS finding consistently indicates cancer with certainty. Computed tomography (CT) lacks the sensitivity and specificity to detect extraprostatic extension and is inferior to MRI in visualization of lymph nodes.
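The Gleason scoring convention described above (dominant plus secondary glandular pattern, each graded 1 to 5, summed to a total of 2–10) reduces to a small helper; the function name and input validation below are illustrative.

```python
# Gleason score: sum of the dominant and secondary pattern grades, each
# running from 1 (well differentiated) to 5 (undifferentiated).
def gleason_score(dominant: int, secondary: int) -> int:
    for grade in (dominant, secondary):
        if not 1 <= grade <= 5:
            raise ValueError("Gleason pattern grades run from 1 to 5")
    return dominant + secondary  # total score of 2-10

print(gleason_score(3, 4))  # a common intermediate-grade tumor -> 7
```

Note that the text adds a caveat the sum alone does not capture: the most poorly differentiated area often determines biologic behavior, so a 3+4 and a 4+3 tumor are not clinically interchangeable even though both sum to 7.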
In general, MRI performed with an endorectal coil is superior to CT to detect cancer in the prostate and to assess local disease extent. T1-weighted MRI produces a high signal in the periprostatic fat, periprostatic venous plexus, perivesicular tissues, lymph nodes, and bone marrow. T2-weighted MRI demonstrates the internal architecture of the prostate and seminal vesicles. Most cancers have a low signal, while the normal peripheral zone has a high signal, although the technique lacks sensitivity and specificity. MRI is also useful for the planning of surgery and radiation therapy. Radionuclide bone scans (bone scintigraphy) are used to evaluate spread to osseous sites. This test is sensitive but relatively nonspecific because areas of increased uptake are not always related to metastatic disease. Healing fractures, arthritis, Paget’s disease, and other conditions will also cause abnormal uptake. True-positive bone scans are uncommon when the PSA is <10 ng/mL unless the tumor is high grade. Clinically localized prostate cancers are those that appear to be nonmetastatic after staging studies are performed. Patients with clinically localized disease are managed by radical prostatectomy, radiation therapy, or active surveillance. Choice of therapy requires the consideration of several factors: the presence of symptoms, the probability that the untreated tumor will adversely affect the quality or duration of survival and thus require treatment, and the probability that the tumor can be cured by single-modality therapy directed at the prostate or that it will require both local and systemic therapy to achieve cure. Data from the literature do not provide clear evidence for the superiority of any one treatment relative to another. Comparison of outcomes of various forms of therapy is limited by the lack of prospective trials, referral bias, the experience of the treating teams, and differences in endpoints and cancer control definitions. 
Often, PSA relapse–free survival is used because an effect on metastatic progression or survival may not be apparent for years. FIGURE 115-2 T stages of prostate cancer. (A) T1: clinically inapparent tumor, neither palpable nor visible by imaging; (B) T2: tumor confined within the prostate; (C) T3: tumor extends through the prostate capsule and may invade the seminal vesicles; (D) T4: tumor is fixed or invades adjacent structures. Eighty-one percent of patients present with local disease (T1 and T2), which is associated with a 5-year survival rate of 100%. An additional 12% of patients present with regional disease (T3 and T4 without metastases), which is also associated with a 100% survival rate at 5 years. Four percent of patients present with distant disease (T4 with metastases), which is associated with a 28% 5-year survival rate. (Three percent of patients are ungraded, and this group is associated with a 73% 5-year survival rate.) (Data from AJCC, http://seer.cancer.gov/statfacts/html/prost.html. Figure © 2014 Memorial Sloan-Kettering Cancer Center; used with permission.) After radical surgery to remove all prostate tissue, PSA should become undetectable in the blood within 6 weeks. If PSA remains or becomes detectable after radical prostatectomy, the patient is considered to have persistent disease. After radiation therapy, in contrast, PSA does not become undetectable because the remaining nonmalignant elements of the gland continue to produce PSA even if all cancer cells have been eliminated. Similarly, cancer control is not well defined for a patient managed by active surveillance because PSA levels will continue to rise in the absence of therapy. Other outcomes are time to objective progression (local or systemic), cancer-specific survival, and overall survival; however, these outcomes may take years to assess. The more advanced the disease, the lower the probability of local control and the higher the probability of systemic relapse.
More important is that within the categories of T1, T2, and T3 disease are cancers with a range of prognoses. Some T3 tumors are curable with therapy directed solely at the prostate, and some T1 lesions have a high probability of systemic relapse that requires the integration of local and systemic therapy to achieve cure. For T1c cancers in particular, stage alone is inadequate to predict outcome and select treatment; other factors must be considered. Nomograms To better assess risk and guide treatment selection, many groups have developed prognostic models or nomograms that use a combination of the initial clinical T stage, biopsy Gleason score, and baseline PSA. Some use discrete cut points (PSA <10 or ≥10 ng/mL; Gleason score of ≤6, 7, or ≥8); others employ nomograms that use PSA and Gleason score as continuous variables. More than 100 nomograms have been reported to predict the probability that a clinically significant prostate cancer is present, disease extent (organ-confined vs non–organ-confined, node-negative or -positive), or the probability of success of treatment for specific local therapies using pretreatment variables. Considerable controversy exists over what constitutes “high risk” based on a predicted probability of success or failure. In these situations, nomograms and predictive models can only go so far. Exactly what probability of success or failure would lead a physician to recommend and a patient to seek alternative approaches is controversial. As an example, it may be appropriate to recommend radical surgery for a younger patient with a low probability of cure. Nomograms are being refined continually to incorporate additional clinical parameters, biologic determinants, and year of treatment, which can also affect outcomes, making treatment decisions a dynamic process. Treatment-Related Adverse Events The frequency of adverse events varies by treatment modality and the experience of the treating team. 
For example, following radical prostatectomy, incontinence rates range from 2–47% and impotence rates range from 25–89%. Part of the variability relates to how the complication is defined and whether the patient or the physician is reporting the event. The time of the assessment is also important: after surgery, impotence is immediate but may reverse over time, while with radiation therapy impotence is not immediate but may develop over time. Of greatest concern to patients are the effects on continence, sexual potency, and bowel function. Radical Prostatectomy The goal of radical prostatectomy is to excise the cancer completely with a clear margin, to maintain continence by preserving the external sphincter, and to preserve potency by sparing the autonomic nerves in the neurovascular bundle. The procedure is advised for patients with a life expectancy of 10 years or more and is performed via a retropubic or perineal approach or via a minimally invasive robotic-assisted or hand-held laparoscopic approach. Outcomes can be predicted using postoperative nomograms that consider pretreatment factors and the pathologic findings at surgery. PSA failure is usually defined as a value greater than 0.1 or 0.2 ng/mL. Specific criteria to guide the choice of one approach over another are lacking. Minimally invasive approaches offer the advantages of a shorter hospital stay and reduced blood loss. Rates of cancer control, recovery of continence, and recovery of erectile function are comparable between open and minimally invasive approaches. The individual surgeon, rather than the surgical approach used, is most important in determining outcomes after surgery. Neoadjuvant hormonal therapy has also been explored in an attempt to improve the outcomes of surgery for high-risk patients, using a variety of definitions.
The results of several large trials testing 3 or 8 months of androgen depletion before surgery showed that serum PSA levels decreased by 96%, prostate volumes decreased by 34%, and margin positivity rates decreased from 41% to 17%. Unfortunately, hormones did not produce an improvement in PSA relapse–free survival; thus, neoadjuvant hormonal therapy is not recommended. Factors associated with incontinence following radical prostatectomy include older age and urethral length, which impacts the ability to preserve the urethra beyond the apex and the distal sphincter. The skill and experience of the surgeon are also factors. Recovery of erectile function is associated with younger age, quality of erections before surgery, and the absence of damage to the neurovascular bundles. In general, erectile function begins to return about 6 months after surgery if both neurovascular bundles are preserved. Potency is reduced by half if at least one neurovascular bundle is sacrificed. Overall, with the availability of drugs such as phosphodiesterase-5 (PDE5) inhibitors, intraurethral inserts of alprostadil, and intracavernosal injections of vasodilators, many patients recover satisfactory sexual function. Radiation Therapy Radiation therapy is given by external beam, by radioactive sources implanted into the gland, or by a combination of the two techniques. External-Beam Radiation Therapy Contemporary external-beam radiation therapy requires three-dimensional conformal treatment plans to maximize the dose to the prostate and to minimize the exposure of the surrounding normal tissue. Intensity-modulated radiation therapy (IMRT) permits shaping of the dose and allows the delivery of higher doses to the prostate, with a further reduction in normal tissue exposure, than three-dimensional conformal treatment alone. These advances have enabled the safe administration of doses >80 Gy and have resulted in higher local control rates and fewer side effects.
Cancer control after radiation therapy has been defined by various criteria, including a decline in PSA to <0.5 or 1 ng/mL, “nonrising” PSA values, and a negative biopsy of the prostate 2 years after completion of treatment. The current standard definition of biochemical failure (the Phoenix definition) is a rise in PSA of ≥2 ng/mL above the lowest PSA achieved (the nadir). The date of failure is “at call” and not backdated. Radiation dose is critical to the eradication of prostate cancer. In a representative study, a PSA nadir of <1.0 ng/mL was achieved in 90% of patients receiving 75.6 or 81.0 Gy versus 76% and 56% of those receiving 70.2 and 64.8 Gy, respectively. Positive biopsy rates at 2.5 years were 4% for those treated with 81 Gy versus 27% and 36% for those receiving 75.6 and 70.2 Gy, respectively. Overall, radiation therapy is associated with a higher frequency of bowel complications (mainly diarrhea and proctitis) than surgery. The frequency relates directly to the volume of the anterior rectal wall receiving full-dose treatment. In one series, grade 3 rectal or urinary toxicities were seen in 2.1% of patients who received a median dose of 75.6 Gy, and grade 3 urethral strictures requiring dilatation developed in 1% of cases, all of whom had undergone a transurethral resection of the prostate (TURP). Pooled data show that the frequencies of grade 3 and 4 toxicities are 6.9% and 3.5%, respectively, for patients who received >70 Gy. The frequency of erectile dysfunction is related to the age of the patient, the quality of erections pretreatment, the dose administered, and the time of assessment. Postradiation erectile dysfunction is related to disruption of the vascular supply and not of the nerve fibers.
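The Phoenix criterion stated above (a rise of ≥2 ng/mL above the post-treatment nadir, with the failure date "at call") can be expressed as a simple check over a series of follow-up PSA values; a minimal sketch with an illustrative function name:

```python
# Phoenix definition of biochemical failure after radiation therapy:
# first PSA value >= (running nadir + 2 ng/mL); the failure date is the
# date that value is observed ("at call"), not backdated.
def phoenix_failure(psa_series):
    """psa_series: list of (date_label, psa_ng_per_ml) after radiation.
    Returns the date the criterion is first met, or None."""
    nadir = float("inf")
    for date, psa in psa_series:
        nadir = min(nadir, psa)  # track the lowest PSA achieved so far
        if psa >= nadir + 2.0:
            return date
    return None

series = [("2012-01", 1.8), ("2012-07", 0.6), ("2013-01", 1.1), ("2013-07", 2.8)]
print(phoenix_failure(series))  # 2.8 >= 0.6 + 2.0, so failure is called at "2013-07"
```

Tracking a running nadir matters because, as the text notes, PSA never becomes undetectable after radiation; failure is defined relative to the lowest value reached, not relative to zero.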
Neoadjuvant hormone therapy before radiation therapy has the aim of decreasing the size of the prostate and, consequently, reducing the exposure of normal tissues to full-dose radiation, increasing local control rates, and decreasing the rate of systemic failure. Short-term hormone therapy can reduce toxicities and improve local control rates, but long-term treatment (2–3 years) is needed to prolong the time to PSA failure and lower the risk of metastatic disease in men with high-risk cancers. The impact on survival has been less clear. Brachytherapy Brachytherapy is the direct implantation of radioactive sources (seeds) into the prostate. It is based on the principle that the deposition of radiation energy in tissues decreases as a function of the square of the distance from the source (Chap. 103e). The goal is to deliver intensive irradiation to the prostate while minimizing the exposure of the surrounding tissues. The current standard technique achieves a more homogeneous dose distribution by placing seeds according to a customized template based on imaging assessment of the cancer and computer-optimized dosimetry. The implantation is performed transperineally as an outpatient procedure with real-time imaging. Improvements in brachytherapy techniques have resulted in fewer complications and a marked reduction in local failure rates. In a series of 197 patients followed for a median of 3 years, 5-year actuarial PSA relapse–free survival rates for patients with pretherapy PSA levels of 0–4, 4–10, and >10 ng/mL were 98%, 90%, and 89%, respectively. In a separate report of 201 patients who underwent posttreatment biopsies, 80% were negative, 17% were indeterminate, and 3% were positive. The results did not change with longer follow-up. Nevertheless, many physicians feel that implantation is best reserved for patients with good or intermediate prognostic features.
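The inverse-square falloff underlying brachytherapy, noted above, is easy to verify numerically; the distances below are illustrative physics, not clinical dosimetry.

```python
# Dose rate from a point source falls off with the square of the distance
# (the inverse-square principle cited for brachytherapy seeds).
def relative_dose(d_ref_cm: float, d_cm: float) -> float:
    """Dose at distance d_cm relative to the dose at reference distance d_ref_cm."""
    return (d_ref_cm / d_cm) ** 2

# Doubling the distance from a seed cuts the dose rate to one quarter,
# which is why seeds irradiate the gland intensively while sparing
# tissue only a short distance away:
print(relative_dose(0.5, 1.0))  # -> 0.25
```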
Brachytherapy is well tolerated, although most patients experience urinary frequency and urgency that can persist for several months. Incontinence has been seen in 2–4% of cases. Higher complication rates are observed in patients who have undergone a prior TURP, whereas those with obstructive symptoms at baseline are at higher risk for retention and persistent voiding symptoms. Proctitis has been reported in <2% of patients. Active Surveillance Although prostate cancer is the most common form of cancer affecting men in the United States, patients are being diagnosed earlier and more frequently present with early-stage disease. Active surveillance, previously described as watchful waiting or deferred therapy, is the policy of monitoring the illness at fixed intervals with DREs, PSA measurements, and repeat prostate biopsies as indicated, until histopathologic or serologic changes indicative of progression warrant treatment with curative intent. It evolved from studies that evaluated predominantly elderly men with well-differentiated tumors who demonstrated no clinically significant progression for protracted periods; from recognition of the contrast between incidence and disease-specific mortality and of the high prevalence of cancers found at autopsy; and from an effort to reduce overtreatment. A recent screening study estimated that 50–100 men with low-risk disease would need to be treated to prevent one prostate cancer–specific death. Arguing against active surveillance are the results of a Swedish randomized trial of radical prostatectomy versus watchful waiting. With a median follow-up of 6.2 years, men treated by radical surgery had a lower risk of prostate cancer death relative to the watchful-waiting group (4.6% vs 8.9%) and a lower risk of metastatic progression (hazard ratio 0.63).
Case selection is critical, and determining clinical parameters predictive of cancer aggressiveness that can be used to reliably select the men most likely to benefit from active surveillance is an area of intense study. In one prostatectomy series, it was estimated that 10–15% of those treated had “insignificant” disease. One set of criteria includes men with clinical T1c tumors that are biopsy Gleason grade 6 or less involving three or fewer cores, each with less than 50% involvement by tumor, and a PSA density of less than 0.15. Concerns about active surveillance include the limited ability to predict pathologic findings by needle biopsy even when multiple cores are obtained, the recognized multifocality of the disease, and the possibility of a missed opportunity to cure the disease. Nomograms to help predict which patients can safely be managed by active surveillance continue to be refined, and as their predictive accuracy improves, it can be anticipated that more patients will be candidates. RISING PSA This term is applied to a group of patients in whom the sole manifestation of disease is a rising PSA after surgery and/or radiation therapy. By definition, there is no evidence of disease on an imaging study. For these patients, the central issue is whether the rise in PSA results from persistent disease in the primary site, from systemic disease, or from both. In theory, disease in the primary site may still be curable by additional local treatment. The decision to recommend radiation therapy after prostatectomy is guided by the pathologic findings at surgery, because imaging studies such as CT and bone scan are typically uninformative. Some recommend an 11C-choline positron emission tomography (PET) scan, although availability in the United States is limited; others recommend that a biopsy of the urethrovesical anastomosis be obtained before considering radiation; still others treat empirically based on risk.
Factors that predict for response to salvage radiation therapy are a positive surgical margin, lower Gleason score in the radical prostatectomy specimen, long interval from surgery to PSA failure, slow PSA doubling time, absence of disease in the lymph nodes, and a low (<0.5–1 ng/mL) PSA value at the time of radiation treatment. Radiation therapy is generally not recommended if the PSA was persistently elevated after surgery, which usually indicates that the disease has spread outside of the area of the prostate bed and is unlikely to be controlled with radiation therapy. As is the case for other disease states, nomograms to predict the likelihood of success are available. For patients with a rising PSA after radiation therapy, salvage local therapy can be considered if the disease was “curable” at the time of diagnosis, if persistent disease has been documented by a biopsy of the prostate, and if no metastatic disease is seen on imaging studies. Unfortunately, case selection is poorly defined in most series, and morbidities are significant. Options include salvage radical prostatectomy, salvage cryotherapy, salvage radiation therapy, and salvage irreversible electroporation. The rise in PSA after surgery or radiation therapy may indicate subclinical or micrometastatic disease with or without local recurrence. In these cases, the need for treatment depends, in part, on the estimated probability that the patient will develop clinically detectable metastatic disease on a scan and in what time frame. That immediate therapy is not always required was shown in a series where patients who developed a biochemical recurrence after radical prostatectomy received no systemic therapy until metastatic disease was documented. Overall, the median time to metastatic progression by imaging was 8 years, and 63% of the patients with rising PSA values remained free of metastases at 5 years. 
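PSA doubling time, one of the predictors noted above, is conventionally computed from two PSA values under an assumption of exponential (first-order) growth. A minimal sketch; the PSA values and interval below are illustrative, not taken from the text:

```python
import math

def psa_doubling_time(psa1: float, psa2: float, months_between: float) -> float:
    """PSA doubling time in months, assuming exponential growth:
    PSADT = ln(2) * dt / ln(PSA2 / PSA1). Requires PSA2 > PSA1 > 0."""
    if not (0 < psa1 < psa2):
        raise ValueError("PSA must be positive and rising")
    return math.log(2) * months_between / math.log(psa2 / psa1)

# Illustrative values only: a rise from 0.5 to 2.0 ng/mL over 12 months
# is exactly two doublings, i.e., a doubling time of 6 months.
print(round(psa_doubling_time(0.5, 2.0, 12.0), 1))  # 6.0
```

In practice, clinicians often fit more than two serial values, but the two-point form above conveys the arithmetic.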
Factors associated with progression included the Gleason score of the radical prostatectomy specimen, time to recurrence, and PSA doubling time. For those with Gleason grade ≥8, the probability of metastatic progression was 37%, 51%, and 71% at 3, 5, and 7 years, respectively. If the time to recurrence was <2 years and PSA doubling time was long (>10 months), the proportions with metastatic disease at the same time intervals were 23%, 32%, and 53%, versus 47%, 69%, and 79% if the doubling time was short (<10 months). PSA doubling times are also prognostic for survival. In one series, all patients who succumbed to disease had PSA doubling times of 3 months or less. Most physicians advise treatment if the PSA doubling time is 12 months or less. A difficulty with predicting the risk of metastatic spread, symptoms, or death from disease in the rising PSA state is that most patients receive some form of therapy before the development of metastases. Nevertheless, predictive models continue to be refined. METASTATIC DISEASE: NONCASTRATE The state of noncastrate metastatic prostate cancer includes men with metastases visible on an imaging study and noncastrate levels of testosterone (>150 ng/dL). The patient may be newly diagnosed or have a recurrence after treatment for localized disease. Symptoms of metastatic disease include pain from osseous spread, although many patients are asymptomatic despite extensive spread. Less common are symptoms related to marrow compromise (myelophthisis), spinal cord compression, or a coagulopathy. Standard treatment is to deplete/lower androgens by medical or surgical means and/or to block androgen binding to the AR with antiandrogens. More than 90% of male hormones originate in the testes; <10% are synthesized in the adrenal gland. 
Surgical orchiectomy is the “gold standard” but is rarely used, owing to the availability of effective medical therapies and the more widespread use of hormones on an intermittent basis, in which patients are treated for defined periods of time and the treatments are then intentionally discontinued (discussed further below) (Fig. 115-3). Testosterone-Lowering Agents Medical therapies that lower testosterone levels include the gonadotropin-releasing hormone (GnRH) agonists/antagonists, 17,20-lyase inhibitors, CYP17 inhibitors, estrogens, and progestational agents. Of these, GnRH analogues such as leuprolide acetate and goserelin acetate initially produce a rise in luteinizing hormone and follicle-stimulating hormone, followed by downregulation of receptors in the pituitary gland, which effects a chemical castration. They were approved on the basis of randomized comparisons showing an improved safety profile (specifically, reduced cardiovascular toxicities) relative to diethylstilbestrol (DES), with equivalent potency. The initial rise in testosterone may result in a clinical flare of the disease. As such, these agents are relatively contraindicated in men with significant obstructive symptoms, cancer-related pain, or spinal cord compromise. GnRH antagonists such as degarelix achieve castrate levels of testosterone within 48 h without the initial rise in serum testosterone and do not cause a flare of the disease. Estrogens such as DES are rarely used because of the risk of vascular complications such as fluid retention, phlebitis, embolic events, and stroke. Progestational agents alone are less efficacious. Agents that lower testosterone are associated with an androgen-depletion syndrome that includes hot flushes, weakness, fatigue, loss of libido, impotence, sarcopenia, anemia, change in personality, and depression.
Changes in lipids, obesity, and insulin resistance, along with an increased risk of diabetes and cardiovascular disease, can also occur, mimicking the metabolic syndrome. A decrease in bone density may also result; it worsens over time and increases the risk of clinical fractures. This is a particular concern, often underappreciated, for men with preexisting osteopenia secondary to hypogonadism or to glucocorticoid or alcohol use. Baseline fracture risk can be assessed using the Fracture Risk Assessment Tool (FRAX), and to minimize fracture risk, patients are advised to take calcium and vitamin D supplementation, along with a bisphosphonate or the RANK ligand inhibitor denosumab. Antiandrogens First-generation nonsteroidal antiandrogens such as flutamide, bicalutamide, and nilutamide block ligand binding to the AR and were initially approved to block the disease flare that may occur with the rise in serum testosterone associated with GnRH agonist therapy. When antiandrogens are given alone, testosterone levels typically increase above baseline, but relative to testosterone-lowering therapies, they cause fewer hot flushes, less effect on libido, less muscle wasting, fewer personality changes, and less bone loss. Gynecomastia remains a significant problem but can be alleviated in part by tamoxifen. Most reported randomized trials suggest that cancer-specific outcomes are inferior when antiandrogens are used alone. Bicalutamide, even at 150 mg (three times the recommended dose), was associated with a shorter time to progression and inferior survival compared to surgical castration for patients with established metastatic disease. Nevertheless, some men may accept the trade-off of a potentially inferior cancer outcome for an improved quality of life.
Combined androgen blockade (the administration of an antiandrogen plus a GnRH analogue or surgical orchiectomy) and triple androgen blockade (which adds a 5ARI) have not been shown to be superior to androgen-depletion monotherapies and are no longer recommended. In practice, most patients who are treated with a GnRH agonist receive an antiandrogen for the first 2–4 weeks of treatment to protect against the flare.
FIGURE 115-3 Sites of action of different hormone therapies. ACTH, adrenocorticotropic hormone; AR, androgen receptor; ARE, androgen-response element; CRH, corticotropin-releasing hormone; DHEA, dehydroepiandrosterone; DHEA-S, dehydroepiandrosterone sulphate; DHT, dihydrotestosterone; GnRH, gonadotropin-releasing hormone; LH, luteinizing hormone.
Intermittent Androgen Deprivation Therapy (IADT) The use of hormones in an “on-and-off” approach was initially proposed as a way to prevent the selection of cells that are resistant to androgen depletion and to reduce side effects. The hypothesis is that, by allowing endogenous testosterone levels to rise, the cells that survive androgen depletion will induce a normal differentiation pathway. It is postulated that, by allowing the surviving cells to proliferate in the presence of androgen, sensitivity to subsequent androgen depletion will be retained and the chance of developing a castration-resistant state will be reduced. Applied in the clinic, androgen depletion is continued for 2–6 months beyond the point of maximal response.
Once treatment is stopped, endogenous testosterone levels increase, and the symptoms associated with hormone treatment abate. PSA levels also begin to rise, and at some level, treatment is restarted. With this approach, multiple cycles of regression and proliferation have been documented in individual patients. It is unknown whether the intermittent approach increases, decreases, or does not change the overall duration of sensitivity to androgen depletion. The approach is safe, but long-term data are needed to assess the course in men with low PSA levels. A randomized trial showed similar survival times for patients treated with intermittent versus continuous treatment, with a slightly higher risk of prostate cancer–specific mortality in the intermittent group and higher cardiovascular mortality in patients on continuous therapy. The intermittent therapy was better tolerated. Outcomes of Androgen Depletion The anti–prostate cancer effects of the various androgen depletion/blockade strategies are similar, and the outcomes are predictable: an initial response, then a period of stability in which tumor cells are dormant and nonproliferative, followed after a variable period of time by a rise in PSA and tumor regrowth as a castration-resistant lesion that, for most men, is ultimately lethal. Androgen depletion is not curative because cells that survive castration are present when the disease is first diagnosed. Considered by disease manifestation, PSA levels return to normal in 60–70% of cases, and measurable lesions regress in about 50%; improvements in bone scan occur in 25% of cases, while the majority remain stable. The duration of response and survival is inversely proportional to disease extent at the time androgen depletion is first started, and the degree of PSA decline at 6 months has been shown to be prognostic. In a large-scale trial, the PSA nadir proved prognostic.
An active question is whether hormones should be given in the adjuvant setting after surgery or radiation treatment of the primary tumor, or whether treatment should await documented PSA recurrence, metastatic disease, or symptoms. Trials in support of early therapy have often been underpowered relative to the reported benefit or have been criticized on methodologic grounds. One trial showing a survival benefit for patients treated with radiation therapy and 3 years of androgen depletion, relative to radiation alone, was criticized for the poor outcomes of the control group. Another, showing a survival benefit for patients with positive lymph nodes who were randomized to immediate medical or surgical castration compared to observation (p = .02), was criticized because the confidence intervals around the 5- and 8-year survival distributions for the two groups overlapped. A large randomized study comparing early to late hormone treatment (orchiectomy or GnRH analogue) in patients with locally advanced or asymptomatic metastatic disease showed that patients treated early were less likely to progress from M0 to M1 disease, to develop pain, and to die of prostate cancer. This trial was criticized because therapy was delayed “too long” in the late-treatment group. Noteworthy is that the American Society of Clinical Oncology guidelines recommend deferring treatment until the disease has recurred and the prognosis has been reassessed; they do not support immediate therapy. METASTATIC DISEASE: CASTRATE Castration-resistant prostate cancer (CRPC) is defined as disease that progresses despite androgen suppression by medical or surgical therapies when the measured level of testosterone is 50 ng/dL or lower.
The rise in PSA indicates continued signaling through the AR signaling axis, the result of a series of oncogenic changes that include overexpression of androgen biosynthetic enzymes that can lead to increased intratumoral androgens, and overexpression of the receptor itself that enables signaling to occur even in the setting of low levels of androgen. The majority of CRPC cases are not “hormone-refractory,” and considering them as such can deny patients safe and effective treatment. CRPC can manifest in many ways. For some, it is a rise in PSA with no change in radiographs and no new symptoms. In others, it is a rising PSA and progression in bone with or without symptoms of disease. Still others will show soft tissue disease with or without osseous metastases, and others have visceral spread. For the individual patient, it is first essential to ensure that a castrate status be documented. Patients receiving an antiandrogen alone, whose serum testosterone levels are elevated, should be treated first with a GnRH analogue or orchiectomy and observed for response. Patients on an antiandrogen in combination with a GnRH analogue should have the antiandrogen discontinued, because approximately 20% will respond to the selective discontinuation of the antiandrogen. Chemotherapy and New Agents Through 2009, docetaxel was the only systemic therapy proven to prolong life. As a single agent, the drug produced PSA declines in 50% of patients, measurable disease regression in 25%, and improvement in both preexisting pain and prevention of future cancer-related pain. Since then, six agents with diverse mechanisms of action that target the tumor itself or other aspects of the metastatic process have been proven to prolong life and were FDA approved. The first was sipuleucel-T, the first biologic approach shown to prolong life in which antigen-presenting cells are activated ex vivo, pulsed with antigen, and reinfused. 
The second, cabazitaxel, a non–cross-resistant taxane, was shown to be superior to mitoxantrone in the post-docetaxel setting. This was followed by the CYP17 inhibitor abiraterone acetate, which lowers androgen levels in the tumor, adrenal glands, and testes, and by the next-generation antiandrogen enzalutamide, which not only has a higher binding affinity for the AR relative to first-generation compounds but also uniquely inhibits nuclear translocation and DNA binding of the receptor complex. Both abiraterone acetate and enzalutamide were first approved for patients previously treated with chemotherapy on the basis of placebo-controlled phase III trials, a further indication that these tumors are not uniformly hormone-refractory. The indication for abiraterone acetate was later expanded to the prechemotherapy setting on the basis of a second trial using the co-primary endpoints of radiographic progression–free survival and overall survival. Similar results were seen with enzalutamide, for which an expanded indication is also anticipated. Alpharadin (radium-223 chloride), an alpha-emitting bone-seeking radioisotope, has been shown to prolong life in patients with symptoms related to osseous disease. The alpharadin result validated the bone microenvironment as a therapeutic target independent of direct effects on the tumor itself, as no declines in PSA were observed in the trial. Notable is that, in addition to a survival benefit, the drug also reduced the development of significant skeletal events. Other bone-targeted agents, such as the bisphosphonates and the RANK ligand inhibitor denosumab, protect against the bone loss associated with androgen depletion and also reduce skeletal-related events by targeting bone osteoclasts. In one trial, denosumab was shown to be superior to zoledronic acid with respect to skeletal-related events but had a slightly higher frequency of osteonecrosis of the jaw.
In clinical practice, most men seek to avoid chemotherapy and are first treated with a biologic agent and/or a newer hormonal agent approved for this indication. It is crucial to the management of the individual patient to define therapeutic objectives before initiating treatment, as there are defined standards of care for different disease manifestations. For example, sipuleucel-T is not indicated for patients with symptoms or visceral disease, because its effects on the disease occur late. Similarly, alpharadin is not indicated for patients with disease that is predominantly in soft tissue or with osseous disease that is not causing symptoms. Pain Management Management of pain secondary to osseous metastatic disease is a critical part of therapy. Optimal palliation requires assessing whether the symptoms are from metastases that threaten or already affect the spinal cord, the cauda equina, or the base of the skull; these are best treated with external-beam radiation, as are single sites of pain. Neurologic symptoms require emergency evaluation, because loss of function may be permanent if not addressed quickly. Because the disease is often diffuse, palliation at one site is often followed by the emergence of symptoms at a separate site that had not received radiation. In these cases, bone-seeking radioisotopes such as alpharadin or the beta emitter 153Sm-EDTMP (Quadramet) can be considered in addition to abiraterone acetate, docetaxel, and mitoxantrone, each of which is formally approved for the palliation of pain due to prostate cancer metastases. BENIGN PROSTATIC HYPERPLASIA BPH is a pathologic process that contributes to the development of lower urinary tract symptoms in men.
Such symptoms, arising from lower urinary tract dysfunction, are subdivided into obstructive symptoms (urinary hesitancy, straining, weak stream, terminal dribbling, prolonged voiding, incomplete emptying) and irritative symptoms (urinary frequency, urgency, nocturia, urge incontinence, small voided volumes). Lower urinary tract symptoms and the other sequelae of BPH are not due simply to a mass effect but are also likely due to a combination of prostatic enlargement and age-related detrusor dysfunction. The symptoms are generally measured using a validated, reproducible index designed to determine disease severity and response to therapy: the AUA Symptom Index (AUASI), also adopted as the International Prostate Symptom Score (IPSS) (Table 115-2). Serial AUASI measurements are particularly useful in following patients as they are treated with the various forms of therapy. Asymptomatic patients do not require treatment regardless of the size of the gland, whereas patients with an inability to urinate, gross hematuria, recurrent infection, or bladder stones may require surgery. In patients with symptoms, uroflowmetry can identify those with normal flow rates who are unlikely to benefit from treatment, and bladder ultrasound can identify those with high postvoid residuals who may need intervention. Pressure-flow (urodynamic) studies detect primary bladder dysfunction. Cystoscopy is recommended if hematuria is documented and to assess the urinary outflow tract before surgery. Imaging of the upper tracts is advised for patients with hematuria, a history of calculi, or prior urinary tract problems. Symptomatic relief is the most common reason men seek treatment for BPH, and therefore the goal of therapy is usually relief of those symptoms. Alpha-adrenergic receptor antagonists are thought to treat the dynamic aspect of BPH by reducing sympathetic tone of the bladder outlet, thereby decreasing resistance and improving urinary flow.
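The AUASI total is simply the sum of the seven item responses. The sketch below assumes the standard 0–5 scoring per item; the mild/moderate/severe bands (0–7, 8–19, 20–35) are the commonly used AUA severity categories and are supplied here for illustration rather than stated in the text:

```python
def aua_symptom_score(responses):
    """Sum the seven AUASI/IPSS item responses (each scored 0-5) and
    classify severity using the commonly cited AUA bands:
    0-7 mild, 8-19 moderate, 20-35 severe."""
    if len(responses) != 7 or any(not 0 <= r <= 5 for r in responses):
        raise ValueError("expected seven responses, each scored 0-5")
    total = sum(responses)
    if total <= 7:
        severity = "mild"
    elif total <= 19:
        severity = "moderate"
    else:
        severity = "severe"
    return total, severity

print(aua_symptom_score([2, 3, 1, 2, 3, 1, 2]))  # (14, 'moderate')
```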
5ARIs are thought to treat the static aspect of BPH by reducing prostate volume, with a similar, albeit delayed, effect. They have also proven beneficial in the prevention of BPH progression, as measured by prostate volume, the risk of developing acute urinary retention, and the risk of BPH-related surgery. The use of an alpha-adrenergic receptor antagonist and a 5ARI as combination therapy seeks to provide symptomatic relief while preventing progression of BPH. Another class of medications that has shown improvement in lower urinary tract symptoms secondary to BPH is the PDE5 inhibitors, currently used in the treatment of erectile dysfunction. All three of the PDE5 inhibitors available in the United States (sildenafil, vardenafil, and tadalafil) appear to be effective in the treatment of symptoms secondary to BPH. The use of PDE5 inhibitors is not without controversy, however, given that short-acting phosphodiesterase inhibitors such as sildenafil need to be dosed separately from alpha blockers such as tamsulosin because of potential hypotensive effects. Newer classes of pharmacologic agents have also been used to treat symptoms secondary to BPH. Symptoms due to BPH often coexist with symptoms due to overactive bladder, and the most common pharmacologic agents for the treatment of overactive bladder symptoms are anticholinergics. This observation has led to multiple studies evaluating the efficacy of anticholinergics for the treatment of lower urinary tract symptoms secondary to BPH. Surgical therapy is now considered second-line therapy and is usually reserved for patients after a trial of medical therapy. The goal of surgical therapy is to reduce the size of the prostate, effectively reducing resistance to urine flow. Surgical approaches include TURP, transurethral incision, and removal of the gland via a retropubic, suprapubic, or perineal approach. Also used are transurethral ultrasound-guided laser-induced prostatectomy (TULIP), stents, and hyperthermia.
TABLE 115-2 AUA Symptom Index (AUASI)
Each question is answered on the following scale: Not at All (0); Less Than 1 Time in 5 (1); Less Than Half the Time (2); About Half the Time (3); More Than Half the Time (4); Almost Always (5).
1. Over the past month, how often have you had a sensation of not emptying your bladder completely after you finished urinating?
2. Over the past month, how often have you had to urinate again less than 2 h after you finished urinating?
3. Over the past month, how often have you found you stopped and started again several times when you urinated?
4. Over the past month, how often have you found it difficult to postpone urination?
5. Over the past month, how often have you had a weak urinary stream?
6. Over the past month, how often have you had to push or strain to begin urination?
7. Over the past month, how many times did you most typically get up to urinate from the time you went to bed at night until the time you got up in the morning?
Sum of 7 circled numbers (AUA Symptom Score): ____
Abbreviation: AUA, American Urological Association.
Source: MJ Barry et al: J Urol 148:1549, 1992. Used with permission.
Chapter 116 Testicular Cancer
Robert J. Motzer, Darren R. Feldman, George J. Bosl
Primary germ cell tumors (GCTs) of the testis arising by the malignant transformation of primordial germ cells constitute 95% of all testicular neoplasms. Infrequently, GCTs arise from an extragonadal site, including the mediastinum, retroperitoneum, and, very rarely, the pineal gland. This disease is notable for the young age of the afflicted patients, the totipotent capacity for differentiation of the tumor cells, and its curability; approximately 95% of newly diagnosed patients are cured. Experience in the management of GCTs leads to improved outcome. The incidence of testicular GCT is now approximately 8000 cases annually in the United States, resulting in nearly 400 deaths. The tumor occurs most frequently in men between the ages of 20 and 40 years.
A testicular mass in a male ≥50 years should be regarded as a lymphoma until proved otherwise. GCT is at least four to five times more common in white than in African-American males, and a higher incidence has been observed in Scandinavia and New Zealand than in the United States. Cryptorchidism is associated with a several-fold higher risk of GCT. Abdominal cryptorchid testes are at higher risk than inguinal cryptorchid testes. Orchiopexy should be performed before puberty, if possible. Early orchiopexy reduces the risk of GCT and improves the ability to save the testis. An abdominal cryptorchid testis that cannot be brought into the scrotum should be removed. Approximately 2% of men with GCTs of one testis will develop a primary tumor in the other testis. Testicular feminization syndromes and family history increase the risk of testicular GCT, and Klinefelter’s syndrome is associated with mediastinal GCT. An isochromosome of the short arm of chromosome 12 [i(12p)] is pathognomonic for GCT. Excess 12p copy number, either in the form of i(12p) or as increased 12p on aberrantly banded marker chromosomes, occurs in nearly all GCTs, but the gene(s) on 12p involved in the pathogenesis are not yet defined. A painless testicular mass is pathognomonic for a testicular malignancy. More commonly, patients present with testicular discomfort or swelling suggestive of epididymitis and/or orchitis. In this circumstance, a trial of antibiotics is reasonable. However, if symptoms persist or a residual abnormality remains, then testicular ultrasound examination is indicated. Ultrasound of the testis is indicated whenever a testicular malignancy is considered and for persistent or painful testicular swelling. If a testicular mass is detected, a radical inguinal orchiectomy should be performed. Because the testis develops from the gonadal ridge, its blood supply and lymphatic drainage originate in the abdomen and descend with the testis into the scrotum.
An inguinal approach is taken to avoid breaching anatomic barriers and permitting additional pathways of spread. Back pain from retroperitoneal metastases is common and must be distinguished from musculoskeletal pain. Dyspnea from pulmonary metastases occurs infrequently. Patients with increased serum levels of human chorionic gonadotropin (hCG) may present with gynecomastia. A delay in diagnosis is associated with a more advanced stage and possibly worse survival. The staging evaluation for GCT includes a determination of serum levels of α fetoprotein (AFP), hCG, and lactate dehydrogenase (LDH). After orchiectomy, a computed tomography (CT) scan of the chest, abdomen, and pelvis is generally performed. Stage I disease is limited to the testis, epididymis, or spermatic cord. Stage II disease is limited to retroperitoneal (regional) lymph nodes. Stage III disease is disease outside the retroperitoneum, involving supradiaphragmatic nodal sites or viscera. The staging may be “clinical” (defined solely by physical examination, blood marker evaluation, and radiographs) or “pathologic” (defined by an operative procedure). The regional draining lymph nodes for the testis are in the retroperitoneum, and the vascular supply originates from the great vessels (for the right testis) or the renal vessels (for the left testis). As a result, the lymph nodes involved first by a right testicular tumor are the interaortocaval lymph nodes just below the renal vessels. For a left testicular tumor, the first involved lymph nodes are lateral to the aorta (para-aortic) and below the left renal vessels. In both cases, further retroperitoneal nodal spread is inferior, contralateral, and, less commonly, above the renal hilum. Lymphatic involvement can extend cephalad to the retrocrural, posterior mediastinal, and supraclavicular lymph nodes. Treatment is determined by tumor histology (seminoma versus nonseminoma) and clinical stage (Fig. 116-1).
GCTs are divided into nonseminoma and seminoma subtypes. Nonseminomatous GCTs are most frequent in the third decade of life and can display the full spectrum of embryonic and adult cellular differentiation. This entity comprises four histologies: embryonal carcinoma, teratoma, choriocarcinoma, and endodermal sinus (yolk sac) tumor. Choriocarcinoma, consisting of both cytotrophoblasts and syncytiotrophoblasts, represents malignant trophoblastic differentiation and is invariably associated with secretion of hCG. Endodermal sinus tumor is the malignant counterpart of the fetal yolk sac and is associated with secretion of AFP. Pure embryonal carcinoma may secrete AFP or hCG, or both; this pattern is biochemical evidence of differentiation. Teratoma is composed of somatic cell types derived from two or more germ layers (ectoderm, mesoderm, or endoderm). Each of these histologies may be present alone or in combination with the others. Nonseminomatous GCTs tend to metastasize early to sites such as the retroperitoneal lymph nodes and lung parenchyma. Sixty percent of patients present with disease limited to the testis (stage I), 20% with retroperitoneal metastases (stage II), and 20% with more extensive supradiaphragmatic nodal or visceral metastases (stage III). Seminoma represents approximately 50% of all GCTs, occurs at a median age in the fourth decade, and generally follows a more indolent clinical course. Eighty percent of patients present with stage I disease, approximately 10% with stage II disease, and 10% with stage III disease; lung or other visceral metastases are rare. When a tumor contains both seminoma and nonseminoma components, patient management is directed by the more aggressive nonseminoma component. Careful monitoring of the serum tumor markers AFP and hCG is essential in the management of patients with GCT, because these markers are important for diagnosis, as prognostic indicators, in monitoring treatment response, and in the early detection of relapse.
Approximately 70% of patients presenting with disseminated nonseminomatous GCT have increased serum concentrations of AFP and/or hCG. Although hCG concentrations may be increased in patients with either nonseminoma or seminoma histology, the AFP concentration is increased only in patients with nonseminoma. The presence of an increased AFP level in a patient whose tumor shows only seminoma indicates that an occult nonseminomatous component exists, and the patient should be treated for nonseminomatous GCT. LDH levels are less specific than AFP or hCG but are increased in 50–60% of patients with metastatic nonseminoma and in up to 80% of patients with advanced seminoma. AFP, hCG, and LDH levels should be determined before and after orchiectomy. Increased serum AFP and hCG concentrations decay according to first-order kinetics; the half-life is 24–36 h for hCG and 5–7 days for AFP. AFP and hCG should be assayed serially during and after treatment. The reappearance of hCG and/or AFP or the failure of these markers to decline according to the predicted half-life is an indicator of persistent or recurrent tumor. 

FIGURE 116-1 Germ cell tumor staging and treatment. RPLND, retroperitoneal lymph node dissection; RT, radiotherapy. 
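The first-order decay described above can be checked with simple arithmetic. The sketch below uses the half-lives quoted in the text (hCG 24–36 h, AFP 5–7 days); the starting marker levels and the one-week time point are hypothetical illustrations, not values from the text:

```python
# First-order decay: C(t) = C0 * 0.5 ** (t / t_half).
# Half-lives from the text: hCG ~24-36 h, AFP ~5-7 days.
# Starting values below are hypothetical, for illustration only.

def expected_level(c0, t_half_days, t_days):
    """Expected marker level after t_days of first-order decay."""
    return c0 * 0.5 ** (t_days / t_half_days)

# hCG with a 36-h (1.5-day) half-life: one week spans ~4.7 half-lives,
# so only ~4% of the starting level should remain.
print(expected_level(10_000, 1.5, 7))   # ~394 (units as measured, e.g., mIU/mL)

# AFP with a 6-day half-life falls much more slowly over the same week:
print(expected_level(1_000, 6.0, 7))    # ~445 (e.g., ng/mL)
```

A measured value sitting well above the level predicted this way, or a frank rise, is the quantitative form of the "failure to decline according to the predicted half-life" that the text flags as an indicator of persistent or recurrent tumor.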
Patients with radiographs and physical examination showing no evidence of disease and serum AFP and hCG concentrations that are either normal or declining to normal according to the known half-life have clinical stage I disease. Approximately 20–50% of such patients will have retroperitoneal lymph node metastases (pathologic stage II) but will still be cured in over 95% of cases. Depending on the risk of relapse, which is determined by the pathology (see below), appropriate options include surveillance, a nerve-sparing retroperitoneal lymph node dissection (RPLND), or adjuvant chemotherapy (one to two cycles of bleomycin, etoposide, and cisplatin [BEP]); the choice is guided by the availability of surgical expertise and by patient and physician preference. If the primary tumor shows no evidence of lymphatic or vascular invasion and is limited to the testis (T1, clinical stage IA), then the risk of relapse is only 10–20%. Because over 80% of patients with clinical stage IA nonseminoma are cured with orchiectomy alone and there is no survival advantage to RPLND (or adjuvant chemotherapy), surveillance is the preferred treatment option. This avoids overtreatment with the potential for both acute and long-term toxicities (see below). Surveillance requires patients to be carefully followed with periodic chest radiography, physical examination, CT scan of the abdomen, and serum tumor marker determinations. The median time to relapse is approximately 7 months, and late relapses (>2 years) are rare. Noncompliant patients can be considered for RPLND or adjuvant BEP. If lymphatic or vascular invasion is present or the tumor extends through the tunica, spermatic cord, or scrotum (T2 through T4, clinical stage IB), then the risk of relapse is approximately 50%, and RPLND and adjuvant chemotherapy can be considered. Relapse rates are reduced to 3–5% after one to two cycles of adjuvant BEP. 
All three approaches (surveillance, RPLND, and adjuvant BEP) should cure >95% of patients with clinical stage IB disease. RPLND is the standard operation for removal of the regional lymph nodes of the testis (retroperitoneal nodes). The operation removes the lymph nodes draining the primary site and the nodal groups adjacent to the primary landing zone. The standard (modified bilateral) RPLND removes all node-bearing tissue down to the bifurcation of the great vessels, including the ipsilateral iliac nodes. The major long-term effect of this operation is retrograde ejaculation with resultant infertility. Nerve-sparing RPLND can preserve anterograde ejaculation in ~90% of patients. Patients with pathologic stage I disease are observed, and only the <10% who relapse require additional therapy. If nodes are found to be involved at RPLND, then a decision regarding adjuvant chemotherapy is made on the basis of the extent of retroperitoneal disease (see "Stage II Nonseminoma" below). Hence, because fewer than 20% of patients require chemotherapy, RPLND results in the lowest number of patients at risk for the late toxicities of chemotherapy among the three approaches. Patients with limited, ipsilateral retroperitoneal adenopathy ≤2 cm in largest diameter and normal levels of AFP and hCG can be treated with either a modified bilateral nerve-sparing RPLND or chemotherapy. The local recurrence rate after a properly performed RPLND is very low. Depending on the extent of disease, the postoperative management options include either surveillance or two cycles of adjuvant chemotherapy. Surveillance is the preferred approach for patients with resected "low-volume" metastases (tumor nodes ≤2 cm in diameter and <6 nodes involved) because the probability of relapse is one-third or less. For those who relapse, risk-directed chemotherapy is indicated (see section on advanced GCT below). 
Because relapse occurs in ≥50% of patients with "high-volume" metastases (>6 nodes involved, or any involved node >2 cm in largest diameter, or extranodal tumor extension), two cycles of adjuvant chemotherapy should be considered, as it results in a cure in ≥98% of patients. Regimens consisting of etoposide plus cisplatin (EP) with or without bleomycin every 3 weeks are effective and well tolerated. Increased levels of either AFP or hCG imply metastatic disease outside the retroperitoneum; full-dose (not adjuvant) chemotherapy is used in this setting. Primary management with chemotherapy is also favored for patients with larger (>2 cm) or bilateral retroperitoneal nodes (see section on advanced GCT below). Inguinal orchiectomy followed by immediate retroperitoneal radiation therapy or surveillance with treatment at relapse both result in cure in nearly 100% of patients with stage I seminoma. Historically, radiation was the mainstay of treatment, but the reported association between radiation and secondary malignancies and the absence of a survival advantage of radiation over surveillance has led many to favor surveillance for compliant patients. Approximately 15% of patients relapse; relapse is usually treated with chemotherapy. Long-term follow-up is essential, because approximately 30% of relapses occur after 2 years and 5% occur after 5 years. A single dose of carboplatin has also been investigated as an alternative to radiation therapy; the outcome was similar, but long-term safety data are lacking, and the retroperitoneum remained the most frequent site of relapse. Generally, nonbulky retroperitoneal disease (stage IIA and small IIB) is treated with retroperitoneal radiation therapy. Approximately 90% of patients achieve relapse-free survival with retroperitoneal masses <3 cm in diameter. Due to higher relapse rates after radiation for bulkier disease, initial chemotherapy is preferred for all stage IIC and some stage IIB patients. 
Chemotherapy has been studied as an alternative to radiation for stage IIA and small stage IIB seminoma, with lower recurrence rates compared with historical controls. These results, combined with studies demonstrating a threefold increase in the incidence of secondary malignancies and cardiovascular disease among patients who receive both radiation and chemotherapy (patients relapsing after radiation fall into this category), have led some experts to prefer chemotherapy for all stage II seminomas. Regardless of histology, all patients with stage IIC and stage III and most with stage IIB GCT are treated with chemotherapy. Combination chemotherapy programs based on cisplatin at doses of 100 mg/m2 plus etoposide at doses of 500 mg/m2 per cycle cure 70–80% of such patients, with or without bleomycin, depending on risk stratification (see below). A complete response (the complete disappearance of all clinical evidence of tumor on physical examination and radiography plus normal serum levels of AFP and hCG for ≥1 month) occurs after chemotherapy alone in ~60% of patients, and another 10–20% become disease free with surgical resection of residual masses containing viable GCT. Lower doses of cisplatin result in inferior survival rates. The toxicity of four cycles of BEP is substantial. Nausea, vomiting, and hair loss occur in most patients, although nausea and vomiting have been markedly ameliorated by modern antiemetic regimens. Myelosuppression is frequent, and symptomatic bleomycin pulmonary toxicity occurs in ~5% of patients. Treatment-induced mortality due to neutropenia with septicemia or bleomycin-induced pulmonary failure occurs in 1–3% of patients. Dose reductions for myelosuppression are rarely indicated. Long-term permanent toxicities include nephrotoxicity (reduced glomerular filtration and persistent magnesium wasting), ototoxicity, peripheral neuropathy, and infertility. 
When bleomycin is administered by weekly bolus injection, Raynaud's phenomenon appears in 5–10% of patients. Other evidence of small blood vessel damage, such as transient ischemic attacks and myocardial infarction, is seen less often. Because not all patients are cured and treatment may cause significant toxicities, patients are stratified into "good-risk," "intermediate-risk," and "poor-risk" groups according to pretreatment clinical features established by the International Germ Cell Cancer Consensus Group (Table 116-1). For good-risk patients, the goal is to achieve maximum efficacy with minimal toxicity. For intermediate- and poor-risk patients, the goal is to identify more effective therapy with tolerable toxicity. The marker cutoffs are included in the TNM (primary tumor, regional nodes, metastasis) staging of GCT. Hence, TNM stage groupings are based on both anatomy (site and extent of disease) and biology (marker status and histology). Seminoma is either good- or intermediate-risk, based on the absence or presence of nonpulmonary visceral metastases. No poor-risk category exists for seminoma. Marker levels and primary site play no role in defining risk for seminoma. Nonseminomas have good-, intermediate-, and poor-risk categories based on the primary site of the tumor, the presence or absence of nonpulmonary visceral metastases, and marker levels. For ~90% of patients with good-risk GCTs, four cycles of EP or three cycles of BEP produce durable complete responses, with minimal acute and chronic toxicity, and a low relapse rate. 
Pulmonary toxicity is absent when bleomycin is not used and is rare when therapy is limited to 9 weeks; myelosuppression with neutropenic fever is less frequent; and the treatment mortality rate is negligible. Approximately 75% of intermediate-risk patients and 50% of poor-risk patients achieve durable complete remission with four cycles of BEP, and no regimen has proved superior. POSTCHEMOTHERAPY SURGERY Resection of residual metastases after the completion of chemotherapy is an integral part of therapy. If the initial histology is nonseminoma and the marker values have normalized, all sites of residual disease should be resected. In general, residual retroperitoneal disease requires a modified bilateral RPLND. Thoracotomy (unilateral or bilateral) and neck dissection are less frequently required to remove residual mediastinal, pulmonary parenchymal, or cervical nodal disease. Viable tumor (seminoma, embryonal carcinoma, yolk sac tumor, or choriocarcinoma) will be present in 15%, mature teratoma in 40%, and necrotic debris and fibrosis in 45% of resected specimens. The frequency of teratoma or viable disease is highest in residual mediastinal tumors. If necrotic debris or mature teratoma is present, no further chemotherapy is necessary. If viable tumor is present but is completely excised, two additional cycles of chemotherapy are given. If the initial histology is pure seminoma, mature teratoma is rarely present, and the most frequent finding is necrotic debris. For residual retroperitoneal disease, a complete RPLND is technically difficult due to extensive postchemotherapy fibrosis. Observation is recommended when no radiographic abnormality exists on CT scan. Positive findings on a positron emission tomography (PET) scan correlate with viable seminoma in residua and mandate surgical excision or biopsy. Of patients with advanced GCT, 20–30% fail to achieve a durable complete response to first-line chemotherapy. A combination of vinblastine, ifosfamide, and cisplatin (VeIP) will cure approximately 25% of patients as a second-line therapy. 
Patients are more likely to achieve a durable complete response if they had a testicular primary tumor and relapsed from a prior complete remission to first-line cisplatin-containing chemotherapy. Substitution of paclitaxel for vinblastine (TIP) in this setting was associated with durable remission in nearly two-thirds of patients. In contrast, for patients with a primary mediastinal nonseminoma or those who did not achieve a complete response with first-line chemotherapy, standard-dose VeIP salvage therapy is rarely beneficial. Such patients are usually managed with high-dose chemotherapy and/or surgical resection. Chemotherapy consisting of dose-intensive, high-dose carboplatin plus high-dose etoposide, with peripheral blood stem cell support, induces a complete response in 25–40% of patients who have progressed after ifosfamide-containing salvage chemotherapy. Approximately one-half of the complete responses will be durable. High-dose therapy is standard of care for this patient population and has been suggested as the treatment of choice for all patients with relapsed or refractory disease. Paclitaxel is active when incorporated into high-dose combination programs. Cure is still possible in some relapsed patients. The prognosis and management of patients with extragonadal GCT depend on the tumor histology and site of origin. All patients with a diagnosis of extragonadal GCT should have a testicular ultrasound examination. Nearly all patients with retroperitoneal or mediastinal seminoma achieve a durable complete response to BEP or EP. The clinical features of patients with primary retroperitoneal nonseminoma GCT are similar to those of patients with a primary tumor of testis origin, and careful evaluation will find evidence of a primary testicular GCT in about two-thirds of cases. In contrast, a primary mediastinal nonseminomatous GCT is associated with a poor prognosis; one-third of patients are cured with standard therapy (four cycles of BEP). 
Patients with newly diagnosed mediastinal nonseminoma are considered to have poor-risk disease and should be considered for clinical trials testing regimens of possibly greater efficacy. In addition, mediastinal nonseminoma is associated with hematologic disorders, including acute myelogenous leukemia, myelodysplastic syndrome, and essential thrombocytosis unrelated to previous chemotherapy. These hematologic disorders are very refractory to treatment. Nonseminoma of any primary site may change into other malignant histologies such as embryonal rhabdomyosarcoma or adenocarcinoma. This is called malignant transformation. i(12p) has been identified in the transformed cell type, indicating GCT clonal origin. A group of patients with poorly differentiated tumors of unknown histogenesis, midline in distribution, and not associated with secretion of AFP or hCG has been described; a few (10–20%) are cured by standard cisplatin-containing chemotherapy. An i(12p) is present in ~25% of such tumors (the fraction that are cisplatin-responsive), confirming their origin from primitive germ cells. This finding is also predictive of the response to cisplatin-based chemotherapy and resulting long-term survival. These tumors are heterogeneous; neuroepithelial tumors and lymphoma may also present in this fashion. FERTILITY Infertility is an important consequence of the treatment of GCTs. Preexisting infertility or impaired fertility is often present. Azoospermia and/or oligospermia are present at diagnosis in at least 50% of patients with testicular GCTs. Ejaculatory dysfunction is associated with RPLND, and germ cell damage may result from cisplatin-containing chemotherapy. Nerve-sparing techniques to preserve the retroperitoneal sympathetic nerves have made retrograde ejaculation less likely in the subgroups of patients who are candidates for this operation. Spermatogenesis does recur in some patients after chemotherapy. 
However, because of the significant risk of impaired reproductive capacity, semen analysis and cryopreservation of sperm in a sperm bank should be recommended to all patients before treatment. 

Chapter 117 Gynecologic Malignancies 
Michael V. Seiden 

OVARIAN CANCER 
INCIDENCE AND PATHOLOGY Cancer arising in or near the ovary is actually a collection of diverse malignancies. This collection of malignancies, often referred to as "ovary cancer," is the most lethal gynecologic malignancy in the United States and other countries that routinely screen women for cervical neoplasia. In 2014, it was estimated that there were 21,980 cases of ovarian cancer with 14,270 deaths in the United States. The ovary is a complex and dynamic organ and, between the ages of approximately 11 and 50 years, is responsible for follicle maturation associated with egg maturation, ovulation, and cyclical sex steroid hormone production. These complex and linked biologic functions are coordinated through a variety of cells within the ovary, each of which possesses neoplastic potential. By far the most common and most lethal of the ovarian neoplasms arise from the ovarian epithelium or, alternatively, the neighboring specialized epithelium of the fallopian tube, uterine corpus, or cervix. Epithelial tumors may be benign (50%), malignant (33%), or of borderline malignancy (16%). Age influences risk of malignancy; tumors in younger women are more likely benign. The most common of the ovarian epithelial malignancies are serous tumors (50%); tumors of mucinous (25%), endometrioid (15%), clear cell (5%), and transitional cell histology or Brenner tumor (1%) represent smaller proportions of epithelial ovarian tumors. In contrast, stromal tumors arise from the steroid hormone–producing cells and likewise have different phenotypes and clinical presentations largely dependent on the type and quantity of hormone production. 
Tumors arising in the germ cells are most similar in biology and behavior to testicular tumors in males (Chap. 116). Tumors may also metastasize to the ovary from breast, colon, appendiceal, gastric, and pancreatic primaries. Bilateral ovarian masses from metastatic mucin-secreting gastrointestinal cancers are termed Krukenberg tumors. OVARIAN CANCER OF EPITHELIAL ORIGIN Epidemiology and Pathogenesis A female has approximately a 1 in 72 lifetime risk (~1.4%) of developing ovarian cancer, with the majority of affected women developing epithelial tumors. Each of the histologic variants of epithelial tumors is distinct with unique molecular features. As a group of malignancies, epithelial tumors of the ovary have a peak incidence in women in their sixties, although age at presentation can range across the extremes of adult life, with cases being reported in women in their twenties to nineties. Each histologic subtype of ovarian cancer likely has its own associated risk factors. Serous cancer, the most common type of epithelial ovarian cancer, is seen with increased frequency in women who are nulliparous or have a history of use of talc agents applied to the perineum; other risk factors include obesity and probably hormone replacement therapy. Protective factors include the use of oral contraceptives, multiparity, and breast-feeding. These protective factors are thought to work through suppression of ovulation and perhaps the associated reduction of ovulation-associated inflammation of the ovarian epithelium or, alternatively, the serous epithelium located within the fimbriae of the fallopian tube. Other protective factors, such as fallopian tube ligation, are thought to protect the ovarian epithelium (or perhaps the distal fallopian tube fimbriae) from carcinogens that migrate from the vagina to the tubes and ovarian surface epithelium. 
Mucinous tumors are more frequent in women with a history of cigarette smoking, whereas endometrioid and clear cell tumors are more frequent in women with a history of endometriosis. Considerable evidence now suggests that the precursor cell to serous carcinoma of the ovary might actually arise in the fimbria of the fallopian tube with extension or metastasis to the ovarian surface or capture of preneoplastic or neoplastic exfoliating tubal cells into an involuting ovarian follicle around the time of ovulation. Careful histologic and molecular analysis of tubal epithelium demonstrates molecular and histologic abnormalities, termed serous tubular intraepithelial carcinoma (STIC) lesions, in a high proportion of women undergoing risk-reducing salpingo-oophorectomies in the context of high-risk germline mutations in BRCA1 and BRCA2, as well as a modest proportion of women with ovarian cancer in the absence of such mutations. Genetic Risk Factors A variety of genetic syndromes substantially increase a woman's risk of developing ovarian cancer. Approximately 10% of women with ovarian cancer have a germline mutation in one of two DNA repair genes: BRCA1 (chromosome 17q12-21) or BRCA2 (chromosome 13q12-13). Individuals inheriting a single copy of a mutant allele have a very high incidence of breast and ovarian cancer. Most of these women have a family history that is notable for multiple cases of breast and/or ovarian cancer, although inheritance through male members of the family can camouflage this genotype through several generations. The most common malignancy in these women is breast carcinoma, although women harboring germline BRCA1 mutations have a markedly increased risk of developing ovarian malignancies in their forties and fifties with a 30–50% lifetime risk of developing ovarian cancer. 
Women harboring a mutation in BRCA2 have a lower penetrance of ovarian cancer with perhaps a 20–40% chance of developing this malignancy, with onset typically in their fifties or sixties. Women with a BRCA2 mutation also are at slightly increased risk of pancreatic cancer. Likewise, women with mutations in the DNA mismatch repair genes associated with Lynch syndrome, type 2 (MSH2, MLH1, MSH6, PMS1, PMS2) may have a risk of ovarian cancer as high as 1% per year in their forties and fifties. Finally, a small group of women with familial ovarian cancer may have mutations in other BRCA-associated genes such as RAD51 and CHK2. Screening studies in this select population suggest that current screening techniques, including serial evaluation of the CA-125 tumor marker and ultrasound, are insufficient for detecting early-stage and curable disease, so women with these germline mutations are advised to undergo prophylactic removal of ovaries and fallopian tubes typically after completing childbearing and ideally before age 35–40 years. Early prophylactic oophorectomy also protects these women from subsequent breast cancer with a reduction of breast cancer risk of approximately 50%. Presentation Neoplasms of the ovary tend to be painless unless they undergo torsion. Symptoms are therefore typically related to compression of local organs or due to symptoms from metastatic disease. Women with tumors localized to the ovary do have an increased incidence of symptoms including pelvic discomfort, bloating, and perhaps changes in a woman's typical urinary or bowel pattern. Unfortunately, these symptoms are frequently dismissed by either the woman or her health care team. It is believed that high-grade tumors metastasize early in the neoplastic process. Unlike other epithelial malignancies, these tumors tend to exfoliate throughout the peritoneal cavity and thus present with symptoms associated with disseminated intraperitoneal tumors. 
The most common symptoms at presentation include a multimonth period of progressive complaints that typically include some combination of heartburn, nausea, early satiety, indigestion, constipation, and abdominal pain. Signs include a rapid increase in abdominal girth due to the accumulation of ascites, which typically alerts the patient and her physician that the concurrent gastrointestinal symptoms are likely associated with serious pathology. Radiologic evaluation typically demonstrates a complex adnexal mass and ascites. Laboratory evaluation usually demonstrates a markedly elevated CA-125, a shed mucin (MUC16) associated with, but not specific for, ovarian cancer. Hematogenous and lymphatic spread are seen but are not the typical presentation. Ovarian cancers are divided into four stages, with stage I tumors confined to the ovary, stage II malignancies confined to the pelvis, and stage III tumors confined to the peritoneal cavity (Table 117-1). These three stages are subdivided, with the most common presentation, stage IIIC, defined as tumors with bulky intraperitoneal disease. About 60% of women present with stage IIIC disease. Stage IV disease includes women with parenchymal metastases (liver, lung, spleen) or, alternatively, abdominal wall or pleural disease. The 40% not presenting with stage IIIC disease are roughly evenly distributed among the other stages, although mucinous and clear cell tumors are overrepresented in stage I tumors. Screening Ovarian cancer is the fifth most lethal malignancy in women in the United States. It is curable in early stages, but seldom curable in advanced stages; hence, the development of effective screening strategies is of considerable interest. Furthermore, the ovary is well visualized with a variety of imaging techniques, most notably transvaginal ultrasound. Early-stage tumors often produce proteins that can be measured in the blood such as CA-125 and HE-4. 
Nevertheless, the incidence of ovarian cancer in the middle-aged female population is low, with only approximately 1 in 2000 women between the ages of 50 and 60 carrying an asymptomatic and undetected tumor. Thus effective screening techniques must be sensitive but, more importantly, highly specific to minimize the number of false-positive results. Even a screening test with 98% specificity and 50% sensitivity would have a positive predictive value of only about 1%. A large randomized study of active screening versus usual standard care demonstrated that a screening program consisting of six annual CA-125 measurements and four annual transvaginal ultrasounds in a population of women age 55–74 was not effective at reducing death from ovarian cancer and was associated with significant morbidity in the screened arm due to complications associated with diagnostic testing in the screened group. Although ongoing studies are evaluating the utility of alternative screening strategies, currently screening of normal-risk women is not recommended outside of a clinical trial. In women presenting with a localized ovarian mass, the principal diagnostic and therapeutic maneuver is to determine if the tumor is benign or malignant and, in the event that the tumor is malignant, whether the tumor arises in the ovary or is a site of metastatic disease. Metastatic disease to the ovary can be seen from primary tumors of the colon, appendix, stomach (Krukenberg tumors), and breast. Typically women undergo a unilateral salpingo-oophorectomy, and if pathology reveals a primary ovarian malignancy, then the procedure is followed by a hysterectomy, removal of the remaining tube and ovary, omentectomy, and pelvic node sampling along with some random biopsies of the peritoneal cavity. 
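The positive-predictive-value figure quoted above follows directly from a 2×2 screening table (equivalently, Bayes' rule). A minimal sketch using the numbers in the text (prevalence ~1 in 2000, sensitivity 50%, specificity 98%):

```python
# PPV = true positives / all positives when a screening test is applied
# to a population with the stated disease prevalence.

def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value of a screening test."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

print(ppv(1 / 2000, 0.50, 0.98))  # ~0.012, i.e., about 1% as stated
```

At this prevalence, roughly 80 women would screen positive for every one true cancer detected, which is why the text stresses that specificity, even more than sensitivity, limits screening in average-risk women.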
This extensive surgical procedure is performed because approximately 30% of tumors that by visual inspection appear to be confined to the ovary have already disseminated to the peritoneal cavity and/or surrounding lymph nodes. If there is evidence of bulky intraabdominal disease, a comprehensive attempt at maximal tumor cytoreduction is made even if it involves partial bowel resection, splenectomy, and in certain cases more extensive upper abdominal surgery. The ability to debulk metastatic ovarian cancer to minimal visible disease is associated with an improved prognosis compared with women left with visible disease. Patients without gross residual disease after resection have a median survival of 39 months, compared with 17 months for those left with macroscopic tumor. Once tumors have been surgically debulked, women receive therapy with a platinum agent plus a taxane. Debate continues as to whether this therapy should be delivered intravenously or, alternatively, whether some of the therapy should be delivered directly into the peritoneal cavity via a catheter. Three randomized studies have demonstrated improved survival with intraperitoneal therapy, but this approach is still not widely accepted due to technical challenges associated with this delivery route and increased toxicity. In women who present with bulky intraabdominal disease, an alternative approach is to treat with platinum plus a taxane for several cycles before attempting a surgical debulking procedure (neoadjuvant therapy). Subsequent surgical procedures are more effective at leaving the patient without gross residual tumor and appear to be less morbid. Two studies have demonstrated that the neoadjuvant approach is associated with an overall survival that is comparable to the traditional approach of primary surgery followed by chemotherapy. 
With optimal debulking surgery and platinum-based chemotherapy (usually carboplatin dosed to an area under the curve [AUC] of 6 plus paclitaxel 175 mg/m2 by 3-h infusion in 21-day cycles), 70% of women who present with advanced-stage tumors respond, and 40–50% experience a complete remission with normalization of their CA-125, computed tomography (CT) scans, and physical examination. Unfortunately, only a small proportion of women who obtain a complete response to therapy will remain in remission. Disease recurs within 1–4 years from the completion of their primary therapy in 75% of the complete responders. CA-125 levels often increase as a first sign of relapse; however, data are not clear that early intervention in relapsing patients influences survival. Recurrent disease is effectively managed, but not cured, with a variety of chemotherapeutic agents. Eventually all women with recurrent disease develop chemotherapy-refractory disease, at which point refractory ascites, poor bowel motility, and obstruction or pseudoobstruction due to a tumor-infiltrated aperistaltic bowel are common. Limited surgery to relieve intestinal obstruction, localized radiation therapy to relieve pressure or pain from masses, or palliative chemotherapy may be helpful. Agents with >15% response rates include gemcitabine, topotecan, liposomal doxorubicin, pemetrexed, and bevacizumab. Approximately 10% of ovarian cancers are HER2/neu positive, and trastuzumab may induce responses in this subset. Five-year survival correlates with the stage of disease: stage I, 85–90%; stage II, 70–80%; stage III, 20–50%; and stage IV, 1–5% (Table 117-1). Low-grade serous tumors are molecularly distinct from high-grade serous tumors and are, in general, poorly responsive to chemotherapy. Targeted therapies focused on inhibiting kinases downstream of RAS and BRAF are being tested. Patients with tumors of low malignant potential are managed by surgery; chemotherapy and radiation therapy do not improve survival. 
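The "AUC 6" carboplatin dosing mentioned above is conventionally computed with the Calvert formula, dose (mg) = target AUC × (GFR + 25). The formula is standard pharmacology rather than something spelled out in this text, and the GFR value in the sketch below is a hypothetical example:

```python
# Calvert formula for AUC-based carboplatin dosing (standard pharmacology,
# not stated in this text): total dose in mg = target AUC (mg/mL*min)
# x (GFR + 25), with GFR in mL/min. The GFR below is hypothetical.

def carboplatin_dose_mg(target_auc, gfr_ml_min):
    """Total carboplatin dose (mg) for a given target AUC."""
    return target_auc * (gfr_ml_min + 25)

print(carboplatin_dose_mg(6, 100))  # AUC 6 with GFR 100 mL/min -> 750 mg
```

Unlike body-surface-area dosing (as used for paclitaxel at 175 mg/m2), AUC-based dosing ties the carboplatin dose to renal clearance, since the drug is eliminated largely by glomerular filtration.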
OVARIAN SEX CORD AND STROMAL TUMORS Epidemiology, Presentation, and Predisposing Syndromes Approximately 7% of ovarian neoplasms are stromal or sex cord tumors, with approximately 1800 cases expected each year in the United States. Ovarian stromal tumors or sex cord tumors are most common in women in their fifties or sixties, but tumors can present at the extremes of age, including in the pediatric population. These tumors arise from the mesenchymal components of the ovary, including steroid-producing cells as well as fibroblasts. Essentially all of these tumors are of low malignant potential and present as unilateral solid masses. Three clinical presentations are common: detection of an abdominal mass; abdominal pain due to ovarian torsion, intratumoral hemorrhage, or rupture; and signs and symptoms due to hormonal production by these tumors. The most common hormone-producing tumors include thecomas, granulosa cell tumors, and, in children, juvenile granulosa cell tumors. These estrogen-producing tumors often present with breast tenderness; with isosexual precocious pseudopuberty in children; with menometrorrhagia, oligomenorrhea, or amenorrhea in premenopausal women; or with postmenopausal bleeding in older women. In some women, estrogen-associated secondary malignancies, such as endometrial or breast cancer, may present as synchronous malignancies. Alternatively, endometrial cancer may serve as the presenting malignancy, with subsequent evaluation identifying a unilateral solid ovarian neoplasm that proves to be an occult granulosa cell tumor. Sertoli-Leydig tumors often present with hirsutism, virilization, and occasionally Cushing’s syndrome due to increased production of testosterone, androstenedione, or other 17-ketosteroids. Hormonally inert tumors include fibroma, which presents as a solitary mass, often in association with ascites and occasionally hydrothorax (Meigs’ syndrome). 
A subset of these tumors present in individuals with a variety of inherited disorders that predispose them to mesenchymal neoplasia. Associations include juvenile granulosa cell tumors and perhaps Sertoli-Leydig tumors with Ollier’s disease (multiple enchondromatosis) or Maffucci’s syndrome, ovarian sex cord tumors with annular tubules with Peutz-Jeghers syndrome, and fibromas with Gorlin’s disease. Essentially all granulosa cell tumors and a minority of juvenile granulosa cell tumors and thecomas have a defined somatic point mutation in the FOXL2 gene (C134W), generated by replacement of cytosine with guanine at nucleotide position 402 and resulting in substitution of tryptophan for cysteine at codon 134. About 30% of Sertoli-Leydig tumors harbor a mutation in the RNase IIIb domain of the RNA-processing gene DICER1. The mainstay of treatment for sex cord tumors is surgical resection. Most women present with tumors confined to the ovary. For the small subset of women who present with metastatic disease or develop evidence of tumor recurrence after primary resection, survival is still typically long, often in excess of a decade. Because these tumors are slow growing and relatively refractory to chemotherapy, and because disease is usually peritoneal-based (as with epithelial ovarian cancer), women with metastatic disease are often debulked. Definitive data that surgical debulking of metastatic or recurrent disease prolongs survival are lacking, but ample data document women who have survived years or, in some cases, decades after resection of recurrent disease. In addition, large peritoneal-based metastases have a proclivity for hemorrhage, sometimes with catastrophic complications. Chemotherapy is occasionally effective, and women tend to receive regimens designed to treat epithelial or germ cell tumors. Bevacizumab has some activity in clinical trials but is not approved for this specific indication. These tumors often produce high levels of müllerian inhibiting substance (MIS), inhibin, and, in the case of Sertoli-Leydig tumors, α fetoprotein (AFP). 
These proteins are detectable in serum and can be used as tumor markers to monitor women for recurrent disease, because the rise or fall of these proteins in the serum tends to reflect the changing bulk of systemic tumor. GERM CELL TUMORS Germ cell tumors, like their counterparts in the testis, are cancers of germ cells. These totipotent cells contain the programming for differentiation to essentially all tissue types, and hence the germ cell tumors include a histologic menagerie of bizarre tumors, including benign teratomas and a variety of malignant tumors, such as immature teratomas, dysgerminomas, yolk sac malignancies, and choriocarcinomas. Benign teratoma (or dermoid cyst) is the most common germ cell neoplasm of the ovary and often presents in young women. These tumors include a complex mixture of differentiated tissues, including tissues from all three germ layers. In older women, these differentiated tumors can undergo malignant transformation, most commonly to squamous cell carcinoma. Malignant germ cell tumors include dysgerminomas, yolk sac tumors, immature teratomas, embryonal carcinomas, and choriocarcinomas. There are no known genetic abnormalities that unify these tumors. A subset of dysgerminomas harbor mutations in the c-Kit oncogene (as seen in gastrointestinal stromal tumors [GIST]), whereas a subset of germ cell tumors have isochromosome 12 abnormalities, as seen in testicular malignancies. In addition, a subset of dysgerminomas is associated with dysgenetic ovaries. Identification of a dysgerminoma arising in genotypic XY gonads is important in that it highlights the need to identify and remove the contralateral gonad due to the risk of gonadoblastoma. Presentation Germ cell tumors can present at all ages, but the peak age of presentation tends to be in females in their late teens or early twenties. Typically these tumors become large ovarian masses, which eventually present as palpable low abdominal or pelvic masses. 
As with sex cord tumors, torsion or hemorrhage may cause an urgent or emergent presentation with acute abdominal pain. Some of these tumors produce elevated levels of human chorionic gonadotropin (hCG), which can lead to isosexual precocious puberty when tumors present in younger girls. Unlike epithelial ovarian cancer, these tumors have a higher proclivity for nodal or hematogenous metastases. As with testicular tumors, some of these tumors produce AFP (yolk sac tumors) or hCG (embryonal carcinoma, choriocarcinomas, and some dysgerminomas), which are reliable tumor markers. Germ cell tumors typically present in women who are still of childbearing age, and because bilateral tumors are uncommon (except in dysgerminoma, 10–15%), the typical treatment is unilateral oophorectomy or salpingo-oophorectomy. Because nodal metastases to pelvic and para-aortic nodes are common and may affect treatment choices, these nodes should be carefully inspected and, if enlarged, should be resected if possible. Women with malignant germ cell tumors typically receive bleomycin, etoposide, and cisplatin (BEP) chemotherapy. In the majority of women, even those with advanced-stage disease, cure is expected. Close follow-up without adjuvant therapy of women with stage I tumors is reasonable if there is high confidence that the patient and health care team are committed to compulsive and careful follow-up, because chemotherapy at the time of tumor recurrence is likely to be curative. Dysgerminoma is the ovarian counterpart of testicular seminoma. The 5-year disease-free survival is 100% in early-stage patients and 61% in stage III disease. Although the tumor is highly radiation-sensitive, radiation produces infertility in many patients. BEP chemotherapy is as effective, or more so, without causing infertility. The use of BEP following incomplete resection is associated with a 2-year disease-free survival rate of 95%. This chemotherapy is now the treatment of choice for dysgerminoma. 
FALLOPIAN TUBE CANCER Transport of the egg to the uterus occurs via transit through the fallopian tube, with the distal ends of these tubes composed of fimbriae that drape about the ovarian surface and capture the egg as it erupts from the ovarian cortex. Fallopian tube malignancies are typically serous tumors. Previous teaching was that these malignancies were rare, but more careful histologic examination suggests that many “ovarian malignancies” might actually arise in the distal fimbria of the fallopian tube (see above). These women often present with adnexal masses, and like ovarian cancer, these tumors spread relatively early throughout the peritoneal cavity, respond to platinum and taxane therapy, and have a natural history that is essentially identical to that of ovarian cancer (Table 117-1). CERVICAL CANCER Cervical cancer is the second most common and most lethal malignancy in women worldwide, likely due to the widespread infection with high-risk strains of human papillomavirus (HPV) and limited utilization of, or access to, Pap smear screening in many nations of the world. Nearly 500,000 cases of cervical cancer are expected worldwide, with approximately 240,000 deaths annually. Cancer incidence is particularly high in women residing in Central and South America, the Caribbean, and southern and eastern Africa. The mortality rate is disproportionately high in Africa. In the United States, 12,360 women were diagnosed with cervical cancer and 4020 women died of the disease in 2014. Developed countries have evaluated high-technology screening techniques for HPV involving automated polymerase chain reaction of thin-prep samples to identify dysplastic cytology as well as high-risk HPV genetic material. Visual inspection of the cervix coated with acetic acid has demonstrated the ability to reduce mortality from cervical cancer, with potential broad applicability in low-resource environments. 
The development of effective vaccines for high-risk HPV types makes it imperative to determine economical, socially acceptable, and logistically feasible strategies to deliver and distribute this vaccine to girls and boys before their engagement in sexual activity. HPV infection is the primary neoplasia-initiating event in the vast majority of women with invasive cervical cancer. This double-stranded DNA virus infects epithelium near the transformation zone of the cervix. More than 60 types of HPV are known, with approximately 20 types having the ability to generate high-grade dysplasia and malignancy. HPV-16 and -18 are the types most frequently associated with high-grade dysplasia and are targeted by both U.S. Food and Drug Administration–approved vaccines. The large majority of sexually active adults are exposed to HPV, and most women clear the infection without specific intervention. The 8-kilobase HPV genome encodes seven early genes, most notably E6 and E7, which can bind p53 and RB, respectively. High-risk types of HPV encode E6 and E7 molecules that are particularly effective at inhibiting the normal cell cycle checkpoint functions of these regulatory proteins, leading to immortalization but not full transformation of cervical epithelium. A minority of women will fail to clear the infection, with subsequent HPV integration into the host genome. Over a period as short as months but more typically years, some of these women develop high-grade dysplasia. The time from dysplasia to carcinoma is likely years to more than a decade and almost certainly requires the acquisition of other, poorly defined genetic mutations within the infected and immortalized epithelium. Risk factors for HPV infection and, in particular, dysplasia include a high number of sexual partners, early age of first intercourse, and a history of venereal disease. Smoking is a cofactor; heavy smokers have a higher risk of dysplasia with HPV infection. 
HIV infection, especially when associated with low CD4+ T cell counts, is associated with a higher rate of high-grade dysplasia and likely a shorter latency period between infection and invasive disease. The administration of highly active antiretroviral therapy reduces the risk of high-grade dysplasia associated with HPV infection. Currently approved vaccines consist of recombinant forms of the L1 major capsid (late) protein of HPV-16 and -18. Vaccination of women before the initiation of sexual activity dramatically reduces the rate of HPV-16 and -18 infection and subsequent dysplasia. There is also partial protection against other HPV types, although vaccinated women are still at risk for HPV infection and still require standard Pap smear screening. Although no randomized trial data demonstrate the utility of Pap smears, the dramatic drop in cervical cancer incidence and death in developed countries employing wide-scale screening provides strong evidence for its effectiveness. In addition, even visual inspection of the cervix with application of acetic acid using a “see and treat” strategy has demonstrated a 30% reduction in cervical cancer death. The incorporation of HPV testing by polymerase chain reaction or other molecular techniques increases the sensitivity of detecting cervical pathology, but at the cost of identifying many women with transient infections who require no specific medical intervention. The majority of cervical malignancies are squamous cell carcinomas associated with HPV. Adenocarcinomas are also HPV-related but arise deep in the endocervical canal; they are typically not seen by visual inspection of the cervix and thus are often missed by Pap smear screening. A variety of rarer malignancies, including atypical epithelial tumors, carcinoids, small cell carcinomas, sarcomas, and lymphomas, have also been reported. 
The principal role of Pap smear testing is the detection of asymptomatic preinvasive cervical dysplasia of the squamous epithelial lining. Invasive carcinomas often produce symptoms or signs, including postcoital spotting, intermenstrual bleeding, or menometrorrhagia. Foul-smelling or persistent yellow discharge may also be seen. Presentations that include pelvic or sacral pain suggest lateral extension of the tumor into the pelvic nerve plexus by either the primary tumor or a pelvic node and are signs of advanced-stage disease. Likewise, flank pain from hydronephrosis due to ureteral compression or deep venous thrombosis from iliac vessel compression suggests either extensive nodal disease or direct extension of the primary tumor to the pelvic sidewall. The most common finding on physical examination is a visible tumor on the cervix. Scans are not part of the formal clinical staging of cervical cancer yet are very useful in planning appropriate therapy. CT can detect hydronephrosis indicative of pelvic sidewall disease but is not accurate at evaluating other pelvic structures. Magnetic resonance imaging (MRI) is more accurate at estimating uterine extension and paracervical extension of disease into soft tissues typically bordered by the broad and cardinal ligaments that support the uterus in the central pelvis. Positron emission tomography (PET) scan is the most accurate technique for evaluating the pelvis and, more importantly, nodal (pelvic, para-aortic, and scalene) sites of disease. This technique seems more prognostic and accurate than CT, MRI, or lymphangiogram, especially in the para-aortic region. FIGURE 117-1 Anatomic display of the stages of cervix cancer defined by location, extent of tumor, frequency of presentation, and 5-year survival. 
Stage I cervical tumors are confined to the cervix, whereas stage II tumors extend into the upper vagina or paracervical soft tissue (Fig. 117-1). Stage III tumors extend to the lower vagina or the pelvic sidewalls, whereas stage IV tumors invade the bladder or rectum or have spread to distant sites. Very small stage I cervical tumors can be treated with a variety of surgical procedures. In young women desiring to maintain fertility, radical trachelectomy removes the cervix with subsequent anastomosis of the upper vagina to the uterine corpus. Larger cervical tumors confined to the cervix can be treated with either surgical resection or radiation therapy in combination with cisplatin-based chemotherapy, with a high chance of cure. Larger tumors that extend regionally down the vagina or into the paracervical soft tissues or the pelvic sidewalls are treated with combination chemotherapy and radiation therapy. The treatment of recurrent or metastatic disease is unsatisfactory due to the relative resistance of these tumors to chemotherapy and currently available biologic agents, although bevacizumab, a monoclonal antibody directed against vascular endothelial growth factor (VEGF) that inhibits tumor-associated angiogenesis, has demonstrated clinically meaningful activity in the management of metastatic disease. UTERINE CANCER Several different tumor types arise in the uterine corpus. Most tumors arise in the glandular lining and are endometrial adenocarcinomas. Tumors can also arise in the smooth muscle; most are benign (uterine leiomyoma), with a small minority being sarcomas. The endometrioid histologic subtype of endometrial cancer is the most common gynecologic malignancy in the United States. In 2014, an estimated 52,630 women were diagnosed with cancer of the uterine corpus, with 8590 deaths from the disease. Development of these tumors is a multistep process, with estrogen playing an important early role in driving endometrial gland proliferation. 
Relative overexposure to this class of hormones is a risk factor for the subsequent development of endometrioid tumors. In contrast, progestins drive glandular maturation and are protective. Hence, women with high endogenous or pharmacologic exposure to estrogens, especially if unopposed by progesterone, are at high risk for endometrial cancer. Obese women, women treated with unopposed estrogens, and women with estrogen-producing tumors (such as granulosa cell tumors of the ovary) are at higher risk for endometrial cancer. In addition, treatment with tamoxifen, which has antiestrogenic effects in breast tissue but estrogenic effects in uterine epithelium, is associated with an increased risk of endometrial cancer. Events such as loss of the PTEN tumor-suppressor gene, with activation of, and often additional mutations in, the PIK3CA/AKT pathway, likely serve as secondary events in carcinogenesis. The Cancer Genome Atlas Research Network has demonstrated that endometrioid tumors can be divided into four subgroups: ultramutated, microsatellite instability hypermutated, copy-number low, and copy-number high. These groups have different natural histories; therapy for these subgroups may eventually be individualized. Serous tumors of the uterine corpus represent approximately 5–10% of epithelial tumors of the uterine corpus and possess distinct molecular characteristics that are most similar to those seen in serous tumors arising in the ovary or fallopian tube. Women with a mutation in one of a series of DNA mismatch repair genes associated with the Lynch syndrome, also known as hereditary nonpolyposis colon cancer (HNPCC), are at increased risk for endometrioid endometrial carcinoma. These individuals have germline mutations in MSH2, MLH1, and, in rare cases, PMS1 and PMS2, with resulting microsatellite instability and hypermutation. 
Individuals who carry these mutations typically have a family history of cancer and are at markedly increased risk for colon cancer and modestly increased risk for ovarian cancer and a variety of other tumors. Middle-aged women with HNPCC carry a 4% annual risk of endometrial cancer and an overall relative risk approximately 200-fold that of age-matched women without HNPCC. The majority of women with tumors of the uterine corpus present with postmenopausal vaginal bleeding due to shedding of the malignant endometrial lining. Premenopausal women often present with atypical bleeding between typical menstrual cycles. These signs typically bring a woman to the attention of a health care professional, and hence the majority of women present with early-stage disease, with the tumor confined to the uterine corpus. Diagnosis is typically established by endometrial biopsy. Epithelial tumors may spread to pelvic or para-aortic lymph nodes. Pulmonary metastases can appear later in the natural history of this disease but are very uncommon at initial presentation. Serous tumors tend to have patterns of spread much more reminiscent of ovarian cancer, with many patients presenting with disseminated peritoneal disease and sometimes ascites. Some women with uterine sarcomas present with pelvic pain. Nodal metastases are uncommon with sarcomas, which are more likely to present with either intraabdominal disease or pulmonary metastases. Most women with endometrial cancer have disease that is localized to the uterus (75% are stage I, Table 117-1), and definitive treatment typically involves a hysterectomy with removal of the ovaries and fallopian tubes. The resection of lymph nodes does not improve outcome but does provide prognostic information. Node involvement defines stage III disease, which is present in 13% of patients. 
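The approximately 4% annual endometrial cancer risk quoted above for women with HNPCC compounds over time. A small sketch of the implied cumulative risk, under the simplifying assumption of a constant annual hazard (the chapter itself does not state cumulative figures):

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Cumulative probability of at least one event, assuming a
    constant, independent annual risk (a deliberate simplification)."""
    return 1.0 - (1.0 - annual_risk) ** years

# With the ~4% annual risk quoted for middle-aged HNPCC carriers:
for years in (5, 10, 20):
    print(years, round(cumulative_risk(0.04, years), 3))
```

Under this assumption, a 4% annual risk accumulates to roughly 18% at 5 years and about 34% at 10 years, which is consistent with the markedly elevated lifetime risk attributed to these mutation carriers.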
Tumor grade and depth of invasion are the two key prognostic variables in early-stage tumors, and women with low-grade and/or minimally invasive tumors are typically observed after definitive surgical therapy. Patients with high-grade tumors or tumors that are deeply invasive (stage IB, 13%) are at higher risk for pelvic recurrence or recurrence at the vaginal cuff, which is typically prevented by vaginal vault brachytherapy. Women with regional metastases or metastatic disease (3% of patients) with low-grade tumors can be treated with progesterone. Poorly differentiated tumors are typically resistant to hormonal manipulation and thus are treated with chemotherapy. The role of chemotherapy in the adjuvant setting is currently under investigation. Chemotherapy for metastatic disease is delivered with palliative intent. Drugs that effectively target and inhibit signaling of the AKT-mTOR pathway are currently under investigation. Five-year survival is 89% for stage I, 73% for stage II, 52% for stage III, and 17% for stage IV disease (Table 117-1). GESTATIONAL TROPHOBLASTIC DISEASE Gestational trophoblastic diseases represent a spectrum of neoplasia from benign hydatidiform mole to choriocarcinoma due to persistent trophoblastic disease, associated most commonly with molar pregnancy but occasionally seen after normal gestation. The most common presentations of trophoblastic tumors are partial and complete molar pregnancies. These represent approximately 1 in 1500 conceptions in developed Western countries. The incidence varies widely worldwide, with areas in Southeast Asia having a much higher incidence of molar pregnancy. Regions with high molar pregnancy rates are often associated with diets low in carotene and animal fats. Trophoblastic tumors result from the outgrowth or persistence of placental tissue. They arise most commonly in the uterus but can also arise in other sites, such as the fallopian tubes after ectopic pregnancy. 
Risk factors include poorly defined dietary and environmental factors as well as conception at the extremes of reproductive age, with the incidence particularly high in females conceiving younger than age 16 or older than age 50. In older women, the incidence of molar pregnancy may be as high as one in three, likely due to the increased risk of abnormal fertilization of the aged ova. Most trophoblastic neoplasms are associated with complete moles, diploid tumors with all genetic material from the paternal donor (known as paternal disomy). This is thought to occur when a single sperm fertilizes an enucleate egg that subsequently duplicates the paternal DNA. Trophoblastic proliferation occurs with exuberant villous stroma. If pseudopregnancy extends past the 12th week, fluid progressively accumulates within the stroma, leading to “hydropic changes.” There is no fetal development in complete moles. Partial moles arise from the fertilization of an egg by two sperm; hence, two-thirds of the genetic material is paternal in these triploid tumors. Hydropic changes are less dramatic, and fetal development can often occur through the late first trimester or early second trimester, at which point spontaneous abortion is common. Laboratory findings include excessively high hCG and high AFP. The risk of persistent gestational trophoblastic disease after partial mole is approximately 5%. Complete and partial moles can be noninvasive or invasive. Myometrial invasion occurs in no more than one in six complete moles and a lower proportion of partial moles. The clinical presentation of molar pregnancy is changing in developed countries due to the early detection of pregnancy with home pregnancy kits and the very early use of Doppler and ultrasound to evaluate the early fetus and uterine cavity for evidence of a viable fetus. 
Thus, in these countries, the majority of women presenting with trophoblastic disease have their moles detected early and have typical symptoms of early pregnancy, including nausea, amenorrhea, and breast tenderness. With uterine evacuation of early complete and partial moles, most women experience spontaneous remission of their disease, as monitored by serial hCG levels. These women require no chemotherapy. Patients with persistent elevation of hCG or rising hCG after evacuation have persistent or actively growing gestational trophoblastic disease and require therapy. Most series suggest that between 15 and 25% of women will have evidence of persistent gestational trophoblastic disease after molar evacuation. In women who lack access to prenatal care, presenting symptoms can be life threatening, including the development of preeclampsia or even eclampsia. Hyperthyroidism can also be seen. Evacuation of large moles can be associated with life-threatening complications, including uterine perforation, volume loss, high-output cardiac failure, and adult respiratory distress syndrome (ARDS). For women with evidence of rising hCG or radiologic confirmation of metastatic or persistent regional disease, prognosis can be estimated through a variety of scoring algorithms that identify women at low, intermediate, and high risk for requiring multiagent chemotherapy. In general, women with widely metastatic nonpulmonary disease, very elevated hCG, and an antecedent term pregnancy are considered at high risk and typically require multiagent chemotherapy for cure. The management of a persistent or rising hCG after evacuation of a molar conception is typically chemotherapy, although surgery can play an important role for disease that is persistently isolated in the uterus (especially if childbearing is complete) or to control hemorrhage. For women wishing to maintain fertility or with metastatic disease, the preferred treatment is chemotherapy. 
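The high-risk features named above can be caricatured as a few lines of triage logic. This is purely illustrative: the numeric threshold for a "very elevated" hCG is an assumed value, not taken from the chapter, and real practice relies on formal multivariable scoring algorithms rather than any such sketch.

```python
def high_risk_gtd(nonpulmonary_metastases: bool,
                  hcg_miu_per_ml: float,
                  antecedent_term_pregnancy: bool,
                  very_elevated_hcg_threshold: float = 100_000) -> bool:
    """Toy stratifier using the high-risk features named in the text.

    The hCG cutoff (100,000 mIU/mL) is an assumption for illustration
    only; formal low/intermediate/high-risk scoring systems weigh many
    more variables and are the basis for actual treatment decisions.
    """
    risk_features = [
        nonpulmonary_metastases,                       # widely metastatic nonpulmonary disease
        hcg_miu_per_ml > very_elevated_hcg_threshold,  # very elevated hCG
        antecedent_term_pregnancy,                     # antecedent term (rather than molar) pregnancy
    ]
    return any(risk_features)

# A patient with nonpulmonary metastases is flagged high risk regardless of hCG:
print(high_risk_gtd(True, 50_000, False))   # True
print(high_risk_gtd(False, 20_000, False))  # False
```

Flagging on any single feature mirrors the text's "in general" framing, in which each of these findings by itself marks a woman as likely to require multiagent chemotherapy.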
Chemotherapy is guided by the hCG level, which typically drops to undetectable levels with effective therapy. Single-agent treatment with methotrexate or dactinomycin cures 90% of women with low-risk disease. Patients with high-risk disease (high hCG levels, presentation 4 or more months after pregnancy, brain or liver metastases, failure of methotrexate therapy) are typically treated with multiagent chemotherapy (e.g., etoposide, methotrexate, and dactinomycin alternating with cyclophosphamide and vincristine [EMA-CO]), which is typically curative even in women with extensive metastatic disease. Cisplatin, bleomycin, and either etoposide or vinblastine are also active combinations. Survival in high-risk disease exceeds 80%. Cured women may become pregnant again without evidence of increased fetal or maternal complications. Chapter 118 Primary and Metastatic Tumors of the Nervous System Lisa M. DeAngelis, Patrick Y. Wen Primary brain tumors are diagnosed in approximately 52,000 people each year in the United States. At least one-half of these tumors are malignant and associated with a high mortality. Glial tumors account for about 30% of all primary brain tumors, and 80% of those are malignant. Meningiomas account for 35%, vestibular schwannomas 10%, and central nervous system (CNS) lymphomas about 2%. Brain metastases are three times more common than all primary brain tumors combined and are diagnosed in approximately 150,000 people each year. Metastases to the leptomeninges and epidural space of the spinal cord each occur in approximately 3–5% of patients with systemic cancer and are also a major cause of neurologic disability. APPROACH TO THE PATIENT: Primary and Metastatic Tumors of the Nervous System Brain tumors of any type can present with a variety of symptoms and signs that fall into two categories: general and focal; patients often have a combination of the two (Table 118-1). 
General or nonspecific symptoms include headache, with or without nausea or vomiting, cognitive difficulties, personality change, and gait disorder. Generalized symptoms arise when the enlarging tumor and its surrounding edema cause an increase in intracranial pressure or direct compression of cerebrospinal fluid (CSF) circulation, leading to hydrocephalus. The classic headache associated with a brain tumor is most evident in the morning and improves during the day, but this pattern is actually seen in a minority of patients. Headaches are often holocephalic but can be ipsilateral to the side of the tumor. Occasionally, headaches have features of a typical migraine, with unilateral throbbing pain associated with visual scotoma. Personality changes may include apathy and withdrawal from social circumstances, mimicking depression. Focal or lateralizing findings include hemiparesis, aphasia, or visual field defect. Lateralizing symptoms are typically subacute and progressive. A visual field defect is often unnoticed by the patient; its presence may be revealed only after it leads to an injury such as an automobile accident occurring in the blind visual field. Language difficulties may be mistaken for confusion. Seizures are a common presentation of brain tumors, occurring in about 25% of patients with brain metastases or malignant gliomas, and can be the presenting symptom in up to 90% of patients with a low-grade glioma. All seizures that arise from a brain tumor will have a focal onset, whether or not it is apparent clinically. Cranial magnetic resonance imaging (MRI) is the preferred diagnostic test for any patient suspected of having a brain tumor and should be performed with gadolinium contrast administration. Computed tomography (CT) scan should be reserved for those patients unable to undergo MRI (e.g., those with a pacemaker). 
Malignant brain tumors—whether primary or metastatic—typically enhance with gadolinium and may have central areas of necrosis; they are characteristically surrounded by edema of the neighboring white matter. Low-grade gliomas usually do not enhance with gadolinium and are best appreciated on fluid-attenuated inversion recovery (FLAIR) MRIs. Meningiomas have a characteristic appearance on MRI because they are dural-based with a dural tail and compress but do not invade the brain. Dural metastases or a dural lymphoma can have a similar appearance. Imaging is characteristic for many primary and metastatic tumors and sometimes will suffice to establish a diagnosis when the location precludes surgical intervention (e.g., brainstem glioma). Functional MRI is useful in presurgical planning to define eloquent sensory, motor, or language cortex. Positron emission tomography (PET) is useful in determining the metabolic activity of the lesions seen on MRI; MR perfusion and spectroscopy can provide information on blood flow or tissue composition. These techniques may help distinguish tumor progression from necrotic tissue as a consequence of treatment with radiation and chemotherapy or identify foci of high-grade tumor in an otherwise low-grade-appearing glioma. Neuroimaging is the only test necessary to diagnose a brain tumor. Laboratory tests are rarely useful, although patients with metastatic disease may have elevation of a tumor marker in their serum that reflects the presence of brain metastases (e.g., β human chorionic gonadotropin [β-hCG] from testicular cancer). Additional testing such as cerebral angiogram, electroencephalogram (EEG), or lumbar puncture is rarely indicated or helpful. Therapy of any intracranial malignancy requires both symptomatic and definitive treatments. Definitive treatment is based on the specific tumor type and includes surgery, radiotherapy, and chemotherapy. However, symptomatic treatments apply to brain tumors of any type. 
Most high-grade malignancies are accompanied by substantial surrounding edema, which contributes to neurologic disability and raised intracranial pressure. Glucocorticoids are highly effective at reducing perilesional edema and improving neurologic function, often within hours of administration. Dexamethasone has been the glucocorticoid of choice because of its relatively low mineralocorticoid activity. Initial doses are typically 12–16 mg/d in divided doses given orally or IV (both are equivalent). Although glucocorticoids rapidly ameliorate symptoms and signs, their long-term use causes substantial toxicity including insomnia, weight gain, diabetes mellitus, steroid myopathy, and personality changes. Consequently, a taper is indicated as definitive treatment is administered and the patient improves. Patients with brain tumors who present with seizures require antiepileptic drug therapy. There is no role for prophylactic antiepileptic drugs in patients who have not had a seizure. The agents of choice are those drugs that do not induce the hepatic microsomal enzyme system. These include levetiracetam, topiramate, lamotrigine, valproic acid, and lacosamide (Chap. 445). Other drugs, such as phenytoin and carbamazepine, are used less frequently because they are potent enzyme inducers that can interfere with both glucocorticoid metabolism and the metabolism of chemotherapeutic agents needed to treat the underlying systemic malignancy or the primary brain tumor. Venous thromboembolic disease occurs in 20–30% of patients with high-grade gliomas and brain metastases. Therefore, prophylactic anticoagulants should be used during hospitalization and in nonambulatory patients. Those who have had either a deep vein thrombosis or pulmonary embolus can receive therapeutic doses of anticoagulation safely and without increasing the risk for hemorrhage into the tumor. 
Inferior vena cava filters are reserved for patients with absolute contraindications to anticoagulation such as recent craniotomy. No underlying cause has been identified for the majority of primary brain tumors. The only established risk factors are exposure to ionizing radiation (meningiomas, gliomas, and schwannomas) and immunosuppression (primary CNS lymphoma). Proposed associations with exposure to electromagnetic fields (including cellular telephones), head injury, foods containing N-nitroso compounds, and occupational risk factors remain unproven. A small minority of patients have a family history of brain tumors. Some of these familial cases are associated with genetic syndromes (Table 118-2). As with other neoplasms, brain tumors arise as a result of a multistep process driven by the sequential acquisition of genetic alterations. These include loss of tumor-suppressor genes (e.g., p53 and phosphatase and tensin homolog on chromosome 10 [PTEN]) and amplification and overexpression of protooncogenes such as the epidermal growth factor receptor (EGFR) and the platelet-derived growth factor receptors (PDGFR). The accumulation of these genetic abnormalities results in uncontrolled cell growth and tumor formation. Important progress has been made in understanding the molecular pathogenesis of several types of brain tumors, including glioblastoma and medulloblastoma. Morphologically indistinguishable glioblastomas can be separated into four subtypes defined by molecular profiling: (1) classical, characterized by overactivation of the EGFR pathway; (2) proneural, characterized by overexpression of PDGFRA, mutations of the isocitrate dehydrogenase (IDH) 1 and 2 genes, and expression of neural markers; (3) mesenchymal, defined by expression of mesenchymal markers and loss of NF1; and (4) neural, characterized by overactivity of EGFR and expression of neural markers. The clinical implications of these subtypes are under study. 
Medulloblastoma is the other primary brain tumor that has been highly analyzed, and four molecular subtypes have also been identified: (1) the Wnt subtype is defined by a mutation in β-catenin and has an excellent prognosis; (2) the SHH subtype has mutations in PTCH1, SMO, GLI2, or SUFU and has an intermediate prognosis; (3) group 3 has elevated MYC expression and has the worst prognosis; and (4) group 4 is characterized by isochromosome 17q. Targeted therapeutics are under development for some of the medulloblastoma subtypes, especially the SHH group.
Astrocytomas are infiltrative tumors with a presumptive glial cell of origin. The World Health Organization (WHO) classifies astrocytomas into four prognostic grades based on histologic features: grade I (pilocytic astrocytoma, subependymal giant cell astrocytoma); grade II (diffuse astrocytoma); grade III (anaplastic astrocytoma); and grade IV (glioblastoma). Grades I and II are considered low-grade astrocytomas, and grades III and IV are considered high-grade astrocytomas.
Grade I Astrocytomas  Pilocytic astrocytomas (grade I) are the most common tumor of childhood. They occur typically in the cerebellum but may also be found elsewhere in the neuraxis, including the optic nerves and brainstem. Frequently they appear as cystic lesions with an enhancing mural nodule.
Primary and Metastatic Tumors of the Nervous System
Table 118-2 footnote: (a) Various DNA mismatch repair gene mutations may cause a similar clinical phenotype, also referred to as Turcot syndrome, in which there is a predisposition to nonpolyposis colon cancer and brain tumors. Abbreviations: AD, autosomal dominant; APC, adenomatous polyposis coli; AR, autosomal recessive; ch, chromosome; PTEN, phosphatase and tensin homologue; TSC, tuberous sclerosis complex.
FIGURE 118-1 Fluid-attenuated inversion recovery (FLAIR) MRI of a left frontal low-grade astrocytoma. This lesion did not enhance.
FIGURE 118-2 Postgadolinium T1 MRI of a large cystic left frontal glioblastoma. 
These are well-demarcated lesions that are potentially curable if they can be resected completely. Giant-cell subependymal astrocytomas are usually found in the ventricular wall of patients with tuberous sclerosis. They often do not require intervention but can be treated surgically or with inhibitors of the mammalian target of rapamycin (mTOR).
Grade II Astrocytomas  These are infiltrative tumors that usually present with seizures in young adults. They appear as nonenhancing tumors with increased T2/FLAIR signal (Fig. 118-1). If feasible, patients should undergo maximal surgical resection, although complete resection is rarely possible because of the invasive nature of the tumor. Radiation therapy (RT) is helpful, but there is no difference in overall survival between RT administered postoperatively and RT delayed until the time of tumor progression. There is increasing evidence that chemotherapeutic agents such as temozolomide, an oral alkylating agent, can be helpful in some patients. The tumor transforms to a malignant astrocytoma in the majority of patients, leading to variable survival with a median of about 5 years.
Grade III (Anaplastic) Astrocytomas  These account for approximately 15–20% of high-grade astrocytomas. They generally present in the fourth and fifth decades of life as variably enhancing tumors. Treatment is the same as for glioblastoma, consisting of maximal safe surgical resection followed by RT with concurrent and adjuvant temozolomide or by RT and adjuvant temozolomide alone.
Grade IV Astrocytoma (Glioblastoma)  Glioblastoma accounts for the majority of high-grade astrocytomas. Glioblastomas are the most common malignant primary brain tumor, with over 10,000 cases diagnosed each year in the United States. Patients usually present in the sixth and seventh decades of life with headache, seizures, or focal neurologic deficits. The tumors appear as ring-enhancing masses with central necrosis and surrounding edema (Fig. 118-2). 
These are highly infiltrative tumors, and the areas of increased T2/FLAIR signal surrounding the main tumor mass contain invading tumor cells. Treatment involves maximal surgical resection followed by partial-field external-beam RT (6000 cGy in thirty 200-cGy fractions) with concomitant temozolomide, followed by 6–12 months of adjuvant temozolomide. With this regimen, median survival is increased to 14.6 months compared to only 12 months with RT alone, and 2-year survival is increased to 27%, compared to 10% with RT alone. Patients whose tumor contains the DNA repair enzyme O6-methylguanine-DNA methyltransferase (MGMT) are relatively resistant to temozolomide and have a worse prognosis compared to those whose tumors contain low levels of MGMT as a result of silencing of the MGMT gene by promoter hypermethylation. Implantation of biodegradable polymers containing the chemotherapeutic agent carmustine into the tumor bed after resection of the tumor also produces a modest improvement in survival. Despite optimal therapy, glioblastomas invariably recur. Treatment options for recurrent disease may include reoperation, carmustine wafers, and alternate chemotherapeutic regimens. Reirradiation is rarely helpful. Bevacizumab, a humanized vascular endothelial growth factor (VEGF) monoclonal antibody, has activity in recurrent glioblastoma, increasing progression-free survival and reducing peritumoral edema and glucocorticoid use (Fig. 118-3). Treatment decisions for patients with recurrent glioblastoma must be made on an individual basis, taking into consideration such factors as previous therapy, time to relapse, performance status, and quality of life. Whenever feasible, patients with recurrent disease should be enrolled in clinical trials. 
Novel therapies undergoing evaluation in patients with glioblastoma include targeted molecular agents directed at receptor tyrosine kinases and signal transduction pathways; antiangiogenic agents, especially those directed at the VEGF receptors; chemotherapeutic agents that cross the blood-brain barrier more effectively than currently available drugs; gene therapy; immunotherapy; and infusion of radiolabeled drugs and targeted toxins into the tumor and surrounding brain by means of convection-enhanced delivery. The most important adverse prognostic factors in patients with high-grade astrocytomas are older age, histologic features of glioblastoma, poor Karnofsky performance status, and unresectable tumor. Patients whose tumor contains an unmethylated MGMT promoter, resulting in the presence of the repair enzyme in tumor cells and resistance to temozolomide, also have a worse prognosis.
Gliomatosis Cerebri  Rarely, patients may present with a highly infiltrating, nonenhancing tumor of variable histologic grade involving more than two lobes of the brain. These tumors may be indolent initially, but will eventually behave aggressively and have a poor outcome. Treatment involves RT and temozolomide chemotherapy.
Oligodendrogliomas account for approximately 15–20% of gliomas. They are classified by the WHO into well-differentiated oligodendrogliomas (grade II) or anaplastic oligodendrogliomas (AOs) (grade III). Tumors with oligodendroglial components have distinctive pathologic features such as perinuclear clearing—giving rise to a “fried-egg” appearance—and a reticular pattern of blood vessel growth. Some tumors have both an oligodendroglial as well as an astrocytic component.
FIGURE 118-3 Postgadolinium T1 MRI of a recurrent glioblastoma before (A) and after (B) administration of bevacizumab. Note the decreased enhancement and mass effect. 
These mixed tumors, or oligoastrocytomas (OAs), are also classified into well-differentiated OA (grade II) or anaplastic oligoastrocytomas (AOAs) (grade III). Grade II oligodendrogliomas and OAs are generally more responsive to therapy and have a better prognosis than pure astrocytic tumors. These tumors present similarly to grade II astrocytomas in young adults. The tumors are nonenhancing and often partially calcified. They should be treated with surgery and, if necessary, RT and chemotherapy. Patients with oligodendrogliomas have a median survival in excess of 10 years. AOs and AOAs present in the fourth and fifth decades as variably enhancing tumors. They are more responsive to therapy than grade III astrocytomas. Co-deletion of chromosomes 1p and 19q, mediated by an unbalanced translocation of 19p to 1q, occurs in 61–89% of patients with AO and 14–20% of patients with AOA. Tumors with the 1p and 19q co-deletion are particularly sensitive to chemotherapy with procarbazine, lomustine (cyclohexylchloroethylnitrosourea [CCNU]), and vincristine (PCV) or temozolomide, as well as to RT. Median survival of patients with AO or AOA is approximately 3–6 years, but those with co-deleted tumors can have a median survival of 10–14 years if treated with RT and chemotherapy. Ependymomas are tumors derived from ependymal cells that line the ventricular surface. They account for approximately 5% of childhood tumors and frequently arise from the wall of the fourth ventricle in the posterior fossa. Although adults can have intracranial ependymomas, they occur more commonly in the spine, especially in the filum terminale of the spinal cord where they have a myxopapillary histology. Ependymomas that can be completely resected are potentially curable. Partially resected ependymomas will recur and require irradiation. The less common anaplastic ependymoma is more aggressive and is treated with resection and RT; chemotherapy has limited efficacy. 
Subependymomas are slow-growing benign lesions arising in the wall of ventricles that often do not require treatment. Gangliogliomas and pleomorphic xanthoastrocytomas occur in young adults. They behave as more indolent forms of grade II gliomas and are treated in the same way. Brainstem gliomas usually occur in children or young adults. Despite treatment with RT and chemotherapy, the prognosis is poor, with a median survival of only 1 year. Gliosarcomas contain both an astrocytic as well as a sarcomatous component and are treated in the same way as glioblastomas. Primary central nervous system lymphoma (PCNSL) is a rare non-Hodgkin lymphoma accounting for less than 3% of primary brain tumors. For unclear reasons, its incidence is increasing, particularly in immunocompetent individuals. PCNSL in immunocompetent patients usually consists of a diffuse large B cell lymphoma. PCNSL may also occur in immunocompromised patients, usually those infected with the human immunodeficiency virus (HIV) or organ transplant recipients on immunosuppressive therapy. PCNSL in immunocompromised patients is typically large cell with immunoblastic and more aggressive features. These patients are usually severely immunocompromised, with CD4 counts of less than 50/μL. The Epstein-Barr virus (EBV) frequently plays an important role in the pathogenesis of HIV-related PCNSL. Immunocompetent patients with PCNSL are older (median 60 years) compared to patients with HIV-related PCNSL (median 31 years). PCNSL usually presents as a mass lesion, with neuropsychiatric symptoms, symptoms of increased intracranial pressure, lateralizing signs, or seizures. On contrast-enhanced MRI, PCNSL usually appears as a densely enhancing tumor (Fig. 118-4). Immunocompetent patients have solitary lesions more often than immunosuppressed patients. Frequently there is involvement of the basal ganglia, corpus callosum, or periventricular region. 
Although the imaging features are often characteristic, PCNSL can sometimes be difficult to differentiate from high-grade gliomas, infections, or demyelination. Stereotactic biopsy is necessary to obtain a histologic diagnosis. Whenever possible, glucocorticoids should be withheld until after the biopsy has been obtained because they have a cytolytic effect on lymphoma cells and may lead to nondiagnostic tissue. In addition, patients should be tested for HIV and the extent of disease should be assessed by performing PET or CT of the body, MRI of the spine, CSF analysis, and slit-lamp examination of the eye. Bone marrow biopsy and testicular ultrasound are occasionally performed. PCNSL is more sensitive to glucocorticoids, chemotherapy, and RT than other primary brain tumors. Durable complete responses and long-term survival are possible with these treatments. High-dose methotrexate, a folate antagonist that interrupts DNA synthesis, produces response rates ranging from 35–80% and median survival of up to 50 months. The combination of methotrexate with other chemotherapeutic agents such as cytarabine increases the response rate to 70–100%. The addition of whole-brain RT to methotrexate-based chemotherapy prolongs progression-free survival but not overall survival. Furthermore, RT is associated with delayed neurotoxicity, especially in patients over the age of 60 years. As a result, full-dose RT is frequently omitted, but there may be a role for reduced-dose RT. The anti-CD20 monoclonal antibody rituximab has activity in PCNSL and is often incorporated into the chemotherapy regimen. For some patients, high-dose chemotherapy with autologous stem cell rescue may offer the best chance of preventing relapse.
FIGURE 118-4 Postgadolinium T1 MRI demonstrating a large bifrontal primary central nervous system lymphoma (PCNSL). The periventricular location and diffuse enhancement pattern are characteristic of lymphoma. 
At least 50% of patients will eventually develop recurrent disease. Treatment options include RT for patients who have not had prior irradiation, re-treatment with methotrexate, as well as other agents such as temozolomide, rituximab, procarbazine, topotecan, and pemetrexed. High-dose chemotherapy with autologous stem cell rescue may have a role in selected patients with relapsed disease. PCNSL in immunocompromised patients often produces multiple ring-enhancing lesions that can be difficult to differentiate from metastases and infections such as toxoplasmosis. The diagnosis is usually established by examination of the CSF for cytology and EBV DNA, toxoplasmosis serologic testing, brain PET imaging for hypermetabolism of the lesions consistent with tumor instead of infection, and, if necessary, brain biopsy. Since the advent of highly active antiretroviral drugs, the incidence of HIV-related PCNSL has declined. These patients may be treated with whole-brain RT, high-dose methotrexate, and initiation of highly active antiretroviral therapy. In organ transplant recipients, reduction of immunosuppression may improve outcome. Medulloblastomas are the most common malignant brain tumor of childhood, accounting for approximately 20% of all primary CNS tumors among children. They arise from granule cell progenitors or from multipotent progenitors from the ventricular zone. Approximately 5% of children have inherited disorders with germline mutations of genes that predispose to the development of medulloblastoma. Gorlin syndrome, the most common of these inherited disorders, is due to mutations in the patched-1 (PTCH-1) gene, a key component in the sonic hedgehog pathway. Turcot syndrome, caused by mutations in the adenomatous polyposis coli (APC) gene and associated with familial adenomatous polyposis, has also been linked to an increased incidence of medulloblastoma. 
Histologically, medulloblastomas are highly cellular tumors with abundant dark staining, round nuclei, and rosette formation (Homer-Wright rosettes). They present with headache, ataxia, and signs of brainstem involvement. On MRI they appear as densely enhancing tumors in the posterior fossa, sometimes associated with hydrocephalus. Seeding of the CSF is common. Treatment involves maximal surgical resection, craniospinal irradiation, and chemotherapy with agents such as cisplatin, lomustine, cyclophosphamide, and vincristine. Approximately 70% of patients have long-term survival but usually at the cost of significant neurocognitive impairment. A major goal of current research is to improve survival while minimizing long-term complications. A large number of tumors can arise in the region of the pineal gland. These typically present with headache, visual symptoms, and hydrocephalus. Patients may have Parinaud syndrome characterized by impaired upgaze and accommodation. Some pineal tumors such as pineocytomas and benign teratomas can be treated simply by surgical resection. Germinomas respond to irradiation, whereas pineoblastomas and malignant germ cell tumors require craniospinal radiation and chemotherapy. Meningiomas are diagnosed with increasing frequency as more people undergo neuroimaging for various indications. They are now the most common primary brain tumor, accounting for approximately 35% of the total. Their incidence increases with age. They tend to be more common in women and in patients with neurofibromatosis type 2. They also occur more commonly in patients with a past history of cranial irradiation. Meningiomas arise from the dura mater and are composed of neoplastic meningothelial (arachnoidal cap) cells. They are most commonly located over the cerebral convexities, especially adjacent to the sagittal sinus, but can also occur in the skull base and along the dorsum of the spinal cord. 
Meningiomas are classified by the WHO into three histologic grades of increasing aggressiveness: grade I (benign), grade II (atypical), and grade III (malignant). Many meningiomas are found incidentally following neuroimaging for unrelated reasons. They can also present with headaches, seizures, or focal neurologic deficits. On imaging studies they have a characteristic appearance usually consisting of a partially calcified, densely enhancing extraaxial tumor arising from the dura (Fig. 118-5). Occasionally they may have a dural tail, consisting of thickened, enhanced dura extending like a tail from the mass. The main differential diagnosis of meningioma is a dural metastasis. If the meningioma is small and asymptomatic, no intervention is necessary and the lesion can be observed with serial MRI studies. Larger, symptomatic lesions should be resected. If complete resection is achieved, the patient is cured. Incompletely resected tumors tend to recur, although the rate of recurrence can be very slow with grade I tumors. Tumors that cannot be resected, or can only be partially removed, may benefit from treatment with external-beam RT or stereotactic radiosurgery (SRS). These treatments may also be helpful in patients whose tumor has recurred after surgery. Hormonal therapy and chemotherapy are currently unproven. Rarer tumors that resemble meningiomas include hemangiopericytomas and solitary fibrous tumors. These are treated with surgery and RT but have a higher propensity to recur locally or metastasize systemically. Schwannomas are generally benign tumors arising from the Schwann cells of cranial and spinal nerve roots. The most common schwannomas, termed vestibular schwannomas or acoustic neuromas, arise from the vestibular portion of the eighth cranial nerve and account for approximately 9% of primary brain tumors. Patients with neurofibromatosis type 2 have a high incidence of vestibular schwannomas that are frequently bilateral. 
Schwannomas arising from other cranial nerves, such as the trigeminal nerve (cranial nerve V), occur with much lower frequency. Neurofibromatosis type 1 is associated with an increased incidence of schwannomas of the spinal nerve roots.
FIGURE 118-5 Postgadolinium T1 MRI demonstrating multiple meningiomas along the falx and left parietal cortex.
Vestibular schwannomas may be found incidentally on neuroimaging or present with progressive unilateral hearing loss, dizziness, tinnitus, or, less commonly, symptoms resulting from compression of the brainstem and cerebellum. On MRI they appear as densely enhancing lesions, enlarging the internal auditory canal and often extending into the cerebellopontine angle (Fig. 118-6). The differential diagnosis includes meningioma. Very small, asymptomatic lesions can be observed with serial MRIs. Larger lesions should be treated with surgery or SRS. The optimal treatment will depend on the size of the tumor, symptoms, and the patient’s preference. In patients with small vestibular schwannomas and relatively intact hearing, early surgical intervention increases the chance of preserving hearing.
FIGURE 118-6 Postgadolinium MRI of a right vestibular schwannoma. The tumor can be seen to involve the internal auditory canal.
PITUITARY TUMORS (CHAP. 401e) These account for approximately 9% of primary brain tumors. They can be divided into functioning and nonfunctioning tumors. Functioning tumors are usually microadenomas (<1 cm in diameter) that secrete hormones and produce specific endocrine syndromes (e.g., acromegaly for growth hormone–secreting tumors, Cushing syndrome for adrenocorticotropic hormone [ACTH]-secreting tumors, and galactorrhea, amenorrhea, and infertility for prolactin-secreting tumors). Nonfunctioning pituitary tumors tend to be macroadenomas (>1 cm) that produce symptoms by mass effect, giving rise to headaches, visual impairment (such as bitemporal hemianopia), and hypopituitarism. 
Prolactin-secreting tumors respond well to dopamine agonists such as bromocriptine and cabergoline. Other pituitary tumors usually require treatment with surgery and sometimes RT or radiosurgery and hormonal therapy. Craniopharyngiomas are rare, usually suprasellar, partially calcified, solid, or mixed solid-cystic benign tumors that arise from remnants of Rathke’s pouch. They have a bimodal distribution, occurring predominantly in children but also between the ages of 55 and 65 years. They present with headaches, visual impairment, and impaired growth in children and hypopituitarism in adults. Treatment involves surgery, RT, or a combination of the two. Dysembryoplastic Neuroepithelial Tumors (DNTs) These are benign, supratentorial tumors, usually in the temporal lobe. They typically occur in children and young adults with a long-standing history of seizures. Surgical resection is curative. Epidermoid Cysts These consist of squamous epithelium surrounding a keratin-filled cyst. They are usually found in the cerebellopontine angle and the intrasellar and suprasellar regions. They may present with headaches, cranial nerve abnormalities, seizures, or hydrocephalus. Imaging studies demonstrate extraaxial lesions with characteristics that are similar to CSF but have restricted diffusion. Treatment involves surgical resection. Dermoid Cysts Like epidermoid cysts, dermoid cysts arise from epithelial cells that are retained during closure of the neural tube. They contain both epidermal and dermal structures such as hair follicles, sweat glands, and sebaceous glands. Unlike epidermoid cysts, these tumors usually have a midline location. They occur most frequently in the posterior fossa, especially the vermis, fourth ventricle, and suprasellar cistern. Radiographically, dermoid cysts resemble lipomas, demonstrating T1 hyperintensity and variable signal on T2. Symptomatic dermoid cysts can be treated with surgery. 
Colloid Cysts These usually arise in the anterior third ventricle and may present with headaches, hydrocephalus, and, very rarely, sudden death. Surgical resection is curative, or a third ventriculostomy may relieve the obstructive hydrocephalus and be sufficient therapy. A number of genetic disorders are characterized by cutaneous lesions and an increased risk of brain tumors. Most of these disorders have an autosomal dominant inheritance with variable penetrance. NF1 is an autosomal dominant disorder with an incidence of approximately 1 in 2600–3000. Approximately one-half the cases are familial; the remainder are caused by new mutations arising in patients with unaffected parents. The NF1 gene on chromosome 17q11.2 encodes a protein, neurofibromin, a guanosine triphosphatase (GTPase)-activating protein (GAP) that modulates signaling through the ras pathway. Mutations of NF1 result in a large number of nervous system tumors, including neurofibromas, plexiform neurofibromas, optic nerve gliomas, astrocytomas, and meningiomas. In addition to neurofibromas, which appear as multiple, soft, rubbery cutaneous tumors, other cutaneous manifestations of NF1 include café-au-lait spots and axillary freckling. NF1 is also associated with hamartomas of the iris termed Lisch nodules, pheochromocytomas, pseudoarthrosis of the tibia, scoliosis, epilepsy, and mental retardation. NF2 is less common than NF1, with an incidence of 1 in 25,000–40,000. It is an autosomal dominant disorder with full penetrance. As with NF1, approximately one-half the cases arise from new mutations. The NF2 gene on 22q encodes a cytoskeletal protein, merlin (moesin, ezrin, radixin-like protein) that functions as a tumor suppressor. NF2 is characterized by bilateral vestibular schwannomas in over 90% of patients, multiple meningiomas, and spinal ependymomas and astrocytomas. 
Treatment of bilateral vestibular schwannomas can be challenging because the goal is to preserve hearing for as long as possible. These patients may also have diffuse schwannomatosis that may affect the cranial, spinal, or peripheral nerves; posterior subcapsular lens opacities; and retinal hamartomas. Tuberous sclerosis is an autosomal dominant disorder with an incidence of approximately 1 in 5000–10,000 live births. It is caused by mutations in either the TSC1 gene, which maps to chromosome 9q34 and encodes a protein termed hamartin, or the TSC2 gene, which maps to chromosome 16p13.3 and encodes the protein tuberin. Hamartin forms a complex with tuberin, which inhibits cellular signaling through the mTOR pathway and acts as a negative regulator of the cell cycle. Patients with tuberous sclerosis may have seizures, mental retardation, adenoma sebaceum (facial angiofibromas), shagreen patch, hypomelanotic macules, periungual fibromas, renal angiomyolipomas, and cardiac rhabdomyomas. These patients have an increased incidence of subependymal nodules, cortical tubers, and subependymal giant-cell astrocytomas (SEGA). Patients frequently require anticonvulsants for seizures. SEGAs do not always require therapeutic intervention, but the most effective therapy is with the mTOR inhibitors sirolimus or everolimus, which often decrease seizures as well as SEGA size. Brain metastases arise from hematogenous spread and frequently either arise from a lung primary or are associated with pulmonary metastases. Most metastases develop at the gray matter–white matter junction in the watershed distribution of the brain where intravascular tumor cells lodge in terminal arterioles. The distribution of metastases in the brain approximates the proportion of blood flow such that about 85% of all metastases are supratentorial and 15% occur in the posterior fossa. 
The most common sources of brain metastases are lung and breast carcinomas; melanoma has the greatest propensity to metastasize to the brain, being found in 80% of patients at autopsy (Table 118-3). Other tumor types such as ovarian and esophageal carcinoma rarely metastasize to the brain. Prostate and breast cancer also have a propensity to metastasize to the dura and can mimic meningioma. Leptomeningeal metastases are common from hematologic malignancies and also breast and lung cancers. Spinal cord compression primarily arises in patients with prostate and breast cancer, tumors with a strong propensity to metastasize to the axial skeleton. Brain metastases are best visualized on MRI, where they usually appear as well-circumscribed lesions (Fig. 118-7). The amount of perilesional edema can be highly variable, with large lesions causing minimal edema and sometimes very small lesions causing extensive edema. Enhancement may be in a ring pattern or diffuse. Occasionally, intracranial metastases will hemorrhage; although melanoma, thyroid, and kidney cancer have the greatest propensity to hemorrhage, the most common cause of a hemorrhagic metastasis is lung cancer because it accounts for the majority of brain metastases. The radiographic appearance of brain metastasis is nonspecific, and similar-appearing lesions can occur with infection including brain abscesses and also with demyelinating lesions, sarcoidosis, radiation necrosis in a previously treated patient, or a primary brain tumor that may be a second malignancy in a patient with systemic cancer. However, biopsy is rarely necessary for diagnosis in most patients because imaging alone in the appropriate clinical situation usually suffices. This is straightforward for the majority of patients with brain metastases because they have a known systemic cancer.
Table 118-3 abbreviations: ESCC, epidural spinal cord compression; GIT, gastrointestinal tract; LM, leptomeningeal metastases. 
However, in approximately 10% of patients, a systemic cancer may present with a brain metastasis, and if there is not an easily accessible systemic site to biopsy, then a brain lesion must be removed for diagnostic purposes.

The number and location of brain metastases often determine the therapeutic options. The patient's overall condition and the current or potential control of the systemic disease are also major determinants. Brain metastases are single in approximately one-half of patients and multiple in the other half. The standard treatment for brain metastases has been whole-brain radiotherapy (WBRT), usually administered to a total dose of 3000 cGy in 10 fractions. This affords rapid palliation, and approximately 80% of patients improve with glucocorticoids and RT. However, it is not curative, and median survival is only 4–6 months. More recently, stereotactic radiosurgery (SRS), delivered through a variety of techniques including the gamma knife, linear accelerator, proton beam, and CyberKnife, has been used to deliver highly focused doses of RT, usually in a single fraction. SRS can effectively sterilize visible lesions and afford local disease control in 80–90% of patients. In addition, some patients have clearly been cured of their brain metastases using SRS, whereas cure is distinctly rare with WBRT. However, SRS can be used only for lesions 3 cm or less in diameter and should be confined to patients with only one to three metastases. The addition of WBRT to SRS improves disease control in the nervous system but does not prolong survival. Randomized controlled trials have demonstrated that surgical extirpation of a single brain metastasis followed by WBRT is superior to WBRT alone. Removal of two lesions or of a single symptomatic mass, particularly one compressing the ventricular system, can also be useful. This is particularly valuable in patients who have highly radioresistant lesions such as renal carcinoma.
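As a side calculation, the fractionation figures quoted above (3000 cGy in 10 fractions) can be related to biologically effective dose (BED) using the standard linear-quadratic formula, BED = n·d·(1 + d/(α/β)). The sketch below is illustrative only: the α/β ratios are commonly cited radiobiology assumptions, not values given in this chapter.

```python
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """Biologically effective dose under the linear-quadratic model."""
    d = dose_per_fraction_gy
    return n_fractions * d * (1 + d / alpha_beta_gy)

# WBRT schedule from the text: 3000 cGy in 10 fractions -> 3 Gy per fraction.
total_gy, n = 30.0, 10
d = total_gy / n  # 3.0 Gy/fraction

# alpha/beta = 10 Gy is a conventional assumption for tumor tissue,
# and ~3 Gy for late-responding normal brain (both assumptions here).
print(f"{d:.1f} Gy/fraction; tumor BED = {bed(n, d, 10.0):.0f} Gy")   # -> 39 Gy
print(f"late normal-tissue BED = {bed(n, d, 3.0):.0f} Gy")            # -> 60 Gy
```

The spread between the two BED values is one way of seeing why fraction size, not just total dose, drives late normal-tissue toxicity.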
Surgical resection can afford rapid symptomatic improvement and prolonged survival. WBRT administered after complete resection of a brain metastasis improves disease control but does not prolong survival. Chemotherapy is rarely useful for brain metastases. Metastases from certain tumor types that are highly chemosensitive, such as germ cell tumors or small-cell lung cancer, may respond to chemotherapeutic regimens chosen according to the underlying malignancy. Increasingly, there are data demonstrating responsiveness of brain metastases to chemotherapy, including small molecule–targeted therapy, when the lesion possesses the target. This has been best illustrated in patients with lung cancer harboring EGFR mutations that sensitize them to EGFR inhibitors. Antiangiogenic agents such as bevacizumab may also prove efficacious in the treatment of CNS metastases.

Leptomeningeal metastases are also known as carcinomatous meningitis, meningeal carcinomatosis, or, in the case of specific tumors, leukemic or lymphomatous meningitis. Among the hematologic malignancies, acute leukemia is the most common to metastasize to the subarachnoid space; among lymphomas, the aggressive diffuse lymphomas also frequently metastasize there. Among solid tumors, breast and lung carcinomas and melanoma most frequently spread in this fashion. Tumor cells reach the subarachnoid space via the arterial circulation or, occasionally, through retrograde flow in venous systems that drain metastases along the bony spine or cranium. In addition, leptomeningeal metastases may develop as a direct consequence of prior brain metastases and develop in almost 40% of patients who have a metastasis resected from the cerebellum. Leptomeningeal metastases are characterized clinically by multilevel symptoms and signs along the neuraxis.
Combinations of lumbar and cervical radiculopathies, cranial neuropathies, seizures, confusion, and encephalopathy from hydrocephalus or raised intracranial pressure can be present. Focal deficits such as hemiparesis or aphasia are rarely due to leptomeningeal metastases unless there is direct brain infiltration and are more often associated with coexisting brain lesions. New-onset limb pain in patients with breast cancer, lung cancer, or melanoma should prompt consideration of leptomeningeal spread.

Leptomeningeal metastases are particularly challenging to diagnose because identification of tumor cells in the subarachnoid compartment may be elusive. MRI can be definitive when there are clear tumor nodules adherent to the cauda equina or spinal cord, enhancing cranial nerves, or subarachnoid enhancement on brain imaging (Fig. 118-8). Imaging is diagnostic in approximately 75% of patients and is more often positive in patients with solid tumors. Demonstration of tumor cells in the CSF is definitive and often considered the gold standard. However, CSF cytologic examination is positive in only 50% of patients on the first lumbar puncture and still misses 10% of patients after three CSF samples. CSF cytologic examination is most useful in hematologic malignancies. Accompanying CSF abnormalities include an elevated protein concentration and an elevated white cell count. Hypoglycorrhachia is noted in less than 25% of patients but is useful when present. Identification of tumor markers, or molecular confirmation of clonal proliferation with techniques such as flow cytometry within the CSF, can also be definitive. Tumor markers are usually specific to solid tumors, and chromosomal or molecular markers are most useful in patients with hematologic malignancies. New technologies, such as rare cell capture, may enhance identification of tumor cells in the CSF. The treatment of leptomeningeal metastasis is palliative because there is no curative therapy.
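The cytology yields quoted above (roughly 50% on the first lumbar puncture, roughly 90% after three) are approximately what one would expect from repeated draws at a fixed per-tap sensitivity. A minimal sketch of that consistency check, under the simplifying (and clinically inexact) assumption that successive taps are independent:

```python
def cumulative_sensitivity(p, n_taps):
    """Probability of at least one positive cytology in n_taps punctures,
    assuming each tap is an independent draw with sensitivity p
    (a simplifying assumption; repeated taps in one patient are not
    truly independent)."""
    return 1 - (1 - p) ** n_taps

p = 0.5  # ~50% yield on the first tap, per the text
for n in (1, 2, 3):
    print(n, round(cumulative_sensitivity(p, n), 3))
# Three taps give 0.875, close to the ~90% implied by the text's
# statement that 10% of patients are still missed after three samples.
```

The residual 10–12% miss rate is one reason adjuncts such as flow cytometry and rare cell capture, mentioned above, are of interest.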
RT to the symptomatically involved areas, such as the skull base for cranial neuropathy, can relieve pain and sometimes improve function. Whole-neuraxis RT has extensive toxicity, with myelosuppression and gastrointestinal irritation, as well as limited effectiveness. Systemic chemotherapy with agents that can penetrate the blood-CSF barrier may be helpful. Alternatively, intrathecal chemotherapy can be effective, particularly in hematologic malignancies. This is optimally delivered through an intraventricular cannula (Ommaya reservoir) rather than by lumbar puncture. Few drugs can be delivered safely into the subarachnoid space, and they have a limited spectrum of antitumor activity, perhaps accounting for the relatively poor response to this approach. In addition, impaired CSF flow dynamics can compromise intrathecal drug delivery. Surgery has a limited role in the treatment of leptomeningeal metastasis, but placement of a ventriculoperitoneal shunt can relieve raised intracranial pressure; however, it compromises delivery of chemotherapy into the CSF.

Primary and Metastatic Tumors of the Nervous System

FIGURE 118-8 Postgadolinium MRI images of extensive leptomeningeal metastases from breast cancer. Nodules along the dorsal surface of the spinal cord (A) and cauda equina (B) are seen.

Epidural metastasis occurs in 3–5% of patients with a systemic malignancy and causes neurologic compromise by compressing the spinal cord or cauda equina. The most common cancers that metastasize to the epidural space are those malignancies that spread to bone, such as breast and prostate cancer. Lymphoma can cause bone involvement and compression, but it can also invade the intervertebral foramina and cause spinal cord compression without bone destruction. The thoracic spine is affected most commonly, followed by the lumbar and then the cervical spine. Back pain is the presenting symptom of epidural metastasis in virtually all patients; the pain may precede neurologic findings by weeks or months.
The pain is usually exacerbated by lying down; by contrast, arthritic pain is often relieved by recumbency. Leg weakness is seen in about 50% of patients, as is sensory dysfunction. Sphincter problems are present in about 25% of patients at diagnosis.

FIGURE 118-9 Postgadolinium T1 MRI showing circumferential epidural tumor around the thoracic spinal cord from esophageal cancer.

Diagnosis is established by imaging, with MRI of the complete spine being the best test (Fig. 118-9). Contrast is not needed to identify spinal or epidural lesions. Any patient with cancer who has severe back pain should undergo an MRI. Plain films, bone scans, or even CT scans may show bone metastases, but only MRI can reliably delineate epidural tumor. For patients unable to have an MRI, CT myelography should be performed to outline the epidural space. The differential diagnosis of epidural tumor includes epidural abscess, acute or chronic hematomas, and, rarely, extramedullary hematopoiesis.

Epidural metastasis requires immediate treatment. A randomized controlled trial demonstrated the superiority of surgical resection followed by RT compared to RT alone. However, patients must be able to tolerate surgery, and the surgical procedure of choice is complete removal of the mass, which is typically anterior to the spinal canal, necessitating an extensive approach and resection. Otherwise, RT is the mainstay of treatment and can be used for patients with radiosensitive tumors, such as lymphoma, or for those unable to undergo surgery. Chemotherapy is rarely used for epidural metastasis unless the patient has minimal to no neurologic deficit and a highly chemosensitive tumor such as lymphoma or germinoma. Patients generally fare well if treated before there is severe neurologic deficit. Recovery after paraparesis is better after surgery than with RT alone, but survival is often short due to widespread metastatic tumor.

RT can cause a variety of toxicities in the CNS.
These are usually described based on their relationship in time to the administration of RT: acute (occurring within days of RT), early delayed (months), or late delayed (years). In general, the acute and early delayed syndromes resolve and do not result in persistent deficits, whereas the late delayed toxicities are usually permanent and sometimes progressive.

Acute Toxicity Acute cerebral toxicity usually occurs during RT to the brain. RT can cause a transient disruption of the blood-brain barrier, resulting in increased edema and elevated intracranial pressure. This usually manifests as headache, lethargy, nausea, and vomiting and can be both prevented and treated with the administration of glucocorticoids. There is no acute RT toxicity that affects the spinal cord.

Early Delayed Toxicity Early delayed toxicity is usually apparent weeks to months after completion of cranial irradiation and is likely due to focal demyelination. Clinically, it may be asymptomatic or take the form of worsening or reappearance of a preexisting neurologic deficit. At times, a contrast-enhancing lesion can be seen on MRI/CT that can mimic the tumor for which the patient received the RT. In patients with a malignant glioma, this has been described as "pseudoprogression" because it mimics tumor recurrence on MRI but actually represents inflammation and necrotic debris engendered by effective therapy. It is seen with increased frequency when chemotherapy, particularly temozolomide, is given concurrently with RT. Pseudoprogression can resolve on its own or, if very symptomatic, may require resection. A rare form of early delayed toxicity is the somnolence syndrome, which occurs primarily in children and is characterized by marked sleepiness. In the spinal cord, early delayed RT toxicity manifests as Lhermitte's symptom, with paresthesias of the limbs or along the spine when the patient flexes the neck.
Although frightening, it is benign, resolves on its own, and does not portend more serious problems.

Late Delayed Toxicity Late delayed toxicities are the most serious because they are often irreversible and cause severe neurologic deficits. In the brain, late toxicities can take several forms, the most common of which are radiation necrosis and leukoencephalopathy. Radiation necrosis is a focal mass of necrotic tissue that is contrast enhancing on CT/MRI and may be associated with significant edema. It may appear identical to pseudoprogression but is seen months to years after RT and is always symptomatic. Clinical symptoms and signs include seizures and lateralizing findings referable to the location of the necrotic mass. The necrosis is caused by the effect of RT on the cerebral vasculature, with resultant fibrinoid necrosis and occlusion of blood vessels. It can mimic tumor radiographically, but unlike tumor, it is typically hypometabolic on PET and has reduced perfusion on perfusion MR sequences. It may require resection for diagnosis and treatment unless it can be managed with glucocorticoids. There are rare reports of improvement with hyperbaric oxygen or anticoagulation, but the usefulness of these approaches is questionable.

Leukoencephalopathy is seen most commonly after WBRT as opposed to focal RT. On T2 or FLAIR MR sequences, there is diffuse increased signal throughout the hemispheric white matter, often bilaterally and symmetrically, with a periventricular predominance that may be associated with atrophy and ventricular enlargement. Clinically, patients develop cognitive impairment, gait disorder, and later urinary incontinence, all of which can progress over time. These symptoms mimic those of normal pressure hydrocephalus, and placement of a ventriculoperitoneal shunt can improve function in some patients but does not reverse the deficits completely.
Increased age is a risk factor for leukoencephalopathy but not for radiation necrosis; necrosis appears to depend on an as yet unidentified predisposition. Other late neurologic toxicities include endocrine dysfunction if the pituitary or hypothalamus was included in the RT port. An RT-induced neoplasm can occur many years after therapeutic RT for either a prior CNS tumor or a head and neck cancer; accurate diagnosis requires surgical resection or biopsy. In addition, RT causes accelerated atherosclerosis, which can cause stroke either from intracranial vascular disease or from carotid plaque after neck irradiation.

The peripheral nervous system is relatively resistant to RT toxicities. Peripheral nerves are rarely affected by RT, but the plexus is more vulnerable. Plexopathy develops more commonly in the brachial distribution than in the lumbosacral distribution. It must be differentiated from tumor progression in the plexus, which is usually accomplished with CT/MR imaging of the area or a PET scan demonstrating tumor infiltrating the region. Clinically, tumor progression is usually painful, whereas RT-induced plexopathy is painless. Radiation plexopathy is also more commonly associated with lymphedema of the affected limb. Sensory loss and weakness are seen in both.

Neurotoxicity is second to myelosuppression as the dose-limiting toxicity of chemotherapeutic agents (Table 118-4). Chemotherapy causes peripheral neuropathy from a number of commonly used agents, and the type of neuropathy can differ, depending on the drug.

TABLE 118-4 (partial) Neurotoxicity of chemotherapeutic agents
Acute encephalopathy (delirium): Methotrexate (high-dose IV, IT), Cisplatin, Vincristine, Asparaginase, Procarbazine, 5-Fluorouracil (± levamisole), Cytarabine (high-dose), Nitrosoureas (high-dose or …)
Seizures: Methotrexate, Etoposide (high-dose), Cisplatin, Vincristine, Asparaginase, Nitrogen mustard, Carmustine, Dacarbazine (intraarterial or …)
Abbreviations: IT, intrathecal; IV, intravenous; PRES, posterior reversible encephalopathy syndrome.
Vincristine causes paresthesias but little sensory loss and is associated with motor dysfunction, autonomic impairment (frequently ileus), and, rarely, cranial nerve compromise. Cisplatin causes large-fiber sensory loss resulting in sensory ataxia but little cutaneous sensory loss and no weakness. The taxanes also cause a predominantly sensory neuropathy. Agents such as bortezomib and thalidomide also cause neuropathy. Encephalopathy and seizures are common toxicities of chemotherapeutic drugs. Ifosfamide can cause a severe encephalopathy, which is reversible with discontinuation of the drug and the use of methylene blue for severely affected patients. Fludarabine also causes a severe global encephalopathy that may be permanent. Bevacizumab and other anti-VEGF agents can cause posterior reversible encephalopathy syndrome. Cisplatin can cause hearing loss and, less frequently, vestibular dysfunction. Immunotherapy with anti-CTLA-4 monoclonal antibodies, such as ipilimumab, can cause an autoimmune hypophysitis.

Chapter 119e Soft Tissue and Bone Sarcomas and Bone Metastases
Shreyaskumar R. Patel, Robert S. Benjamin

Sarcomas are rare (<1% of all malignancies) mesenchymal neoplasms that arise in bone and soft tissues. These tumors are usually of mesodermal origin, although a few are derived from neuroectoderm, and they are biologically distinct from the more common epithelial malignancies. Sarcomas affect all age groups; 15% are found in children <15 years of age, and 40% occur after age 55 years. Sarcomas are among the most common solid tumors of childhood and are the fifth most common cause of cancer deaths in children. Sarcomas may be divided into two groups: those derived from bone and those derived from soft tissues. Soft tissues include muscles, tendons, fat, fibrous tissue, synovial tissue, vessels, and nerves.
Approximately 60% of soft tissue sarcomas arise in the extremities, with the lower extremities involved three times as often as the upper extremities. Thirty percent arise in the trunk, with the retroperitoneum accounting for 40% of all trunk lesions. The remaining 10% arise in the head and neck. Approximately 11,410 new cases of soft tissue sarcoma occurred in the United States in 2013. The annual age-adjusted incidence is 3 per 100,000 population, but the incidence varies with age. Soft tissue sarcomas constitute 0.7% of all cancers in the general population and 6.5% of all cancers in children. Malignant transformation of a benign soft tissue tumor is extremely rare, with the exception that malignant peripheral nerve sheath tumors (neurofibrosarcoma, malignant schwannoma) can arise from neurofibromas in patients with neurofibromatosis. Several etiologic factors have been implicated in soft tissue sarcomas.

Environmental Factors Trauma or previous injury is rarely involved, but sarcomas can arise in scar tissue resulting from a prior operation, burn, fracture, or foreign body implantation. Chemical carcinogens such as polycyclic hydrocarbons, asbestos, and dioxin may be involved in the pathogenesis.

Iatrogenic Factors Sarcomas in bone or soft tissues occur in patients who are treated with radiation therapy. The tumor nearly always arises in the irra