6 Diseases Your Lack Of Sleep Could Be Causing.


You may not be too concerned about the fact you aren’t sleeping well, but below are six diseases that your lack of sleep could be causing. For the below reasons, it may be time to consider a better sleep routine.


Lots of things can interfere with your sleep, like a late night at work or various things to get done around the house before you head to bed. However, continually depriving yourself of the recommended time asleep (six to eight hours) can mean you become very sick down the line.

Endangering Your Heart and Bones Are Serious Matters

Lack of sleep can lead to serious ongoing conditions that cannot be undone simply by going to bed earlier. Instead of subjecting yourself to conditions that can lead to death, start sleeping more now. Consider the following as your incentive:

stroke – higher blood pressure and elevated levels of the chemicals in the bloodstream that contribute to stroke are both linked to a regular lack of sleep.

diabetes – lack of sleep leads to poor eating: you tend to eat more at meals and lean toward junk food more than you would on a proper amount of sleep. As a result, the risk of developing diabetes increases.

osteoporosis – this and other bone conditions can result from a continued lack of sleep, because bone mineral density appears to decrease when adults consistently get less than the recommended six to eight hours of sleep per night.

Other Conditions Are Inconvenient and Downright Scary

The following conditions are serious inconveniences when you are trying to get work done or simply enjoy life during the day. Others, like breast cancer, can turn fatal just when you thought you had beaten the disease and had your whole life ahead of you:

memory loss – when you are not sleeping enough, your brain is not working at its full ability. Over time, you can develop serious memory loss that may become permanent if you don’t start getting more sleep, and quickly.

breast cancer – people who have had breast cancer are actually more prone to a recurrence of the disease when they do not get sufficient sleep. Avoid having to face cancer again, when it was bad enough the first time, by getting more sleep.

incontinence – waking in the middle of the night usually leads to a trip to the bathroom, if only for a reason to move around. Over time, though, that trip becomes a genuine need, which in turn keeps you from getting enough sleep. The cycle feeds itself, and you are left dealing with bladder issues during waking hours as well.

As you may notice, skipping a couple of hours of sleep has far more serious consequences than simply being tired the next day. It is wise, then, to make sleeping the proper amount a priority and avoid more serious issues in the future.

Scientists develop ‘psychic robot’ that can predict our actions.


It knew I was going to write this story.

 

The robots are coming. They’re taking our jobs, driving our cars, and soon they might even wage war on us. Well, at least they can’t read our minds yet, right? Oops, nope, that one’s out too.

A so-called ‘psychic robot’ has been developed by bioengineers in the US, using a mathematical algorithm that can supposedly predict what we’re about to do. The software doesn’t actually read human minds per se, but it can reportedly calculate our intentions based on our previous activity – even if a particular action is interrupted.

For example, you’re reaching for something on your desk, but your hand collides with an unexpected obstacle that prevents you from grabbing it. Another person watching would be able to guess your intended motion and trajectory, but could a robot?

To test the theory, researchers at the University of Illinois at Chicago experimented with this very scenario, tracking and analysing the movement of people’s hands as they reached for an object on a virtual desk – but had their movement interrupted by an opposing force.

They created an advanced algorithm that, much like a person, could calculate where the hands had intended to go – in essence, a kind of predictive software that can foresee physical actions and intentions based on what comes before.
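
The study’s actual mathematics aren’t reproduced in the article, but the core trick of separating what the hand was trying to do from its reaction to a disturbance can be sketched in a few lines. The Python below is only an illustration under simple assumptions (a constant-velocity intent model and a fixed reaction delay), not the researchers’ algorithm.

```python
import numpy as np

def estimate_intended_path(positions, times, disturbance_time, reaction_delay=0.15):
    """Guess where the hand meant to go from samples recorded before any
    voluntary correction to the disturbance could have occurred.

    positions        -- (N, 2) observed hand positions
    times            -- (N,) timestamps in seconds
    disturbance_time -- moment the unexpected force was applied
    reaction_delay   -- assumed minimum time before a deliberate correction
    """
    # Motion recorded before the person could react still reflects intent.
    pre_reaction = times < (disturbance_time + reaction_delay)

    # Fit a simple constant-velocity model to the pre-reaction motion ...
    fit_x = np.polyfit(times[pre_reaction], positions[pre_reaction, 0], 1)
    fit_y = np.polyfit(times[pre_reaction], positions[pre_reaction, 1], 1)

    # ... and extrapolate it over the whole movement.
    return np.column_stack([np.polyval(fit_x, times), np.polyval(fit_y, times)])

# Toy example: a straight reach toward (0.3, 0.2) that is knocked sideways at t = 0.5 s.
t = np.linspace(0.0, 1.0, 50)
intended = np.column_stack([0.3 * t, 0.2 * t])
push = np.column_stack([np.where(t > 0.5, 0.05 * (t - 0.5), 0.0), np.zeros_like(t)])
observed = intended + push

print(estimate_intended_path(observed, t, disturbance_time=0.5)[-1])  # ~ [0.3, 0.2]
```

A real controller would, of course, use a richer model of arm dynamics and the measured disturbance itself; this sketch only shows the general idea of inferring intent from motion that precedes any reaction.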

It sounds particularly impressive when you consider the implications for technological applications in the real world. For example, semi-autonomous vehicle controls could help avoid accidents based on observing previous driving actions.

“If we hit a patch of ice and the car starts swerving, we want the car to know where we meant to go,” said Justin Horowitz, first author of the study published in PLOS ONE.

“It needs to correct the car’s course not to where I am now pointed, but [to] where I meant to go. The computer has extra sensors and processes information so much faster than I can react. If the car can tell where I mean to go, it can drive itself there. But it has to know which movements of the wheel represent my intention, and which are responses to an environment that’s already changed.”

There are almost limitless theoretical applications, but another suitable area could be smart prosthetics. For people who experience tremors, the algorithm might be able to intuit your intended movements and reduce physical shaking.

“We call it a psychic robot,” said Horowitz. “If you know how someone is moving and what the disturbance is, you can tell the underlying intent — which means we could use this algorithm to design machines that could correct the course of a swerving car or help a stroke patient with spasticity.”

Timely Prehospital Management of Stroke Victims Crucial for Patient Outcome.


Although prevention and early treatment have markedly reduced the morbidity and mortality of stroke, it remains a significant health and social burden. This past decade saw stroke drop from the third to the fourth leading cause of death in the United States, but it remains a leading cause of adult disability.1


The cost of stroke is crippling, both in direct costs and lost opportunity. The global circumstances of stroke are even more dire—in many countries, it’s the second leading cause of death, and the rates of stroke are projected to double by 2030.2 To make an impact on this growing societal epidemic, the healthcare community must continue to improve our prevention and overall management of stroke.3

Fortunately, professionals involved in stroke care can learn from other healthcare successes. For medical conditions where timely identification, transport and intervention may mean the difference between life and death, integrated systems of care, including prehospital care coordinated with regional hospital services, save lives.4

Outcomes data clearly show the benefits of such systems of care for trauma and ST elevation myocardial infarction (STEMI). More recently, data from similar systems of care for stroke suggest improved outcomes and less morbidity.5,6

Biology of Stroke

The term “stroke” collectively refers to all acute cerebrovascular disease, including ischemic and hemorrhagic types. Both forms share many risk factors, including diabetes, hypertension, smoking and age, and typically produce focal neurologic symptoms that are important to recognize. Prevention strategies and early management principles are similar between the two forms of stroke, but definitive interventions are specific to each type. Regardless of cause, targeted therapies that improve patient outcomes must be delivered within minutes to hours from symptom onset in order to be maximally effective.

Ischemic strokes occur due to an arterial occlusion, which causes brain tissue to infarct from a lack of oxygen and glucose. (See Figure 1 below). Occlusions can occur from either local vessel narrowing, causing thrombotic strokes, or due to clots originating elsewhere like the heart, which embolize to the vessel and cause embolic strokes. Regardless of cause, brain tissue is extremely sensitive to ischemia and begins to suffer irreversible damage within minutes of onset. To minimize permanent disability, blood flow must be restored quickly and overall brain physiology carefully managed, including in the prehospital setting.

Figure 1: Ischemic vs. hemorrhagic stroke

CanStockPhoto/alila

 

In the U.S., hemorrhagic strokes occur much less frequently than ischemic strokes, accounting for about 20% of all acute strokes. However, in other countries such as China, over half of all strokes are hemorrhagic. Intracerebral hemorrhage (ICH) is a form of stroke that causes bleeding into the brain tissue itself.8

In older patients, ICH typically occurs due to amyloid deposition in smaller arteries, producing typically smaller, less disabling strokes. In younger patients, ICH is associated with hypertension, producing vascular damage in small blood vessels deep within the brain. Due to the hemorrhage size and location, it’s more often disabling and lethal.

Subarachnoid hemorrhage (SAH) typically occurs in younger patients due to a structural weakness in an artery producing an aneurysm in the vessel wall, which suddenly ruptures. Both forms of hemorrhagic stroke tend to be more severe than ischemic stroke.

Goals of Treatment

The goals of emergent stroke treatment are twofold: First, prevent or reverse the source of injury (limit hemorrhage growth in ICH or restore blood flow in ischemic stroke), and second, optimize the patient’s physiology as soon as possible to maximize chances for injured, but not infarcted, tissue to recover.

In ischemic stroke, the area of brain tissue that’s irreversibly injured is referred to as the “infarct” or “core.” The area of brain with decreased blood flow that hasn’t experienced irreversible damage is called the “penumbra.”

The penumbra is the target of directed interventions. Opening obstructed blood vessels with either thrombolytics, such as recombinant tissue plasminogen activator (rt-PA), or direct intra-arterial clot removal, may restore blood flow to the penumbra, limiting damage and reversing some if not all neurologic deficits.

The key to these interventions is time. If blood flow isn’t restored within the first several hours from symptom onset, the tissue will infarct, leaving little chance of improving outcomes.

Physiologic management in ischemic stroke parallels many strategies employed for traumatic brain injury and post-cardiac arrest. Maintaining cerebral blood flow, ensuring appropriate oxygenation and maintaining optimal serum glucose levels are key to maximizing brain tissue function and limiting secondary injury.6

In the prehospital setting, physiologic management is largely based on optimizing the ABCs—airway, breathing and circulation. Currently, specific targeted stroke interventions are typically reserved for the hospital setting once the stroke type is established and the patient’s comorbidities considered. Yet, recognizing the importance of time, ongoing studies may demonstrate that similar strategies can and should be deployed in the prehospital setting to maximize patient outcomes.

Stroke Systems of Care

In 1996, the Food and Drug Administration (FDA) approved intravenous rt-PA as the first treatment of acute ischemic stroke. Despite the approval, the healthcare community took a decade to realize that hospitals needed to organize internal and external stroke systems of care to optimally utilize the drug and to maximize the chance for favorable outcomes in all stroke patients, not just those who received rt-PA.9

The development of primary stroke centers (PSCs) was the first and most important step in establishing stroke systems. From the very beginning, prehospital care was a critical stakeholder in the process and an enthusiastic promoter of emergency stroke care. Similar to lessons from the U.S. trauma model, stroke healthcare professionals soon realized that more comprehensive stroke centers (CSCs) were required to diagnose and treat more severe ischemic strokes and all forms of hemorrhagic strokes.10

Services provided by CSCs are highly specialized and require a significant commitment of personnel and resources. (See Table 1 below.) Through various certifying organizations, roughly 1,600 of the 5,000 hospitals in the U.S. are certified as PSCs or CSCs. Recognizing that patients with suspected stroke may not be near a PSC or CSC, acute stroke-ready hospitals (ASRH) were established to provide the minimum service required to diagnose ischemic strokes and consider treatment with rt-PA.11,12

Table 1: Stroke hospital capabilities and requirements

Most ASRHs don’t have onsite neurologic expertise, so telemedicine systems providing real-time video and voice communication and teleradiology allow remote stroke experts to assess the patient and collaborate with onsite physicians to make treatment decisions. This organization makes a difference—certified stroke centers, especially those that treat large numbers of patients, are more likely to administer thrombolytics in a timely fashion to eligible patients.7,13,14 Although ED physicians can safely administer thrombolytics, dedicated stroke teams have better protocol compliance.15

Establishing individual stroke centers is necessary but not sufficient for optimizing stroke care. Again similar to trauma and STEMI care, regional resources must be integrated into a cohesive system that uniquely utilizes all local resources, including hospitals, prehospital care systems, dispatch centers and air medical services.5 (See Figure 2 below.)

Figure 2: Stroke systems of care

Engaging all stakeholders from the onset of stroke system development is essential to overcoming logistical and political barriers. Several states such as Florida, California, New York and South Carolina have used legislation to form task forces to establish stroke systems of care.16,17

Through these stroke systems of care, patient outcomes improve by providing optimal acute treatment and attention to overall stroke management and prevention strategies. Studies show that patients presenting to PSCs have better outcomes in general compared to non-stroke centers, and more severe strokes are better addressed at CSCs.18 As a result, CSCs are increasingly becoming integrated regional centers that provide immediate support to smaller, surrounding hospitals.5

Integrated stroke systems of care are a two-way street: not all patients can go to a CSC or PSC. Therefore, elevating the level of stroke care everywhere is key, and utilizing expertise at CSCs and PSCs can help provide critical feedback to healthcare providers who collaborate on public and healthcare provider education.

Regardless of stroke care capability, all hospitals within a region must have a stroke plan and a protocol for interfacing with other hospitals in the local stroke system.

Prehospital Stroke Management

Prehospital management begins when patients or bystanders recognize stroke signs and symptoms and call 9-1-1, but rapid assessment and transport are of little utility if the patients don’t arrive at the hospital within treatment windows.

The American Heart Association has developed the stroke chain of survival (detection, dispatch, delivery, door, data, decision, drug), where the initial links focus on stroke recognition and EMS engagement. Community education on stroke symptoms and early EMS access are critical.

Most recently, the mnemonic “think FAST”—facial droop, arm weakness, speech slurred and timeliness—is being used to educate the public on strokes. However, although some educational programs have successfully increased awareness of stroke symptoms, the majority of patients still miss the treatment window.19

9-1-1 Activation

Once 9-1-1 is activated, stroke recognition is essential. With the advent of emergency medical dispatch tools and the use of dispatch protocols, patients with stroke-like symptoms are more easily recognized by 9-1-1 operators. However, there’s still variability, with correct identification varying between 30% and 83%.19

Any patient presenting within a 6–8 hour window of symptom onset should still be considered a candidate for acute endovascular intervention and appropriate response configurations utilized. Use of protocols clearly helps determine dispatch prioritization, which is critical to early intervention.

There are many barriers to treatment, primarily due to delays in hospital arrival after symptom onset.5 Patients who don’t call 9-1-1, have a stroke history or mild symptoms, or who are ethnic minorities or live in rural communities, all have lower rates of treatment.7

Most importantly for prehospital providers, the early EMS notification of the receiving hospital will make timely treatment more likely. Therefore, barriers should be identified to ensure optimal care.

Prehospital Assessment

As with all initial assessments, the airway should be assessed in standard fashion, but note that stroke patients may have difficulty managing their secretions and could be prone to vomiting.

If possible, the head of the stretcher should be elevated to 30 degrees. If the patient’s symptoms worsen with the head of the bed elevated, place the patient’s head back to flat since the patient may require the higher blood pressure to perfuse the area of stroke.

For most stroke patients, breathing isn’t substantially altered, but if it is, ventilatory assistance is warranted. Hyperventilation should be avoided unless the patient’s presentation suggests impending herniation (i.e., hypertension, bradycardia, irregular respiratory pattern) and is approved by medical control.

Circulatory status is assessed with vital signs and ECG monitoring, as stroke patients are at risk for dysrhythmias. Reassessment is required as the patient’s condition may dramatically change en route.

After the primary survey and baseline vitals, performance of a validated stroke assessment tool aids in the recognition of possible stroke. There are a variety of prehospital scales and screens that are widely used, but the most common are the Cincinnati Prehospital Stroke Scale and the Los Angeles Motor Scale.20,21 (See Table 2 below.)

Table 2: Prehospital stroke scales

If any one of these findings is abnormal, the screen has a reported sensitivity of 88% for anterior circulation stroke.

The LAMS score is closely correlated with the full National Institutes of Health Stroke Scale. A LAMS > 4 carries an over seven-fold increase in risk for large vessel occlusion.
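
To make the scoring concrete, here is a minimal sketch of how a LAMS total and the large vessel occlusion flag described above might be computed. The three item weights follow the published scale (facial droop 0–1, arm drift 0–2, grip strength 0–2), but the function itself is illustrative, not a validated clinical tool.

```python
def lams_score(facial_droop, arm_drift, grip_strength):
    """Los Angeles Motor Scale (LAMS) total, 0-5.

    facial_droop  -- 0 absent, 1 present
    arm_drift     -- 0 absent, 1 drifts down, 2 falls rapidly
    grip_strength -- 0 normal, 1 weak grip, 2 no grip
    """
    return facial_droop + arm_drift + grip_strength

def suggests_large_vessel_occlusion(score):
    # Per the article, a LAMS above 4 carries an over seven-fold
    # increase in risk for large vessel occlusion.
    return score > 4

total = lams_score(facial_droop=1, arm_drift=2, grip_strength=2)
print(total, suggests_large_vessel_occlusion(total))  # 5 True
```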

 

 

These screening tools attempt to balance ease of use with accuracy in order to help identify the presence of neurologic impairment, but they have some limitations.

First, the gross motor exams utilized can miss subtle strokes, and most prehospital stroke scales fail to grade the severity of the stroke, which may have implications in selecting an appropriate destination facility. Second, these scales may suggest a stroke when another cause of the patient’s symptoms exists, conditions termed “mimics.” (See Table 3 below.)

Table 3: Conditions with stroke-like symptoms (mimics) and unique features

More comprehensive, graded exams may help to identify and quantify specific stroke characteristics that assist in determining triage and treatment options.22 These scales are also available online and as smartphone applications. Unfortunately, these scales are more time consuming and may be more difficult to remember than the earlier stroke assessment tools, but in conjunction with a good patient history, the newer scales can provide a clearer picture of the patient’s condition.

Stroke mimics may account for more than 20% of patients with neurologic symptoms being considered as an acute stroke in the prehospital setting.23

Although it’s impossible to exclude all stroke mimics in the prehospital setting, a basic understanding of mimics should prompt providers to ask pertinent questions of the patient or family, which may lead to more effective patient care. Regardless of the stroke tool used, providers should always consider stroke mimics in their differential but should err on the side of treating the patient as a stroke.

The patient’s medical history is critically important. Particular attention should be paid to potential stroke risk factors, such as atrial fibrillation, hypertension, diabetes, previous strokes, transient ischemic attacks, recent surgeries and smoking.6

One of the most important elements of the patient’s history is the time of symptom onset, which will dictate many treatment options. The time of onset is based on the last time the patient was known to be “normal” or at their baseline, as opposed to when the patient was found with the neurologic deficits.

Current guidelines support the use of rt-PA within 0–4.5 hours in carefully selected patients and endovascular therapies up to 8 hours from symptom onset in patients with more severe strokes.

It’s also important to document the patient’s baseline physical and mental state, especially for patients with previous neurologic, physical or cognitive deficits. To determine last “normal” time, ask the patient, family members, caregivers or bystanders.

If they’re unsure about a specific time, inquire about other time clues such as daily routines, TV shows or recent phone conversations.6

The process of stroke identification in the prehospital setting is constantly evolving as treatments become more advanced. Consider stroke severity and time from symptom onset when triaging a patient; this also helps to provide important prearrival information to the stroke team.

Rapid Initiation of Treatment & Transport

Once the assessment and history are complete, prehospital focus should be on rapid initiation of treatment and transport. On-scene time should be less than 15 minutes whenever possible and the patient should be treated with the same urgency as major trauma or STEMI.6

The management plan should include frequent reassessment and management of the ABCs, as well as vital signs and cardiac and pulse oximetry monitoring. (See Table 4 below.) If necessary, oxygen therapy should be applied to maintain an SpO2 above 94%, though supplemental oxygen isn’t recommended in nonhypoxic patients with acute ischemic stroke.6

Table 4: American Heart Association recommendations for prehospital management of potential stroke6

Finger-stick blood glucose testing should be performed in all patients with stroke-like symptoms, and hyper- or hypoglycemia should be corrected accordingly.

Other interventions are rarely required in the prehospital setting, unless the patient begins to decompensate with airway or ventilatory compromise, cardiac dysrhythmias or hemodynamic instability. Currently, there’s no evidence to support prehospital lowering of blood pressure in hypertensive patients with possible stroke, and in some cases lowering blood pressures to normal levels could exacerbate the patient’s symptoms. Prehospital administration of aspirin or other antithrombotic agents to these patients is also not supported by published studies at this time.
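
Restating the guidance above as a simple checklist may help clarify it. The sketch below is purely illustrative: the numeric glucose cutoffs are placeholders rather than values from the guidelines, and the function is not an actual EMS protocol.

```python
def prehospital_stroke_checklist(spo2_percent, glucose_mg_dl):
    """Illustrative restatement of the prehospital guidance described above."""
    actions = ["monitor ABCs, vital signs, cardiac rhythm and pulse oximetry"]

    # Oxygen only as needed to keep SpO2 above 94%; not for nonhypoxic patients.
    if spo2_percent < 94:
        actions.append("apply oxygen to maintain SpO2 above 94%")

    # Check glucose in every patient with stroke-like symptoms; correct extremes.
    # (Cutoffs below are placeholders for whatever local protocol specifies.)
    if glucose_mg_dl < 60:
        actions.append("treat hypoglycemia")
    elif glucose_mg_dl > 300:
        actions.append("flag hyperglycemia for the receiving facility")

    # Interventions the article notes are NOT supported in the field.
    actions.append("do not lower blood pressure prehospital")
    actions.append("do not give aspirin or other antithrombotics prehospital")
    return actions

print(prehospital_stroke_checklist(spo2_percent=91, glucose_mg_dl=54))
```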

Transport and destination facilities are critical decisions in effective stroke treatment. As previously mentioned, patient outcomes are better if they’re treated at a stroke center, though in most cases bypassing the closest facility for a higher level of care shouldn’t extend transport time more than 20 minutes.

Use of helicopter EMS (HEMS) increases access to thrombolytics for patients residing in communities that lack specialty facilities and should be utilized when necessary.24–26

If the patient is too unstable for a prolonged transport and HEMS isn’t possible, transport to the closest facility for rapid assessment, stabilization and preparation for transfer to a stroke center.

Early EMS notification of the receiving hospital is critical and has been clearly shown to reduce ED times to definitive treatment.19 Reports should include last known normal time, stroke screen results, vital signs, blood glucose, current medications and any acute interventions.

ED Management

With prehospital notification, all the necessary components of the hospital-based stroke team can be at bedside prior to patient arrival. A rapid assessment of the ABCs on arrival allows for most patients to be taken directly to the CT scanner by EMS, significantly reducing imaging delays. Concurrent physical evaluation, diagnostic testing and medical history review substantially reduce door-to-needle time to less than the currently recommended 60 minutes.

A recent study of 58,353 patients treated with IV rt-PA clearly demonstrated the importance of time to treatment, finding that “among 1,000 treated patients, every 15-minute-faster acceleration of treatment was associated with 18 more patients having improved ambulation at discharge … and 13 more patients being discharged to a more independent environment.”27

Interhospital Transfers

The “drip and ship” practice—assessing a patient and initiating thrombolytics before transfer to a higher level of care—may be an appropriate treatment choice for patients presenting within the therapeutic window.5

EMS providers involved in such patient transfers should carefully monitor vital signs and neurologic exams. Blood pressure must be maintained below 180/105 mmHg after thrombolytics. Clinical deterioration may indicate an intracranial hemorrhage.

Air medical transport has been shown to be safe and effective, including for those patients who’ve received thrombolytics. As with field transport of stroke patients, early hospital notification is critical. Preplanning of the transfer process is key to minimizing delays once a stroke patient requires transfer to a higher level of care.

The Future of Prehospital Stroke Care

Recognizing the need for timely interventions, more and more hospital-based strategies are being deployed in the prehospital setting, including diagnostic tools, directed therapies and physiologic management.

To further minimize delays to treatment, some centers are equipping ambulances with mobile CT scanners, video telemetry and, in some cases, neurologists. This model employed in a study by Audebert and colleagues in Berlin led to a reduction in call-to-needle time of 36 minutes.28

Similar models are being studied in Houston and Cleveland. This high-tech, high-resource approach may not be broadly applicable in more rural areas, but demonstrates the growing appreciation for incorporating the prehospital setting in acute treatment paradigms.

No drug or therapy administered in the prehospital setting has been shown to improve patient outcomes, but recent studies show early treatment is feasible.29

Summary

Stroke is a time-dependent emergency, and prehospital involvement is crucial for maximizing patient outcomes. Organizing regional resources into a cohesive system of care is the cornerstone of stroke care.

Early EMS activation, identification, management, and rapid transport and triage to the most appropriate stroke center will give the patient the best chance to make a full recovery.

Pectus excavatum.


Presentation:
Abdominal pain ?perforated viscus.

Patient Data:
Age: 52
Gender: Female
Case Discussion:
Pectus excavatum is a common, congenital deformity of the anterior chest wall. It results in easily recognisable chest x-ray findings:

blurring of right heart border
increased density of the inferomedial lung zone
horizontal posterior ribs
vertical anterior ribs (heart shaped)
displacement of heart towards the left
obliteration of the descending aortic interface
Differential diagnosis includes:

right middle lobe consolidation/atelectasis
left para-aortic soft tissue density (e.g. mass)
mediastinal mass due to deformation of the cardiomediastinal contour

Investigating an Alternative Approach to Cancer Immunotherapy: Eliminating Immune System-suppressing Treg Cells


The number of anticancer immunotherapeutics approved by the U.S. Food and Drug Administration (FDA) is rising rapidly. In fact, the four anticancer immunotherapeutics approved by the FDA in the 12 months covered by the recently released AACR Cancer Progress Report 2015—Aug. 1, 2014, to July 31, 2015—was the greatest number of anticancer immunotherapeutics approved by the agency in any 12-month period to date.


Much of the media attention has focused on FDA-approved anticancer immunotherapeutics like pembrolizumab (Keytruda) and nivolumab (Opdivo), which are referred to as checkpoint inhibitors. As I have explained in a previous post on this blog, these agents work by releasing brakes on immune cells called T cells, which have the natural potential to recognize and eliminate cancer cells. Research has shown, however, that triggering brakes on T cells is just one way in which tumors can evade destruction by the immune system.

A study just published in the AACR journal Clinical Cancer Research reported results of a phase I clinical trial in which researchers investigated targeting another mechanism by which tumors are thought to avoid cancer-fighting T cells. Specifically, the researchers sought to eliminate cells—called Treg cells—that can inhibit anticancer immune responses.

What are Treg cells and why are they a potential target for cancer treatment?

Regulatory T cells, so-called Treg cells, are a subset of T cells that are defined by expression of two proteins, CD4 and FoxP3. These cells are critical for keeping other immune cells in check; they help prevent the immune system from attacking normal cells and tissues and causing autoimmune disorders.

Preclinical studies have shown that Tregs can also prevent the immune system from attacking tumors and clinical research has shown that these cells accumulate in the tumors of some patients with cancer. These observations led to the idea that eliminating Tregs in patients with cancer might unleash the natural potential of the patient’s cancer-fighting T cells, and to several clinical trials testing this hypothesis.

The Clinical Cancer Research study

In the phase Ia clinical trial reported in Clinical Cancer Research, the researchers evaluated whether using a therapeutic antibody called KW-0761 (mogamulizumab), which targets the protein CCR4, would eliminate Tregs in the blood of patients with lung or esophageal cancer. KW-0761 has been approved in Japan for the treatment of relapsed or refractory adult T-cell leukemia/lymphoma.

Dr. Ueda.

One of the senior authors on the paper, Ryuzo Ueda, MD, PhD, professor in the Department of Tumor Immunology at Aichi Medical College in Nagoya, Japan, explained in a news release that they used KW-0761 because activated FoxP3+CD4+ Tregs that accumulate in tumor tissue have been shown to express CCR4 molecules on their surface.

The research team enrolled seven patients with non–small cell lung cancer and three patients with esophageal cancer in the clinical trial. After analyzing blood samples obtained before the first treatment with KW-0761 and then every four weeks, they found that the number of FoxP3+CD4+ Tregs in the blood of all patients was dramatically reduced following treatment with KW-0761.

There were no dose-limiting toxicities and most adverse events were grade 1 or grade 2, with skin-related adverse events occurring most frequently.

Dr. Nakayama.

“We were pleased to see that infusion of even a small amount of the KW-0761 efficiently depleted Tregs from the peripheral blood for a long time [several months],” said the co-senior author on the study, Eiichi Nakayama, MD, PhD, a professor at Kawasaki University of Medical Welfare in Kurashiki, Japan. “Unfortunately, we observed only a modest induction of antitumor immune responses and no marked clinical responses with KW-0761 monotherapy.”

What happens next?

Nakayama explained that the research team is planning to investigate whether combining Treg depletion with other immunotherapies, such as checkpoint inhibitors, can augment the antitumor immune response in patients with cancer.

However, the authors of a commentary published in Clinical Cancer Research in June 2015 emphasize that we still have much to learn about the role of Tregs in the initiation, development, and progression of different cancer types. They note that some studies point to these cells as not always being “bad guys”; in some cases they may even help prevent cancer from developing by keeping inflammation, which can drive cancer initiation, in check.

Groundbreaking Cancer Immunotherapeutic Approved by FDA


The U.S. Food and Drug Administration (FDA) announced the highly anticipated approval of the cancer immunotherapeutic pembrolizumab (Keytruda) for the treatment of certain patients with metastatic melanoma, the most deadly form of skin cancer.

3D structure of a melanoma cell

Pembrolizumab is the first in a new class of cancer immunotherapeutics called PD-1 inhibitors to be approved by the FDA. While the FDA decision came earlier than initially expected, the excitement surrounding it has been palpable for a while because pembrolizumab, as well as other PD-1 inhibitors, has been yielding dramatic and durable responses for patients with metastatic melanoma. In fact, many patients are continuing to benefit from pembrolizumab more than one year after starting treatment.

As I discussed in an earlier blog post, “Cancer Immunotherapy: Breaking Through to the Standard of Care,” PD-1 inhibitors work by releasing the PD-1 brake on cancer-fighting immune cells called T cells. Once the PD-1 brake is released, the T cells are able to carry out their natural function and can destroy cancer cells.

Louis M. Weiner, MD, director of the Georgetown Lombardi Comprehensive Cancer Center and a spokesman for the American Association for Cancer Research, told the New York Times: “This is really opening up a whole new avenue of effective therapies previously not available. It allows us to see a time when we can treat many dreaded cancers without resorting to cytotoxic chemotherapy.”

The patients who will benefit from yesterday’s FDA approval are those with metastatic melanoma that does not respond, or has stopped responding, to another cancer immunotherapeutic, ipilimumab (Yervoy). Ipilimumab targets another T-cell brake, CTLA4. A substantial number of patients with metastatic melanoma are still benefiting from ipilimumab more than five years after starting treatment, and it is hoped that pembrolizumab and other PD-1 inhibitors will have a similar impact so that significant inroads can be made against metastatic melanoma – a disease that has an overall five-year survival rate of only 16 percent.

Metastatic melanoma is a cancer diagnosis projected to be received by more than 3,000 U.S. residents in 2014 alone. With PD-1 inhibitors also showing tremendous promise in clinical trials as a potential treatment for other types of cancer, in particular non-small cell lung cancer – a disease that more than 180,000 individuals in the United States are expected to be diagnosed with in 2014 – it is hoped that additional FDA approvals for this groundbreaking class of drugs lie in the near future.

FDA Approves First Immunotherapy-Companion Diagnostic Combo for Lung Cancer


On the heels of the U.S. Food and Drug Administration’s approval of a combination of immune checkpoint inhibitors for unresectable and metastatic melanoma comes yet another immunotherapy approval. This time it is pembrolizumab (Keytruda), an immune checkpoint inhibitor, being approved for patients with metastatic non-small cell lung cancer (NSCLC) that has progressed after other treatments, and whose tumors express the protein PD-L1.

Illustration of a lung cancer cell during cell division.

This is the first immunotherapeutic to be approved in conjunction with a companion diagnostic test, the PD-L1 IHC 22C3 pharmDx test, which can detect the presence of the protein PD-L1 in non-small cell lung tumors.

Pembrolizumab is the second drug in the class of immune checkpoint inhibitors to be approved for NSCLC, the first being nivolumab, which was approved for treating squamous NSCLC in March this year. Pembrolizumab works by blocking the PD-1/PD-L1 pathway that cancer cells sometimes engage in order to apply “brakes” on cancer-fighting T cells, preventing the T cells from doing their job. Blocking the PD-1/PD-L1 pathway releases the brakes on T cells and enables them to fight cancer cells.

The FDA’s approval is based on the results of a randomized clinical trial of 550 patients with advanced NSCLC. In a subgroup of patients who received pembrolizumab after their lung cancer progressed following chemotherapy or targeted therapy and whose tumors had PD-L1, the overall response rate was 41 percent, and the treatment effect lasted between 2.1 and 9.1 months.

Early results from this trial were presented in April at the AACR Annual Meeting 2015. At that time, describing the promising data from the study, Edward B. Garon, MD, associate professor of medicine at the David Geffen School of Medicine at the University of California, Los Angeles, said in a press release, “Neither the drug nor the biomarker test is approved for use in this setting at this time, but if I had a patient whose tumor had PD-L1 expression on at least half of the cells and if pembrolizumab was available, I think that I would find the data compelling to look at the drug as the treatment option for that patient.” The results of this study were published in the New England Journal of Medicine.

The Diagnostic Tests Dilemma

While immune checkpoint inhibitors are one of the most promising classes of drugs to be tested and approved in recent years, their use is not without challenges.

As individual pharmaceutical companies continue to develop diagnostic tests to identify the patients most likely to benefit from the immune checkpoint inhibitor each is developing, and as more and more therapeutics from this class of drugs are approved by the FDA for a range of cancer types, identifying the right drug in a reasonable timeframe, and doing so cost-effectively, poses challenges for both the patient and his or her physician.

Recognizing the challenges and in an effort to develop some solutions, the FDA, the AACR, and the American Society of Clinical Oncology (ASCO) held a one-day workshop in March this year in Washington, D.C., titled “Complexities in Personalized Medicine: Harmonizing Companion Diagnostics across a Class of Targeted Therapies.”

A significant development to emerge from the workshop was a commitment from six companies to work together in the pre-competitive space and analytically characterize the performance of their individual PD-1/PD-L1 companion diagnostic test systems. The project is continuing to make strides, and results are expected to help build an evidence base for post-approval studies that will help inform patients, physicians, pathologists, and others on how best to use the tests to determine treatment decisions.

Entanglement: Gravity’s long-distance connection


When Albert Einstein scoffed at a “spooky” long-distance connection between particles, he wasn’t thinking about his general theory of relativity.

Einstein’s century-old theory describes how gravity emerges when massive objects warp the fabric of space and time. Quantum entanglement, the spooky source of Einstein’s dismay, typically concerns tiny particles that contribute insignificantly to gravity. A speck of dust depresses a mattress more than a subatomic particle distorts space.

Yet theoretical physicist Mark Van Raamsdonk suspects that entanglement and spacetime are actually linked. In 2009, he calculated that space without entanglement couldn’t hold itself together. He wrote a paper asserting that quantum entanglement is the needle that stitches together the cosmic spacetime tapestry.

Multiple journals rejected his paper. But in the years since that initial skepticism, investigating the idea that entanglement shapes spacetime has become one of the hottest trends in physics. “Everything points in a really compelling way to space being emergent from deep underlying physics that has to do with entanglement,” says John Preskill, a theoretical physicist at Caltech.

In 2012, another provocative paper presented a paradox about entangled particles inside and outside a black hole. Less than a year later, two experts in the field proposed a radical resolution: Those entangled particles are connected by wormholes — spacetime tunnels imagined by Einstein that nowadays appear as often in sci-fi novels as in physics journals. If that proposal is correct, then entanglement isn’t the spooky long-distance link that Einstein thought it was — it’s an actual bridge linking distant points in space.

Many researchers find these ideas irresistible. Within the last few years, physicists in seemingly unrelated specialties have converged on this confluence of entanglement, space and wormholes. Scientists who once focused on building error-resistant quantum computers are now pondering whether the universe itself is a vast quantum computer that safely encodes spacetime in an elaborate web of entanglement. “It’s amazing how things have been progressing,” says Van Raamsdonk, of the University of British Columbia in Vancouver.

Physicists have high hopes for where this entanglement-spacetime connection will lead them. General relativity brilliantly describes how spacetime works; this new research may reveal where spacetime comes from and what it looks like at the small scales governed by quantum mechanics. Entanglement could be the secret ingredient that unifies these supposedly incompatible views into a theory of quantum gravity, enabling physicists to understand conditions inside black holes and in the very first moments after the Big Bang.

Holograms and soup cans

Van Raamsdonk’s 2009 insight didn’t materialize out of thin air. It’s rooted in the math of the holographic principle, the idea that the boundary enclosing a volume of space can contain all the information about what’s inside. If the holographic principle applied to everyday life, then a nosy employee could perfectly reconstruct the inside of a coworker’s office cubicle — piles of papers, family photos, dust bunnies in the corner, even files on the computer’s hard drive — just by looking at the cubicle’s outer walls. It’s a counterintuitive idea, considering walls have two dimensions and a cubicle’s interior has three. But in 1997, Juan Maldacena, a string theorist then at Harvard, perceived an intriguing example of what the holographic principle could reveal about the universe (SN: 11/17/07, p. 315).

He started with anti-de Sitter space, which resembles the universe’s gravity-dominated spacetime but also has some quirky attributes. It is curved in such a way that a flash of light emitted at any location eventually returns to where it started. And while the universe is expanding, anti-de Sitter space neither stretches nor contracts. Because of these features, a chunk of anti-de Sitter spacetime with four dimensions (three spatial, one time) can be surrounded by a three-dimensional boundary.

Maldacena considered a cylinder of anti-de Sitter spacetime. Each horizontal slice of the cylinder represented the state of its space at a given moment, while the cylinder’s vertical dimension represented time. Maldacena surrounded his cylinder with a boundary for the hologram; if the anti-de Sitter space were a can of soup and its contents, then the boundary was the label.

Just as nobody would mistake a Campbell’s label for the actual soup, the boundary seemingly shared nothing in common with the cylinder’s interior. The boundary “label,” for instance, observed the rules of quantum mechanics, with no gravity. Yet gravity described the space inside containing the “soup.” Maldacena showed, though, that the label and the soup were one and the same; the quantum interactions on the boundary perfectly described the anti-de Sitter space it enclosed. “They are two theories that seem completely different but describe exactly the same thing,” Preskill says.

Maldacena added entanglement to the holographic equation in 2001. He considered the space within two soup cans, each containing a black hole. Then he created the equivalent of a tin can telephone by connecting the black holes with a wormhole — a tunnel through spacetime first proposed by Einstein and Nathan Rosen in 1935. Maldacena looked for a way to create the equivalent of that spacetime connection on the cans’ labels. The trick, he realized, was entanglement.

Like a wormhole, quantum entanglement links entities that share no obvious relationship. The quantum world is a fuzzy place: An electron can seemingly be spinning up and down simultaneously, a state called superposition, until a measurement provides a definitive answer. But if two electrons are entangled, then measuring the spin of one enables an experimenter to know what the spin of the other will be — even though the partner electron is still in a superposition state. This quantum link remains if the electrons are separated by meters, kilometers or light-years.
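
The article stays qualitative, but the textbook object behind this description is a Bell state of two spins: neither particle has a definite spin on its own, yet a measurement on one instantly fixes what a measurement on the other will show. The notation below is the standard form, not something taken from the research discussed here.

```latex
% A maximally entangled pair of spin-1/2 particles (a Bell state).
% Measuring particle A as "up" means particle B will be found "down",
% and vice versa, regardless of the distance between them.
\[
  \lvert \Psi^- \rangle \;=\; \frac{1}{\sqrt{2}}
  \bigl( \lvert{\uparrow}\rangle_A \lvert{\downarrow}\rangle_B
       - \lvert{\downarrow}\rangle_A \lvert{\uparrow}\rangle_B \bigr)
\]
```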

QUANTUM SKEPTICS A New York Times article on May 4, 1935, highlighted Einstein’s concerns about quantum mechanics, especially its feature now known as entanglement. Today physicists are exploring links between entanglement and Einstein’s general theory of relativity.

NEW YORK TIMES/WIKIMEDIA COMMONS

Maldacena demonstrated that by entangling particles on one can’s label with particles on the other, he could perfectly describe the wormhole connection between the cans in the language of quantum mechanics. In the context of the holographic principle, entanglement is equivalent to physically tying chunks of spacetime together.

Inspired by this entanglement-spacetime link, Van Raamsdonk wondered just how large a role entanglement might play in shaping spacetime. He considered the blandest quantum soup-can label he could think of: a blank one, which corresponded to an empty disk of anti-de Sitter space. But he knew that because of quantum mechanics, empty space is never truly empty. It is filled with pairs of particles that blink in and out of existence. And those fleeting particles are entangled.

So Van Raamsdonk drew an imaginary line bisecting his holographic label and then mathematically severed the quantum entanglement between particles on one half of the label and those on the other. He discovered that the corresponding disk of anti-de Sitter space started to split in half. It was as if the entangled particles were hooks that kept the canvas of space and time in place; without them, spacetime pulled itself apart. As Van Raamsdonk decreased the degree of entanglement, the portion connecting the diverging regions of space got thinner, like the rubbery thread that narrows as a chewed wad of gum is pulled apart. “It led me to suggest that the origin of having space at all is having this entanglement,” he says.

That was a bold claim, and it took a while for Van Raamsdonk’s paper, published in General Relativity and Gravitation in 2010, to garner serious attention. The spark came in 2012, when four physicists at the University of California, Santa Barbara wrote a paper challenging conventional wisdom about the event horizon, a black hole’s point of no return.

Insight behind a firewall

In the 1970s, theoretical physicist Stephen Hawking showed that pairs of entangled particles — the same kinds Van Raamsdonk later analyzed on his quantum boundary — can get split up at the event horizon. One falls into the black hole, and the other escapes as what’s known as Hawking radiation. The process gradually saps the mass of a black hole, ultimately leading to its demise. But if black holes disappear, then so would the record of everything that ever fell inside. Quantum theory maintains that information cannot be destroyed.

By the 1990s several theoretical physicists, including Stanford’s Leonard Susskind, had proposed resolutions of the issue. Sure, they said, matter and energy fall into a black hole. But from the perspective of an outside observer, that stuff never quite makes it past the event horizon; it seemingly teeters on the edge. As a result, the event horizon becomes a holographic boundary containing all the information about the space inside the black hole. Eventually, as the black hole shrivels away, that information will leak out as Hawking radiation. In principle, the observer could collect the radiation and piece together information about the black hole’s interior.

In their 2012 paper, Santa Barbara physicists Ahmed Almheiri, Donald Marolf, James Sully and Joseph Polchinski claimed something was wrong with that picture. For an observer to assemble the puzzle of what’s inside a black hole, they noted, all the individual puzzle pieces — the particles of Hawking radiation — would have to be entangled with each other. But each Hawking particle also has to be entangled with its original partner that fell into the black hole.

Unfortunately, there is not enough entanglement to go around. Quantum theory dictates that the entanglement required to link all the particles outside the black hole precludes those particles from also linking up with particles inside the black hole. Compounding the problem, the physicists found that severing one of those entanglements would create an impenetrable wall of energy, called a firewall, at the event horizon (SN: 5/31/14, p. 16).

Many physicists doubted that black holes actually vaporize everything trying to enter. But the mere possibility that firewalls exist had disturbing implications. Previously, physicists had wondered what the space inside a black hole looked like. Now they weren’t sure whether black holes even had an inside. “It was kind of humbling,” Preskill says.

Susskind was not so much humbled as restless. He had spent years trying to show that information wasn’t lost inside a black hole; now he was just as convinced that the firewall idea was wrong, but he couldn’t prove it. Then one day he received a cryptic email from Maldacena: “It had very little in it,” Susskind says, “except for ER = EPR.” Maldacena, now at the Institute for Advanced Study in Princeton, N.J., had thought back to his 2001 paper on interconnected soup cans and wondered whether wormholes could resolve the entanglement mess raised by the firewall problem. Susskind quickly jumped on the idea.

In a paper in the German journal Fortschritte der Physik in 2013, Maldacena and Susskind argued that a wormhole — technically, an Einstein-Rosen bridge, or ER — is the spacetime equivalent of quantum entanglement. (EPR stands for Einstein, Boris Podolsky and Rosen, authors of the 1935 paper that belittled entanglement.) That means that every particle of Hawking radiation, no matter how far away it is from where it started, is directly connected to a black hole’s interior via a shortcut through spacetime. “Through the wormhole, the distant stuff is not so distant,” Susskind says.

Susskind and Maldacena envisioned gathering up all the Hawking particles and smushing them together until they collapse into a black hole. That black hole would be entangled, and thus connected via wormhole, with the original black hole. That trick transformed a confusing mess of Hawking particles — paradoxically entangled with both a black hole and each other — into two black holes connected by a wormhole. Entanglement overload is averted, and the firewall problem goes away.

Not everyone has jumped aboard the ER = EPR bandwagon. Susskind and Maldacena admit they have more work to do to prove the equivalence of wormholes and entanglement. But after pondering the implications of the firewall paradox, many physicists agree that the spacetime inside a black hole owes its existence to entanglement with radiation outside. That’s a major insight, Preskill says, because it also implies that the entire universe’s spacetime fabric, including the patch on which we reside, is a product of quantum spookiness.

Cosmic computer

It’s one thing to say the universe constructs spacetime through entanglement; it’s another to show how the universe does it. The trickier of those assignments has fallen on Preskill and colleagues, who have come to view the cosmos as a colossal quantum computer. For two decades scientists have worked on building quantum computers that use information encoded in entangled entities, such as photons or tiny circuits, to solve problems intractable on traditional computers, such as factoring large numbers. Preskill’s team is using knowledge gained in that effort to predict how particular features inside a soup can would be depicted on the entanglement-filled label.

Quantum computers work by exploiting components that are in superposition states as data carriers — they can essentially be 0s and 1s at the same time. But superposition states are very fragile. Too much heat, for example, can destroy the state and all the quantum information it carries. These information losses, which Preskill compares to having pages torn out of a book, seem inevitable.

But physicists responded by creating a protocol called quantum error correction. Instead of relying on one particle to store a quantum bit, scientists spread the data among multiple entangled particles. A book written in the language of quantum error correction would be full of gibberish, Preskill says, but its entire contents could be reconstructed even if half the pages were missing.
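
The simplest textbook instance of this idea is the three-qubit repetition code, shown below. It is a heavily simplified illustration of spreading one quantum bit across several entangled particles, not the construction used in the holographic work.

```latex
% Three-qubit repetition code: one logical qubit stored in three physical qubits.
% A bit flip on any single qubit can be detected and corrected (in effect, by
% majority vote), so the encoded information survives the loss of one "page of the book."
\[
  \alpha\lvert 0\rangle + \beta\lvert 1\rangle
  \;\longmapsto\;
  \alpha\lvert 000\rangle + \beta\lvert 111\rangle
\]
```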

Quantum error correction has attracted a lot of attention in recent years, but now Preskill and his colleagues suspect that nature came up with it first. In the June Journal of High Energy Physics, Preskill’s team showed how the entanglement of multiple particles on a holographic boundary perfectly describes a single particle being pulled by gravity within a chunk of anti-de Sitter space. Maldacena says this insight could lead to a better understanding of how a hologram encodes all the details about the spacetime it surrounds.

Physicists admit that their approximations have a long way to go to match reality. While anti-de Sitter space offers physicists the advantage of working with a well-defined boundary, the universe doesn’t have a straightforward soup-can label. The spacetime fabric of the cosmos has been expanding since the Big Bang and continues to do so at an increasing clip. If you shoot a pulse of light into space, it won’t turn around and come back; it will just keep going. “It is not clear how to define a holographic theory for our universe,” Maldacena wrote in 2005. “There is no convenient place to put the hologram.”

Yet as crazy as holograms, soup cans and wormholes sound, they seem to be promising lenses in the search for a way to meld quantum spookiness with spacetime geometry. In their paper on wormholes, Einstein and Rosen discussed possible quantum implications but didn’t make a connection to their earlier entanglement paper. Today that link may help reconcile quantum mechanics and general relativity in a theory of quantum gravity. Armed with such a theory, physicists could dig into mysteries such as the state of the infant universe, when matter and energy were packed into an infinitesimally small space. “We don’t really know the answers yet by any means,” Preskill says. “But we’re excited to find a new way of looking at things.”

 

Watch the video. URL: https://youtu.be/6yfWdb-JOA8

Adidas May Have Solved The Problem Of Uncomfortable Shoes


Shoe shopping can feel like a series of compromises. “This pinches a little, but maybe that’s okay.” “It’s a little roomy in the toe.” “If only adults could still wear velcro.”

Adidas is working on a solution.

The company on Wednesday unveiled “Futurecraft 3D,” an experimental initiative intended to perfect the midsole — that main, bottom part of your shoe that provides cushioning and support.

“Imagine walking into an Adidas store, running briefly on a treadmill and instantly getting a 3D-printed running shoe — this is the ambition of the Adidas 3D-printed midsole,” the company said in a press release.

Like Lexus’ cardboard car, this is really more of a marketing concept than something you’ll realistically be able to buy in the near future. But Adidas, which partnered with 3D-printing company Materialise to create the experimental kicks, does appear to hope this is the start of something real.

“Futurecraft 3D is a prototype and a statement of intent,” Eric Liedtke, head of global brands at Adidas, said in the press release.

Take a look at the shoe below:


 

Depression Hurts, Your Bones Included


Growing evidence suggests that depression, one of the most common diseases of the brain, is so powerful it can actually erode bones in the body.


Our bones are constantly remodeling themselves – they build themselves up and break themselves down over and over again. Depression is like a severe and prolonged state of stress on bones that may weaken them, making osteoporosis more likely. Depression causes blood pressure and heart rate to increase and also causes the brain to produce dangerously high levels of hormones – it has also been suggested that specific hormonal changes associated with depression lead to bone loss.

Depression not only affects your brain and behavior—it affects your entire body, and that includes your bone health and risk of developing osteoporosis.

Someone suffering from depression might experience bouts of insomnia, loss of appetite, and overall lethargy, and these are all contributors to poor bone health. Studies show that older people with depression are more likely to have low bone mass than older people who aren’t depressed, and low bone mass is the biggest indicator of osteoporosis.

Younger women with depression may also be at risk for osteoporosis. One study found that among women who have not yet reached menopause, those with mild depression have less bone mass than those who aren’t depressed. Men who are depressed seem to lose bone even more rapidly and to a greater extent than women; however, since bone density in men is greater to begin with, their risk of fracture may be somewhat lower than women’s.

Medication

While currently available depression treatments are generally well tolerated and safe, some medications, including some antidepressants, anticonvulsants, and lithium, can increase your risk for osteoporosis. Certain medications can also increase your risk of falling, which is dangerous if you already have osteoporosis. The class of antidepressants known as SSRIs may be associated with higher rates of bone loss in older women.

A recent study funded by the NIH showed an association between SSRI use and hipbone loss in older women. In patients with depression and those on SSRIs, attention should be directed to the heightened risk of fragility fracture – a broken bone that’s caused by a fall from a standing height or less, indicating an underlying bone disorder like osteoporosis.

Lifestyle

If you have osteoporosis, you may need to make lifestyle changes, and these changes may actually increase your risk of depression. For example:

– To prevent falls that could cause already fragile bones to fracture or break, you may not be able to take part in some activities you once enjoyed.
– Weakened bones may make it harder to perform everyday tasks, and you could lose some of your independence.
– You may feel nervous about going to crowded places, such as malls or movie theaters, for fear of falling and breaking a bone.

But it can go the other way, too. Exercise is an important part of osteoporosis treatment, particularly activities in which you support your weight on your feet. These activities help to strengthen bones and muscles that can prevent falls and can also boost your mood and improve your depression.

People with depression and low bone mass need to try very hard to adopt bone health strategies, including use of supplements, quitting smoking, limiting alcohol consumption to fewer than two glasses a day, and participating in weight-bearing exercises and fall-prevention programs. Because early osteoporosis is primarily a silent disease, knowing that even mild depression can lead to bone loss years before fractures occur is of major clinical importance. Orthopedic surgeons should be aware of the association between depression and osteoporosis as well as the higher rate of bone loss in patients on SSRI medication for their depression. Treating depression can help you manage your osteoporosis and improve your overall health. Recovery from depression takes time but treatments are effective.