Connect the quantum dots for a full-colour image


Nanocrystal display could be used in high-resolution, low-energy televisions.


Ink stamps have been used to print text and pictures for centuries. Now, engineers have adapted the technique to build pixels into the first full-colour ‘quantum dot’ display — a feat that could eventually lead to televisions that are more energy-efficient and have sharper screen images than anything available today.

Engineers have been hoping to make improved television displays with the help of quantum dots — semiconducting crystals billionths of a metre across — for more than a decade. The dots could produce much crisper images than those in liquid-crystal displays, because quantum dots emit light at an extremely narrow, and finely tunable, range of wavelengths.

The colour of the light generated depends only on the size of the nanocrystal, says Byoung Lyong Choi, an electronic engineer at the Samsung Advanced Institute of Technology in Yongin, South Korea. Quantum dots also convert electrical power to light efficiently, making them ideal for use in energy-saving lighting and display devices.

Easier said than done

Attempts to commercialize the technology have been hampered because it is difficult to make large quantum-dot displays without compromising the quality of the image. The dots are usually layered onto the material used to make the display by spraying them onto the surface — a technique similar to that of an ink-jet printer. But the dots must be prepared in an organic solvent, which “contaminates the display, reducing the brightness of the colours and the energy efficiency”, says Choi.

Choi and his colleagues have now found a way to bypass this obstacle, by turning to a more old-fashioned printing technique — details of which appear today in Nature Photonics1. The team used a patterned silicon wafer as an ‘ink stamp’ to pick up strips of dots made from cadmium selenide, and press them down onto a glass substrate to create red, green and blue pixels without using a solvent.

The idea may sound simple, but getting it to work was not easy, Choi explains. “It took us three years to get the details right, such as changing the speed and the pressure of the stamp to get a 100% transfer.”

The team has now produced a 10-centimetre full-colour display. The pixels were brighter and more efficient than those in quantum-dot displays made by rival methods, says Choi. For example, “the maximum brightness of the red pixels is about 50% better,” he says. The maximum power efficiency for the red pixels is about 70% better.

Around the bend

Bending the screen did not greatly affect the display’s performance, which means that the displays can be rolled up for portability, or used to make flexible lighting, says Choi.

Paul O’Brien, an inorganic chemist who studies quantum dots at the University of Manchester, UK, commends the group’s achievement. He notes that quantum dots are “robust”, so their efficiency will not quickly degrade. “For televisions, where you want a long lifetime, quantum dots are appealing,” he adds.

Seth Coe-Sullivan, the chief technology officer of QD Vision, a company in Watertown, Massachusetts, that produces devices with lighting based on quantum dots, notes that Choi and his team’s method is cheap. “We all have our eyes on making large-screen televisions, and this fabrication technique seems to be cost-effective,” he says.

But Coe-Sullivan adds that it may take some time to commercialize quantum-dot displays for big items. “I can imagine that we will have small cell-phone displays using this technology within around three years,” he says. “For the rest, there may be more of a wait.”

source: nature nanotechnology

antiepileptic drug and risk of suicide


After reviewing 199 placebo-controlled trials of antiepileptic drugs (AEDs) in 2008, the US Food and Drug Administration (FDA) issued a warning that use of the drugs could result in a heightened risk of suicidality. The agency also added a public health advisory requiring that “health care professionals should notify patients and caregivers of the potential for an increase in the risk of suicidal thoughts or behaviors so that patients may be closely observed.” The meta-analysis found that the absolute risk of suicidal thinking or behaviour was 0.43% in those taking AEDs versus 0.24% in the group on placebo. That equates to one additional case of suicidal thoughts or behaviour for every 500 patients taking an AED.

The limitations of randomised trials in evaluating uncommon adverse effects are well known. Short duration of follow-up and lack of accurate data on outcomes are reasons why large observational studies offer an important alternative perspective. In this case-control study, Arana and colleagues make use of a UK database of primary care attenders to provide an analysis of risk by indication for prescription. They found that AEDs were not associated with an increased risk of attempted or completed suicide among patients with epilepsy (OR 0.59, 95% CI 0.35 to 0.98) or bipolar disorder (1.13, 95% CI 0.35 to 3.61). However, there was an increased risk in depressed patients (OR 1.65, 95% CI 1.24 to 2.10) and notably in those without epilepsy, depression or bipolar disorder (2.57, 95% CI 1.78 to 3.71). A plausible explanation for this latter finding is that AEDs may have been prescribed for chronic pain, which itself is associated with an increased risk of suicide.

This study indicates that the underlying illness provides a greater risk of suicidal behaviour than the prescription of AEDs. Advisory warnings may promote monitoring for suicidal thoughts, but this must be set against the risk of people with epilepsy declining or stopping treatment, as was the case with the FDA warnings on antidepressants.1 Unlike depression, there are no ready alternative treatments for epilepsy.

source:BMJ

Is testicular microlithiasis associated with testicular cancer?


Testicular microlithiasis refers to small clusters of calcium seen on an ultrasound examination of the testicles. A growing number of studies have shown a relationship between testicular microlithiasis and testicular cancer. However, it remains uncertain whether having testicular microlithiasis is an independent risk factor for testicular cancer.

Testicular microlithiasis is uncommon and has many possible causes, such as infection and injury. Most studies of testicular microlithiasis have evaluated men who have had testicular ultrasounds done for some other reason, such as swelling, pain or infertility. In these studies, there appears to be a small association between microlithiasis and testicular cancer. But there’s not enough evidence to be certain that the microlithiasis caused cancer.

Few studies of healthy men with no symptoms have been conducted. But results indicate that testicular microlithiasis is much more common than is testicular cancer. This has led researchers to believe that microlithiasis is unlikely to increase an otherwise healthy man’s risk of testicular cancer.

If testicular microlithiasis is noted on an ultrasound done for some other reason, your doctor may recommend that you do regular testicular self-exams and make an appointment if you find any unusual lumps. If you have other risk factors for testicular cancer, your doctor may recommend close follow-up with annual testicular ultrasound scans.

source: Mayo house call

Do stem cells cause gastric cancer?


Although the link between Helicobacter pylori infection and gastric cancer is well established, new research suggests that stem cells play an important role in the development of this malignant disease. JeanMarie Houghton and colleagues recently showed that H. pylori-induced inflammation in mice caused the migration of stem cells originating from bone marrow to the stomach, where they subsequently developed into gastric tumours.1 Previous evidence suggests that bone-marrow-derived cells have a reparative function on being recruited to areas of injury or inflammation. The idea that these cells might also play a role in the development of cancer revisits a concept that arose partly from the observation in the 1970s that only 1% of leukemia cells grow into colonies in vitro, an ability that later earned these cells the label “cancer stem cells.”2 Houghton and colleagues’ research suggests that similar stem cells may give rise to gastric cancer, a finding that presents a new way of thinking about the pathogenesis of a disease that is the second leading cause of cancer-related deaths worldwide, killing nearly 600 000 people each year.

 

Figure. Bone-marrow-derived stem cell that has differentiated into a gastric epithelial cell. Reprinted, with permission, from Houghton et al.1 © 2004 American Association for the Advancement of Science.
H. pylori is one of the most common chronic bacterial infections worldwide. The bacterium is the only one known to consistently tolerate the acidic environment of the stomach. Without treatment, H. pylori infection can persist for many years, causing chronic inflammation.

Up to 80% of patients with gastric cancer have a current or past H. pylori infection.3 This has led the World Health Organization to classify H. pylori as a group 1 carcinogen (www.cie.iarc.fr/monoeval/crthgr01.html). How the bacterium contributes to the development of cancer is still not entirely clear, although bacterial proteins, the immune response and hormonal responses have all been implicated. In addition, current research is beginning to link inflammation to the formation of tumours, with the inflammation-induced protein NF-κB emerging as a key factor.4

Are stem cells to blame?

Houghton and colleagues’ work suggests an unexpected alternative to the inflammation theory. The research group focused on the idea that bone-marrow-derived cells move into areas of chronic injury or inflammation to effect repairs. What long-term consequence this recruitment has on chronic inflammation is largely unknown. Houghton and colleagues wondered if these stem cells could be involved in the development of gastric cancer.

To study this question, Houghton and colleagues used a strain of mice (C57BL/6) and a relative of H. pylori (H. felis) that together form a well-established mouse model of human gastric cancer. They irradiated the mice to destroy their natural bone-marrow-derived cells, replacing them with transgenic cells that would express easily detectable markers. Six to 8 weeks after the infection of these transgenic mice with H. felis, Houghton and colleagues began to detect bone-marrow-derived cells migrating into the stomach lining, presumably to repair the damage caused by the bacteria; by 20 weeks, the labelled cells were differentiating into cells with the characteristics of stomach epithelial cells.

But the authors found that these differentiating cells looked odd and behaved abnormally: they began to elongate, branch, crowd together and become distorted, and their growth rate started to accelerate. After 52 weeks, the mice were in the early stages of developing gastric cancer, and the tumours that subsequently formed stained positively for markers that indicated that the cells indeed came from the bone marrow.

Stem cells are often touted as holding great promise for novel therapies for problems ranging from myocardial infarction to Alzheimer’s disease and parkinsonism. However, a recent revival of an old concept that stem cells may be the cellular origin of cancer is bringing a note of caution to the idea of using them to repair organs. (Cohnheim and Durante hypothesized 150 years ago that cancer might arise from embryonic stem cells, since the 2 types of tissue resemble each other.5) This revival got a jumpstart in 1997, when Bonnet and Dick6 from the University of Toronto made dilutions of leukemia cells to reveal that approximately one in a million had the ability to reproduce the disease, suggesting the presence of a “cancer stem cell.” Similarly, Houghton and colleagues’ work suggests that bone-marrow-derived cells are cancer stem cells for gastric tumours.

However, this one study does not provide definitive proof, and more work is needed to show that the bone-marrow-derived cells in their model did indeed differentiate rather than merely fusing with epithelial cells. Furthermore, there is as yet no way to determine whether bone-marrow-derived cells cause gastric cancer in humans, since no markers are available. Nevertheless, these findings are a warning against the premature development of stem cell therapies and are bound to spark debate on the pathogenesis of gastric cancer.

source: canadian journal of medicine

Prevention of stroke in patients with patent foramen ovale


Patent foramen ovale is found in 24% of healthy adults and 38% of patients with cryptogenic stroke. This ratio and case reports indicate that patent foramen ovale and stroke are associated, probably because of paradoxical embolism. In healthy people with patent foramen ovale, embolic events are not more frequent than in controls, and therefore no primary prevention is needed. However, once ischaemic events occur, the risk of recurrence is substantial and prevention becomes an issue. Acetylsalicylic acid and warfarin reduce this risk to the same level as in patients without patent foramen ovale. Patent foramen ovale with a coinciding atrial septal aneurysm, spontaneous or large right-to-left shunt, or multiple ischaemic events potentiates the risk of recurrence. Transcatheter device closure has therefore become an intriguing addition to medical treatment, but its therapeutic value still needs to be confirmed by randomised controlled trials.

Monte Carlo simulation


Risk analysis is part of every decision we make. We are constantly faced with uncertainty, ambiguity, and variability. And even though we have unprecedented access to information, we can’t accurately predict the future. Monte Carlo simulation lets you see all the possible outcomes of your decisions and assess the impact of risk, allowing for better decision making under uncertainty.

What is Monte Carlo simulation?
Monte Carlo simulation is a computerized mathematical technique that allows people to account for risk in quantitative analysis and decision making. The technique is used by professionals in such widely disparate fields as finance, project management, energy, manufacturing, engineering, research and development, insurance, oil & gas, transportation, and the environment.

Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities with which they will occur for any choice of action. It shows the extreme possibilities—the outcomes of going for broke and for the most conservative decision—along with all possible consequences for middle-of-the-road decisions.

The technique was first used by scientists working on the atom bomb; it was named for Monte Carlo, the Monaco resort town renowned for its casinos. Since its introduction in World War II, Monte Carlo simulation has been used to model a variety of physical and conceptual systems.

How Monte Carlo simulation works
Monte Carlo simulation performs risk analysis by building models of possible results by substituting a range of values—a probability distribution—for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation could involve thousands or tens of thousands of recalculations before it is complete. Monte Carlo simulation produces distributions of possible outcome values.
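As a toy illustration of this recalculation loop (the cost figures and the choice of distributions are made up for the example, not taken from the text), a total project cost could be simulated from an uncertain labour cost and an uncertain material cost:

```python
import random

def simulate_total_cost(n_iterations=10_000, seed=42):
    """Recompute the model output many times, each time drawing fresh
    random values for every uncertain input."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    outcomes = []
    for _ in range(n_iterations):
        labour = rng.gauss(50_000, 5_000)        # normal: mean, std deviation
        materials = rng.uniform(20_000, 30_000)  # uniform: minimum, maximum
        outcomes.append(labour + materials)
    return outcomes

results = simulate_total_cost()
mean_cost = sum(results) / len(results)
# `results` approximates the full distribution of possible total costs,
# not just a single-point estimate.
```

The list of outcomes can then be summarized (mean, percentiles, histogram) to show both the likely cost and the tail risks.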

By using probability distributions, variables can have different probabilities of different outcomes occurring.  Probability distributions are a much more realistic way of describing uncertainty in variables of a risk analysis.  Common probability distributions include:

Normal – Or “bell curve.”  The user simply defines the mean or expected value and a standard deviation to describe the variation about the mean.  Values in the middle near the mean are most likely to occur.  It is symmetric and describes many natural phenomena such as people’s heights.  Examples of variables described by normal distributions include inflation rates and energy prices.

Lognormal – Values are positively skewed, not symmetric like a normal distribution.  It is used to represent values that don’t go below zero but have unlimited positive potential.  Examples of variables described by lognormal distributions include real estate property values, stock prices, and oil reserves.

Uniform – All values have an equal chance of occurring, and the user simply defines the minimum and maximum.  Examples of variables that could be uniformly distributed include manufacturing costs or future sales revenues for a new product.

Triangular – The user defines the minimum, most likely, and maximum values.  Values around the most likely are more likely to occur.  Variables that could be described by a triangular distribution include past sales history per unit of time and inventory levels.

PERT – The user defines the minimum, most likely, and maximum values, just like the triangular distribution.  Values around the most likely are more likely to occur.  However, values between the most likely and the extremes are more likely to occur than in the triangular distribution; that is, the extremes are not as emphasized.  An example of the use of a PERT distribution is to describe the duration of a task in a project management model.

Discrete – The user defines specific values that may occur and the likelihood of each.  An example might be the results of a lawsuit: 20% chance of positive verdict, 30% chance of negative verdict, 40% chance of settlement, and 10% chance of mistrial.
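Python's standard `random` module can draw from most of the distribution families listed above; a small sketch with arbitrary parameters (PERT is omitted, as the standard library has no built-in for it):

```python
import random

rng = random.Random(0)  # seeded for reproducibility; all parameters are arbitrary

normal_draw = rng.gauss(3.0, 0.5)               # normal: mean, std deviation
lognormal_draw = rng.lognormvariate(0.0, 0.25)  # lognormal: always positive
uniform_draw = rng.uniform(10, 20)              # uniform: minimum, maximum
triangular_draw = rng.triangular(5, 15, 8)      # triangular: min, max, most likely

# Discrete: the lawsuit example above, with explicit outcome probabilities.
verdict = rng.choices(
    ["positive", "negative", "settlement", "mistrial"],
    weights=[0.20, 0.30, 0.40, 0.10],
)[0]
```

Each call returns one sample; a Monte Carlo model would make these draws inside its iteration loop.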

During a Monte Carlo simulation, values are sampled at random from the input probability distributions.  Each set of samples is called an iteration, and the resulting outcome from that sample is recorded.  Monte Carlo simulation does this hundreds or thousands of times, and the result is a probability distribution of possible outcomes.  In this way, Monte Carlo simulation provides a much more comprehensive view of what may happen.  It tells you not only what could happen, but how likely it is to happen.

Monte Carlo simulation provides a number of advantages over deterministic, or “single-point estimate” analysis:

  • Probabilistic Results. Results show not only what could happen, but how likely each outcome is.
  • Graphical Results. Because of the data a Monte Carlo simulation generates, it’s easy to create graphs of different outcomes and their chances of occurrence.  This is important for communicating findings to other stakeholders.
  • Sensitivity Analysis. With just a few cases, deterministic analysis makes it difficult to see which variables impact the outcome the most.  In Monte Carlo simulation, it’s easy to see which inputs had the biggest effect on bottom-line results.
  • Scenario Analysis: In deterministic models, it’s very difficult to model different combinations of values for different inputs to see the effects of truly different scenarios.  Using Monte Carlo simulation, analysts can see exactly which inputs had which values together when certain outcomes occurred.  This is invaluable for pursuing further analysis.
  • Correlation of Inputs. In Monte Carlo simulation, it’s possible to model interdependent relationships between input variables.  It’s important for accuracy to represent how, in reality, when some factors go up, others go up or down accordingly.

An enhancement to Monte Carlo simulation is the use of Latin Hypercube sampling, which samples more accurately from the entire range of distribution functions.
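A minimal sketch of the idea behind Latin Hypercube sampling for a single uniform variable (a simplified, one-dimensional illustration, not a full implementation): the range is divided into equal strata and exactly one point is drawn from each, so no region of the distribution is missed.

```python
import random

def latin_hypercube_uniform(n_samples, low, high, seed=None):
    """Divide [low, high) into n_samples equal strata, draw exactly one
    point from each stratum, then shuffle so the order is random."""
    rng = random.Random(seed)
    width = (high - low) / n_samples
    samples = [low + (i + rng.random()) * width for i in range(n_samples)]
    rng.shuffle(samples)
    return samples

points = latin_hypercube_uniform(10, 0.0, 1.0, seed=1)
# Unlike plain random sampling, each tenth of the range receives exactly
# one point, so the sample covers the whole distribution evenly.
```

With plain random sampling, small sample sizes can by chance cluster in one part of the range; stratifying removes that risk.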

Markov models in medical decision making: a practical guide.


Markov models are useful when a decision problem involves risk that is continuous over time, when the timing of events is important, and when important events may happen more than once. Representing such clinical settings with conventional decision trees is difficult and may require unrealistic simplifying assumptions. Markov models assume that a patient is always in one of a finite number of discrete health states, called Markov states. All events are represented as transitions from one state to another. A Markov model may be evaluated by matrix algebra, as a cohort simulation, or as a Monte Carlo simulation. A newer representation of Markov models, the Markov-cycle tree, uses a tree representation of clinical events and may be evaluated either as a cohort simulation or as a Monte Carlo simulation. The ability of the Markov model to represent repetitive events and the time dependence of both probabilities and utilities allows for more accurate representation of clinical settings that involve these issues.
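The cohort-simulation evaluation described above can be sketched in a few lines. This is an illustrative three-state example (Well, Sick, Dead); the states and per-cycle transition probabilities are invented for the sketch, not taken from the paper.

```python
# Toy Markov cohort simulation: track what fraction of a cohort occupies
# each health state after each cycle. Probabilities below are made up.
STATES = ("Well", "Sick", "Dead")
TRANSITIONS = {
    "Well": {"Well": 0.90, "Sick": 0.08, "Dead": 0.02},
    "Sick": {"Well": 0.10, "Sick": 0.70, "Dead": 0.20},
    "Dead": {"Well": 0.00, "Sick": 0.00, "Dead": 1.00},  # absorbing state
}

def run_cohort(cycles):
    # The whole cohort starts in the Well state.
    dist = {"Well": 1.0, "Sick": 0.0, "Dead": 0.0}
    for _ in range(cycles):
        nxt = {s: 0.0 for s in STATES}
        for src in STATES:
            for dst in STATES:
                nxt[dst] += dist[src] * TRANSITIONS[src][dst]
        dist = nxt
    return dist

after_ten = run_cohort(10)
# The state fractions always sum to 1, and the absorbing Dead state
# accumulates cohort members as cycles pass.
```

Expected utilities (e.g. quality-adjusted life-years) would be accumulated by weighting each cycle's state occupancy; the same transition matrix could instead drive a Monte Carlo simulation of individual patients.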

Rapid fluid removal during dialysis is associated with cardiovascular morbidity and mortality


Patients receiving hemodialysis have high rates of cardiovascular morbidity and mortality that may be related to the hemodynamic effects of rapid ultrafiltration. Here we tested whether higher dialytic ultrafiltration rates are associated with greater all-cause and cardiovascular mortality, and hospitalization for cardiovascular disease. We used data from the Hemodialysis Study, an almost-7-year randomized clinical trial of 1846 patients receiving thrice-weekly chronic dialysis. The ultrafiltration rates were divided into three categories: up to 10 ml/h/kg, 10-13 ml/h/kg, and over 13 ml/h/kg. Compared to ultrafiltration rates in the lowest group, rates in the highest were significantly associated with increased all-cause and cardiovascular-related mortality with adjusted hazard ratios of 1.59 and 1.71, respectively. Overall, ultrafiltration rates between 10 and 13 ml/h/kg were not associated with all-cause or cardiovascular mortality; however, they were significantly associated among participants with congestive heart failure. Cubic spline interpolation suggested that the risk of all-cause and cardiovascular mortality began to increase at ultrafiltration rates over 10 ml/h/kg regardless of the status of congestive heart failure. Hence, higher ultrafiltration rates in hemodialysis patients are associated with a greater risk of all-cause and cardiovascular death.

source: Kidney International

Optimizing Antimicrobial Therapy in Sepsis and Septic Shock


This article reviews principles in the rational use of antibiotics in sepsis and septic shock and presents evidence-based recommendations for optimal antibiotic therapy. Every patient with sepsis and septic shock must be evaluated at presentation before the initiation of antibiotic therapy. However, in most situations, an abridged initial assessment focusing on critical diagnostic and management planning elements is sufficient. Intravenous antibiotics should be administered as early as possible, and always within the first hour of recognizing severe sepsis and septic shock. Broad-spectrum antibiotics must be selected with one or more agents active against likely bacterial or fungal pathogens and with good penetration into the presumed source. Antimicrobial therapy should be reevaluated daily to optimize efficacy, prevent resistance, avoid toxicity, and minimize costs. Consider combination therapy in Pseudomonas infections, and combination empiric therapy in neutropenic patients. Combination therapy should be continued for no more than 3 to 5 days and deescalation should occur following availability of susceptibilities. The duration of antibiotic therapy typically is limited to 7 to 10 days; longer duration is considered if response is slow, if there is inadequate surgical source control, or in the case of immunologic deficiencies. Antimicrobial therapy should be stopped if infection is not considered the etiologic factor for a shock state.

source: sciencedirect

Rapid acute treatment of agitation in individuals with schizophrenia: multicentre, randomised, placebo-controlled study of inhaled loxapine.


There is a need for a rapid-acting, non-injection, acute treatment for agitation.

AIMS: To evaluate inhaled loxapine for acute treatment of agitation in schizophrenia.

METHOD: This phase III, randomised, double-blind, placebo-controlled, parallel-group study (ClinicalTrials.gov number NCT00628589) enrolled 344 individuals who received one, two or three doses of inhaled loxapine (5 or 10 mg) or a placebo. Lorazepam rescue was permitted after dose two. The primary efficacy end-point was change from baseline in Positive and Negative Syndrome Scale-Excited Component (PANSS-EC) 2 h after dose one. The key secondary end-point was Clinical Global Impression-Improvement scale (CGI-I) score 2 h after dose one.
RESULTS: Inhaled loxapine (5 and 10 mg) significantly reduced agitation compared with placebo as assessed by primary and key secondary end-points. Reduced PANSS-EC score was evident 10 min after dose one with both 5 and 10 mg doses. Inhaled loxapine was well tolerated, and the most common adverse events were known effects of loxapine or minor oral effects common with inhaled medications.
CONCLUSIONS: Inhaled loxapine provided a rapid, well-tolerated acute treatment for agitation in people with schizophrenia.

source: British Journal of Psychiatry