Could power plants of the future produce zero emissions? http://www.bbc.co.uk/news/business-24225901

Angry Birds fires into the classroom http://www.bbc.co.uk/news/technology-24228473

What Most Doctors Won’t Tell You About Colds and Flus.


 

The next time you experience a cold or the flu, remember this: rather than take conventional drugs to suppress uncomfortable symptoms, it’s better for your health to allow the cold or flu to run its course while you get plenty of physical and emotional rest.

Conventional medicine and the pharmaceutical industry would have you believe that there is no “cure” for the common cold, that you should protect yourself against the flu with a vaccine that is laden with toxic chemicals, and that in the midst of a cold or flu, it is favorable to ease your discomfort with a variety of medications that suppress your symptoms.

Unfortunately, all three of these positions indicate a lack of understanding of what colds and flus really are, and what they do for your body.

Colds and flus are caused by viruses. So to understand what colds and flus do at a cellular level, you have to understand what viruses do at a cellular level.

Do you remember learning about cellular division in grade seven science class? Each of your cells is a parent cell: through nuclear division (mitosis) and cytoplasmic division (cytokinesis), each parent cell divides into two daughter cells. Each daughter cell is then itself a parent cell that will divide into two more daughter cells, and so on.

Viruses are different from your cells in that they cannot duplicate themselves through mitosis and cytokinesis. Viruses are nothing but microscopic particles of genetic material, each coated by a thin layer of protein.

Due to their design, viruses are not able to reproduce on their own. The only way that viruses can flourish in your body is by using the machinery and metabolism of your cells to produce multiple copies of themselves.

Once a virus has gained access into one of your cells, depending on the type of virus involved, one of two things can happen:
1. The virus uses your cell’s resources to replicate itself many times over and then breaks open (lyses) the cell so that the newly replicated viruses can leave in search of new cells to infect. Lysis effectively kills your cell.

2. The virus incorporates itself into the DNA of your cell, which allows the virus to be passed on to each daughter cell that stems from this cell. Later on, the virus in each daughter cell can begin replicating itself as described above. Once multiple copies of the virus have been produced, the cell is lysed.

Both possibilities lead to the same result: eventually, the infected cell dies by lysis.

Here is the key to understanding why colds and flus, when allowed to run their course while you rest, can be good for you:

By and large, the viruses that cause the common cold and the flu mainly infect your weakest cells: cells that are already burdened with excessive waste products and toxins are the most likely to allow viruses to infect them. These are the cells you want to get rid of anyway, to be replaced by new, healthy cells.

So in the big scheme of things, a cold or flu is a natural event that can allow your body to purge itself of old and damaged cells that, in the absence of viral infection, would normally take much longer to identify, destroy, and eliminate.

Have you ever been amazed by how much “stuff” you could blow out of your nose while you had a cold or the flu? Embedded within all of that mucus are countless dead cells that your body is saying goodbye to, largely due to the lytic effect of viruses.

So you see, there never needs to be a cure for the common cold, since the common cold is nature’s way of keeping you healthy over the long term. And so long as you get plenty of rest and strive to stay hydrated and properly nourished during a cold or flu, there is no need to get vaccinated or to take medications that suppress congestion, fever, or coughing. All of these uncomfortable symptoms are actually ways in which your body eliminates waste products and works its way through a cold or flu. It’s fine to use an over-the-counter pain medication like acetaminophen if your discomfort becomes intolerable or if such meds help you get a good night’s rest. But it’s best to avoid medications that aim to suppress helpful processes such as fever, coughing, and a runny nose.

It’s important to note that just because colds and flus can be helpful to your body doesn’t mean that you need to experience them to be at your best. If you take good care of your health and immune system by getting plenty of rest and consistently making health-promoting dietary and lifestyle choices, your cells may stay strong enough to avoid getting infected by viruses that come knocking on their membranes. In this scenario, you won’t have enough weak and extraneous cells to require a cold or the flu to work its way through your body to identify and lyse them.

Curious about how to differentiate the common cold and the flu? Here is an excellent summary of the differences from cbc.ca:
A cold usually comes on gradually — over the course of a day or two. Generally, it leaves you feeling tired, sneezing, coughing and plagued by a runny nose. You often don’t have a fever, but when you do, it’s only slightly higher than normal. Colds usually last three to four days, but can hang around for 10 days to two weeks.

Flu, on the other hand, comes on suddenly and hits hard. You will feel weak and tired and you could run a fever as high as 40°C. Your muscles and joints will probably ache, you will feel chilled and could have a severe headache and sore throat. Getting off the couch or out of bed will be a chore. The fever may last three to five days, but you could feel weak and tired for two to three weeks.

One final note on this topic: because the common cold and the flu are both caused by viruses, antibiotics are neither necessary nor effective. People who take antibiotics while suffering from a cold or flu often feel slightly better because antibiotics have a mild anti-inflammatory effect. But this benefit is far outweighed by the negative impact that antibiotics have on the friendly bacteria that live throughout your digestive tract. In this light, if you really need help with pain management during a cold or flu, it is usually better to take a small dose of acetaminophen than it is to take antibiotics.

Sources: drbenkim.com & realfarmacy.com


Cochlear Implants — Science, Serendipity, and Success.


The Lasker–DeBakey Clinical Medical Research Award, announced September 9, recognizes the contributions of three pioneers of cochlear implantation: Graeme Clark, Ingeborg Hochmair, and Blake Wilson. Their collective efforts have transformed the lives of hundreds of thousands of people who would otherwise be deaf.

Deafness impairs quality of life by relentlessly dismantling the machinery of human communication. Ludwig van Beethoven, plagued by deafness, wrote in 1802, “For me there can be no relaxation in human society; no refined conversations, no mutual confidences. I must live quite alone and may creep into society only as often as sheer necessity demands. . . . Such experiences almost made me despair and I was on the point of putting an end to my life.”

The feelings of hopelessness, despair, and even shame that attended profound hearing loss lingered well into the late 20th century. The few who sought medical help were told there was nothing to be done for them. Today, the World Health Organization estimates that 360 million people worldwide are living with disabling hearing loss; as the population ages, the global burden of disease attributable to deafness will increase, and means of alleviating the disability will assume ever-increasing importance.

Profound hearing loss affects people of all ages. For children, hearing is central to neurocognitive development, since sound deprivation early in life degrades the multiplicity of neural circuits that are responsible for information processing, especially those involved in the acquisition of speech and language.1 In addition, deafness impairs other key cognitive functions, such as scanning, retrieving, and manipulating verbal information — impairment that contributes to the low language levels typically achieved in people who are deaf from childhood. Since the ability to write a language depends largely on hearing its phonologic content, literacy rates among deaf children have remained intransigently low, despite the best efforts of educators. Low literacy leads to poor educational outcomes, limited employment opportunities, and restricted participation in society. For many, sign language becomes the only means of communication. Not surprisingly, deaf adolescents and young adults feel marginalized and need more psychological support than their hearing peers.

Adults who develop profound deafness are often embarrassed by their disability and feel forced to withdraw from social exchanges with family and friends. For many of these adults, deafness may result in unemployment, imposing an additional psychosocial burden. Among the elderly, profound deafness compromises independent living, as many deaf seniors become too apprehensive to live alone. Moreover, deafness impairs cortical processing in the aging brain, especially under cognitive load, and is associated with an increased risk of dementia.

The challenge of restoring hearing to people who are too deaf to benefit from hearing aids was formidable and required an extraordinary, decades-long research endeavor. In the healthy ear, sound is collected by the external ear and amplified by the middle ear. The hair cells of the inner ear act as mechanoelectric transducers, converting acoustic energy into electrical activity that is carried to the brain through the auditory nerves. This transduction process is complex, requiring selective, time-critical contributions from thousands of hair cells and auditory nerve fibers. In profound deafness, the hair cells are lost, and acoustic signals therefore cannot generate electrical activity in the auditory system. Could the auditory nerves be stimulated directly so as to bypass the inner ear and deliver a meaningful representation of the speech signal?

The earliest clinical attempt at such stimulation took place in Paris in 1957, when a surgeon directly stimulated the auditory nerve, causing a patient to temporarily experience crude auditory percepts. A patient brought this experiment to the attention of Dr. William House of Los Angeles, who immediately saw its potential. In the early 1960s, House successfully implanted single-channel devices to stimulate the auditory nerve through the cochlea.2 House was roundly criticized for his work: neurophysiologists condemned it as naive and misguided — how could a handful of wires delivering crude electrical currents replace the function of thousands of hair cells and tens of thousands of auditory neurons? Clinical colleagues questioned his motives, feared the risk of meningitis, and distanced themselves so as not to sully their reputations. And the deaf community launched angry protests at what was seen as a peremptory attack on deaf culture.

More laboratory experiments were clearly needed to bridge the huge intellectual and technological chasms facing early investigators, to pave the way for the transformational change that profoundly deaf patients needed. Biologic safety was of paramount importance, and exhaustive histopathological studies were needed to assess the safety of long-term stimulation and to inform the design of future electrode arrays. The optimization of electrical stimulation required detailed neurophysiological and psychophysical studies to guide clinical application. And many scientists doubted the sustainability of long-term stimulation, since some animal models suggested that the auditory nerve underwent retrograde degeneration due to deafness.

Early patients spent countless, laborious hours in laboratories connected to stacks of speech processors; making these processors wearable without losing computational power was an early imperative and represented an enormous engineering challenge. It became clear in the 1980s that multichannel systems allowing stimulation at multiple sites within the cochlea were needed for speech recognition. The 1990s heralded major advances in speech-encoding strategies for cochlear implants, offering speech recognition without lipreading to the majority of recipients. The realization that children who had been born deaf could also derive substantial benefit, with some developing speech and language trajectories similar to those of their hearing peers, was transformational for childhood deafness, making mainstream schooling a viable option for many deaf children.

Throughout the development of cochlear implants, the manufacturing challenges were monumental — including ensuring that the implanted electronics packages were permanently hermetically sealed, fabricating complex electrode arrays for deep insertion into the tortuous cochlea, and meeting the stringent regulatory requirements for implanted biomedical devices.

Current cochlear-implant systems are worn at ear level and contain many features of earlier prototypes. They include an implanted portion with receiver electronics attached to an electrode array placed within the cochlea, plus external components comprising a microphone, a speech processor, and a transmitter coil. Bilateral cochlear implantation, now routine treatment in many countries, permits recipients to better understand speech in the midst of noise and to localize sound. Contemporary systems and surgical techniques allow any islands of residual hearing to be preserved, enabling electrical and acoustical hearing to be effectively combined; this combination permits better speech understanding in multitalker settings, identification of the speaker’s sex, and better reception of tonal languages.
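To make the signal path above more concrete, here is a minimal sketch of a generic multichannel envelope-extraction strategy, in the spirit of continuous interleaved sampling: incoming sound is split into frequency bands, the slowly varying envelope of each band is extracted, and the envelopes are compressed into the narrow electrical range delivered to the electrode array. The sampling rate, channel count, band edges and compression map are illustrative assumptions, not the parameters of any actual device or of the awardees’ processors.

```python
# Minimal sketch of a generic multichannel envelope strategy (CIS-like).
# Sampling rate, band edges, channel count and compression are assumptions only.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000           # audio sampling rate in Hz (assumed)
N_CHANNELS = 8       # number of stimulation channels (assumed)
BAND_EDGES = np.logspace(np.log10(200), np.log10(7000), N_CHANNELS + 1)

def channel_envelopes(audio):
    """Split the audio into bands and extract each band's slowly varying envelope."""
    envelopes = []
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, audio)
        envelopes.append(np.abs(hilbert(band)))   # envelope via the analytic signal
    return np.array(envelopes)                    # shape: (channels, samples)

def to_stimulation_levels(envelopes, threshold=1e-4, comfort=1.0):
    """Logarithmically compress acoustic envelopes into a 0..1 electrical range."""
    env = np.clip(envelopes, threshold, comfort)
    return np.log(env / threshold) / np.log(comfort / threshold)

# Toy input: a vowel-like mixture of two tones, 50 ms long
t = np.arange(0, 0.05, 1 / FS)
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
levels = to_stimulation_levels(channel_envelopes(audio))
print(levels.shape)   # (8, 800)
```

In a real processor these channel levels would then be delivered as rapid, interleaved current pulses to the corresponding intracochlear electrodes; the sketch stops at the envelope-to-level mapping.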

Cochlear implants have their limitations: they do not restore normal hearing, outcomes vary among patients, performance is considerably degraded by ambient noise, and music perception is limited. It’s hoped that continuing research will uncover better ways of delivering the fine-structure content of the speech signal, creating more effective channels of stimulation with less electrical overlap, reducing trauma to cochlear structures through pharmacologic means, and enhancing brain responsiveness by removing molecular inhibition of plasticity.1

These current efforts are founded on research carried out by Lasker awardees Graeme Clark, Ingeborg Hochmair, and Blake Wilson. Clark, an otolaryngologist, contributed an entire portfolio of rigorously conducted biologic and psychophysical experiments that underpinned clinical practice and informed the design of a clinical device3; Hochmair, an electrical engineer, contributed engineering brilliance and innovation, establishing her own company to hasten the perilous journey from bench to bedside4; and Wilson, a speech scientist, oversaw a giant leap forward in speech encoding for implants that ingeniously manipulated the timing and place of stimulation so as to minimize distortion and channel interaction.5

These three scientists had the grit to pick “impossible” projects and the courage to remain steadfast in the face of failure and criticism. Above all, they remained incurably passionate about achieving victory over one of humanity’s most prevalent disabilities. They have brought sound where there was silence and hope where despair prevailed. Though they fully deserve the Lasker Award, their greatest accolade is the gratitude of 300,000 implant recipients around the world to whom they’ve given the gift of hearing.

Source: NEJM

Diverse Sources of C. difficile Infection Identified on Whole-Genome Sequencing.


BACKGROUND

It has been thought that Clostridium difficile infection is transmitted predominantly within health care settings. However, endemic spread has hampered identification of precise sources of infection and the assessment of the efficacy of interventions.

METHODS

From September 2007 through March 2011, we performed whole-genome sequencing on isolates obtained from all symptomatic patients with C. difficile infection identified in health care settings or in the community in Oxfordshire, United Kingdom. We compared single-nucleotide variants (SNVs) between the isolates, using C. difficile evolution rates estimated on the basis of the first and last samples obtained from each of 145 patients, with 0 to 2 SNVs expected between transmitted isolates obtained less than 124 days apart, on the basis of a 95% prediction interval. We then identified plausible epidemiologic links among genetically related cases from data on hospital admissions and community location.
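To illustrate the classification rule described above, here is a small sketch of how isolates could be compared pairwise: two or fewer SNVs between isolates sampled less than 124 days apart is treated as consistent with transmission, while more than 10 SNVs from every earlier isolate marks a case as genetically distinct. The data representation (variant sets per isolate) and function names are illustrative assumptions; this is not the study’s analysis pipeline.

```python
# Illustrative sketch of the SNV-based classification rule described above;
# not the study's actual pipeline. Each isolate is (sample_date, variant_set).
from datetime import date

TRANSMISSION_MAX_SNV = 2     # <=2 SNVs between isolates <124 days apart (95% PI)
TRANSMISSION_MAX_DAYS = 124
DISTINCT_MIN_SNV = 10        # >10 SNVs from all earlier cases => genetically distinct

def snv_distance(profile_a, profile_b):
    """Number of single-nucleotide variants differing between two isolates.
    A 'profile' here is just a set of variant identifiers (hypothetical format)."""
    return len(profile_a ^ profile_b)    # symmetric difference

def classify(new_isolate, earlier_isolates):
    """Classify a new isolate against all earlier isolates."""
    new_date, new_profile = new_isolate
    distances = []
    for prev_date, prev_profile in earlier_isolates:
        d = snv_distance(new_profile, prev_profile)
        days_apart = (new_date - prev_date).days
        if d <= TRANSMISSION_MAX_SNV and days_apart < TRANSMISSION_MAX_DAYS:
            return "consistent with transmission from an earlier case"
        distances.append(d)
    if distances and min(distances) > DISTINCT_MIN_SNV:
        return "genetically distinct from all earlier cases"
    return "intermediate (3-10 SNVs): relationship unresolved"

# Toy example with made-up variant sets: 1 SNV apart, 22 days apart
earlier = [(date(2009, 1, 10), {"v1", "v2", "v3"})]
new = (date(2009, 2, 1), {"v1", "v2", "v3", "v4"})
print(classify(new, earlier))
```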

RESULTS

Of 1250 C. difficile cases that were evaluated, 1223 (98%) were successfully sequenced. In a comparison of 957 samples obtained from April 2008 through March 2011 with those obtained from September 2007 onward, a total of 333 isolates (35%) had no more than 2 SNVs from at least 1 earlier case, and 428 isolates (45%) had more than 10 SNVs from all previous cases. Reductions in incidence over time were similar in the two groups, a finding that suggests an effect of interventions targeting the transition from exposure to disease. Of the 333 patients with no more than 2 SNVs (consistent with transmission), 126 patients (38%) had close hospital contact with another patient, and 120 patients (36%) had no hospital or community contact with another patient. Distinct subtypes of infection continued to be identified throughout the study, which suggests a considerable reservoir of C. difficile.

CONCLUSIONS

Over a 3-year period, 45% of C. difficile cases in Oxfordshire were genetically distinct from all previous cases. Genetically diverse sources, in addition to symptomatic patients, play a major part in C. difficile transmission.

Source: NEJM

 

Robotic Leg Control with EMG Decoding in an Amputee with Nerve Transfers.


The clinical application of robotic technology to powered prosthetic knees and ankles is limited by the lack of a robust control strategy. We found that the use of electromyographic (EMG) signals from natively innervated and surgically reinnervated residual thigh muscles in a patient who had undergone knee amputation improved control of a robotic leg prosthesis. EMG signals were decoded with a pattern-recognition algorithm and combined with data from sensors on the prosthesis to interpret the patient’s intended movements. This provided robust and intuitive control of ambulation — with seamless transitions between walking on level ground, stairs, and ramps — and of the ability to reposition the leg while the patient was seated.
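As a rough illustration of the kind of pattern recognition described, the sketch below extracts classic time-domain features from short EMG windows and feeds them to a linear discriminant classifier, a common choice for EMG intent decoding. The sampling rate, window length, feature set and toy data are illustrative assumptions and do not reproduce the authors’ controller or its fusion with prosthesis sensor data.

```python
# Sketch of EMG pattern recognition for movement-intent decoding:
# windowed time-domain features plus a linear discriminant classifier.
# Sampling rate, window length, features and toy data are illustrative only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000        # EMG sampling rate in Hz (assumed)
WINDOW = 250     # analysis window of 250 samples (250 ms, assumed)

def window_features(emg_window):
    """Per-channel mean absolute value, waveform length and zero-crossing count."""
    mav = np.mean(np.abs(emg_window), axis=0)
    wl = np.sum(np.abs(np.diff(emg_window, axis=0)), axis=0)
    zc = np.sum(emg_window[:-1] * emg_window[1:] < 0, axis=0)
    return np.concatenate([mav, wl, zc])

def extract(emg, labels):
    """Slide non-overlapping windows over multichannel EMG (samples x channels)."""
    X, y = [], []
    for start in range(0, len(emg) - WINDOW + 1, WINDOW):
        X.append(window_features(emg[start:start + WINDOW]))
        y.append(labels[start + WINDOW - 1])       # label at the end of the window
    return np.array(X), np.array(y)

# Toy data: 2 channels, two "intent classes" distinguished by signal amplitude
rng = np.random.default_rng(0)
emg = np.vstack([rng.normal(0, 0.1, (2000, 2)), rng.normal(0, 0.5, (2000, 2))])
labels = np.array([0] * 2000 + [1] * 2000)

X, y = extract(emg, labels)
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:3]), clf.predict(X[-3:]))     # expected: class 0 then class 1
```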

Source: NEJM

Study Shows Why Risk Warnings Are Ineffective for Kids.


Story at-a-glance

  • Focusing on the negative consequences of high-risk behaviors in teens is not likely to reduce such behaviors, a new study revealed
  • Teens tend to discount their likelihood of experiencing negative life events, like being in a car accident, even when they’re told about the actual risk
  • If you want to get a message across to your teen, the study suggests that using a positive association is likely to be the most effective
  • The teenage years shortly after puberty coincide with some of the greatest risk-taking behaviors; monitoring your child’s whereabouts and friends while establishing a close family bond can help your child get through these high-risk years

Teens are among the most likely to engage in high-risk behaviors like careless driving, binge drinking, unprotected sex and drug abuse. Campaigns aimed at curbing these behaviors often focus on the negative consequences that can come of them, like getting into a car accident or getting lung disease from smoking.

New research suggests, however, that these risk warnings are falling on deaf ears or, rather, are simply not impacting the younger members of society. Why? As you might remember, and as research has now shown, teenagers tend to believe that they’re invincible…


The Good-News-Bad-News Effect

Humans have a tendency to believe they’re more likely to experience positive events than negative ones, a phenomenon known as the ‘good-news-bad-news effect.’

It seems this is especially pronounced in teens, who not only tend to discount their likelihood of experiencing negative life events, like being in a car accident, but continue to discount it even when they’re told the actual risk.

The study, which involved young people between the ages of 9 and 26, showed that the younger participants did not change their beliefs about their risk of negative life events even after being shown real statistics for such events.1 The authors noted:

“In the ages tested (9-26 y), younger age was associated with inaccurate updating of beliefs in response to undesirable information regarding vulnerability. In contrast, the ability to update beliefs accurately in response to desirable information remained relatively stable with age.”

It seems, in other words, that teens simply do not believe they will succumb to the negative consequences associated with risky behaviors, even if the facts suggest otherwise. As reported by Medical News Today:2

“Even when they became aware of the risks, the younger participants were less likely to learn from the information showing that the future could be worse than expected. …the new findings help explain why kids are not able to learn from bad news in order to apply it to future events.”

Positive Messages May Be More Powerful for Teens

If you want to get a message across to your teen, the study suggests that using a positive association is likely to be the most effective approach. For example, rather than reminding your teen that excess alcohol is damaging to their health, teach them that avoiding alcohol will help them reach their peak fitness level and excel at sports. As the study’s lead author said:3

“Our findings show that if you want to get young people to better learn about the risks associated with their choices, you might want to focus on the benefits that a positive change would bring rather than hounding them with horror stories.”

In light of these findings, it may be a good thing that the US Food and Drug Administration’s (FDA) campaign to put graphic images of people dying from smoking-related disease on cigarette packages has been abandoned. It also raises concerns that warnings to teens about prescription drug abuse are being ignored…

Teens May Not Take the Risks of Prescription Drug Abuse Seriously

One in four teens has misused a prescription drug at least once in their lifetime, according to survey results from The Partnership at Drugfree.org and the MetLife Foundation.4 This represents a 33 percent increase in the past five years!

Even though prescription drugs can lead to slowed breathing, dangerously high blood pressure, irregular heart rhythms and death if too much is taken, many teens regard them as a ‘safe’ way to get high. In many cases, parents only add to this assumption, not only because they may take multiple prescription drugs themselves but also, as the survey reported, because close to one-third of parents believe prescription stimulants can improve their teen’s academic performance.

Sadly, some teens pay for this one “bad” decision to abuse prescription drugs with their lives. Drug fatalities more than doubled among teens and young adults between 2000 and 2008, and these drug-induced fatalities are not being driven by illegal street drugs but rather by prescription drug abuse.

In this case, it’s important to sit down and talk to your teen about the dangers of taking prescription drugs just “for fun.” Far from being “safer” than illegal street drugs, they can sometimes kill in just one pill. However, given the study findings, you may also want to try a positive approach, such as focusing on other ways to have fun with friends and on how avoiding recreational use of such drugs shows respect for their body and mind.

Is Your Teen Engaging in High-Risk Behaviors? Here’s What Really Helps

The teenage years shortly after puberty coincide with some of the greatest risk-taking behaviors. As written in Slate Magazine:5

“During the years of greatest risk-taking, which peak somewhere around the age of 16 and during which the presence of peers greatly increases risk-taking, the adolescent brain is like a car with a powerful accelerator (the sensation- and peer-seeking social-emotional system) and weak brakes (the risk-containing cognitive-control system).”

Yet studies have shown that educational programs in schools, pledges not to engage in risky behaviors and even reasoning with your child are not effective ways to change behaviors in teens.6 Yelling at your teen, especially if it includes harsh words, name-calling or other put-downs, is also counterproductive and likely to make your child even more disobedient, according to new research.7 So what’s a parent to do? Following are proven ways to help see your teenager safely through the highest-risk years:8

1. Know Who, What, Where, When and Why

Simply monitoring your teen, including knowing who he is with, what he is doing and when he’ll be home, greatly reduces risky behaviors like sexual activity and drug abuse. It may even be that boys tend to engage in more high-risk activities than girls because parents tend to keep closer tabs on their daughters than their sons.

This also ensures you’ll know your child’s friends, which is important because peer influences should not be underestimated at this age. If your child is associating with risk-taking friends, he’s more likely to engage in those behaviors as well. Encourage your teen to have his friends over to your house, where you can casually keep an eye on them.

2. Instill Traditional Values in Your Child

Starting early, show your child the importance of family time, taking pride in schoolwork and being involved in community and extracurricular activities. Family traditions and rituals like holiday meals and even running weekly errands help establish strong family bonds and reduce risk-taking in teens.

3. Help Your Teen Develop Competencies

The extended development of a skill, such as playing a musical instrument or taking care of horses, gives your child a way to be positively involved with an activity and, ideally, with their peers. Structured activities such as rehearsals, practices and recitals are typically supervised by an adult and help you establish protective influences around your child.

4. Build the Parent-Child Relationship

A child who feels loved, wanted, listened to and close to their parents is much less likely to engage in risky behaviors. The same goes for children whose parents are home at key times of the day – before and after school, at dinner and at bedtime. Avoid being either too strict or too lenient with your child, and establish consistent expectations while being open to compromise and letting things go when you can.

An update on the use and investigation of probiotics in health and disease.


Abstract

Probiotics are derived from traditional fermented foods, from beneficial commensals or from the environment. They act through diverse mechanisms affecting the composition or function of the commensal microbiota and by altering host epithelial and immunological responses. Certain probiotic interventions have shown promise in selected clinical conditions where aberrant microbiota have been reported, such as atopic dermatitis, necrotising enterocolitis, pouchitis and possibly irritable bowel syndrome. However, no studies have been conducted that can causally link clinical improvements to probiotic-induced microbiota changes. Whether a disease-prone microbiota pattern can be remodelled to a more robust, resilient and disease-free state by probiotic administration remains a key unanswered question. Progress in this area will be facilitated by: optimising strain, dose and product formulations, including protective commensal species; matching these formulations with selectively responsive subpopulations; and identifying ways to manipulate diet to modify bacterial profiles and metabolism.

Source: BMJ

Natural capsaicinoids improve swallow response in older patients with oropharyngeal dysphagia.


Abstract

Objective There is no pharmacological treatment for oropharyngeal dysphagia (OD). The aim of this study was to compare the therapeutic effect of stimulation of oropharyngeal transient receptor potential vanilloid type 1 (TRPV1) with that of thickeners in older patients with OD.

Design A clinical videofluoroscopic non-randomised study was performed to assess the signs of safety and efficacy of swallow and the swallow response in (1) 33 patients with OD (75.94±1.88 years) while swallowing 5, 10 and 20 ml of liquid (20.4 mPa.s), nectar (274.4 mPa.s), and pudding (3930 mPa.s) boluses; (2) 33 patients with OD (73.94±2.23 years) while swallowing 5, 10 and 20 ml nectar boluses, and two series of nectar boluses with 150 μM capsaicinoids and (3) 8 older controls (76.88±1.51 years) while swallowing 5, 10 and 20 ml nectar boluses.

Results Increasing bolus viscosity reduced the prevalence of laryngeal penetrations by 72.03% (p<0.05), increased pharyngeal residue by 41.37% (p<0.05), delayed the upper esophageal sphincter opening time and the larynx movement, and did not affect the laryngeal vestibule closure time or maximal hyoid displacement. Treatment with capsaicinoids reduced both penetrations by 50% (p<0.05) and pharyngeal residue by 50% (p<0.05), and shortened the time of laryngeal vestibule closure (p<0.001), upper esophageal sphincter opening (p<0.05) and maximal hyoid and laryngeal displacement.

Conclusion Stimulation of TRPV1 by capsaicinoids strongly improved safety and efficacy of swallow and shortened the swallow response in older patients with OD. Stimulation of TRPV1 might become a pharmacologic strategy to treat OD.

Source: BMJ

A novel method for determining the difficulty of colonoscopic polypectomy.


Abstract

Introduction Endoscopists are now expected to perform polypectomy routinely. Colonic polypectomy varies in difficulty, depending on polyp morphology, size, location and access. The measurement of the degree of difficulty of polypectomy, based on polyp characteristics, has not previously been described.

Objective To define the level of difficulty of polypectomy.

Methods Consensus by nine endoscopists regarding parameters that determine the complexity of a polyp was achieved through the Delphi method. The endoscopists then assigned a polyp complexity level to each possible combination of parameters. A scoring system to measure the difficulty level of a polyp was developed and validated by two different expert endoscopists.

Results Through two Delphi rounds, four factors for determining the complexity of a polypectomy were identified: size (S), morphology (M), site (S) and access (A). A scoring system was established, based on size (1–9 points), morphology (1–3 points), site (1–2 points) and access (1–3 points). Four polyp levels (with increasing level of complexity) were identified based on the range of scores obtained: level I (4–5), level II (6–9), level III (10–12) and level IV (>12). There was a high degree of interrater reliability for the polyp scores (intraclass correlation coefficient of 0.93) and levels (κ=0.888).
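To make the arithmetic of the scoring system concrete, here is a small sketch that sums the four sub-scores reported above (size 1–9, morphology 1–3, site 1–2, access 1–3) and maps the total to difficulty levels I–IV. How a given polyp earns each sub-score is defined in the original paper and is not reproduced here; the example values at the end are hypothetical, chosen only to total 11 points, the score the Discussion later assigns to a 3 cm sessile left-sided polyp with easy access.

```python
# Sketch of the SMSA-style scoring arithmetic described in the Results:
# sub-scores are summed and the total mapped to a difficulty level.
# How a given polyp earns each sub-score is defined in the original paper;
# the example values below are hypothetical.

SCORE_RANGES = {"size": (1, 9), "morphology": (1, 3), "site": (1, 2), "access": (1, 3)}

def smsa_level(size, morphology, site, access):
    """Return (total score, difficulty level I-IV) from the four sub-scores."""
    for name, value in zip(SCORE_RANGES, (size, morphology, site, access)):
        lo, hi = SCORE_RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name} sub-score must be in {lo}-{hi}")
    total = size + morphology + site + access
    if total <= 5:
        level = "I"
    elif total <= 9:
        level = "II"
    elif total <= 12:
        level = "III"
    else:
        level = "IV"
    return total, level

# Hypothetical sub-scores summing to 11 points (level III)
print(smsa_level(size=6, morphology=3, site=1, access=1))   # (11, 'III')
```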

Conclusions The scoring system is feasible and reliable. Defining polyp complexity levels may be useful for planning training, competency assessment and certification in colonoscopic polypectomy. This may allow for more efficient service delivery and referral pathways.

Discussion

There is recognised variability in polypectomy techniques.12–18 It is assumed that the choice of technique used for the removal of a particular polyp is determined by the polyp’s characteristics, that is, size, morphology, site and access (eg, endoscopic mucosal resection for a flat, 2 cm, right-sided polyp). These polyp-dependent variables influence the difficulty of a polypectomy procedure. However, polypectomy is also dependent on factors other than polyp characteristics, such as the endoscopist’s technical ability, scope stability, patient characteristics and the wider endoscopy team. Recent work has explored the assessment of polypectomy skills in more detail.19 The purpose of this study was to define and devise an easily reproducible scoring system that quantifies polyp characteristics and therefore links them to polypectomy levels of difficulty, which may inform training and competency assessment.

The Munich Polypectomy Study4 analysed 4000 snare polypectomies across 13 institutions and performed multivariate regression analysis to determine risk factors for polyp-related complications. The study results demonstrated that polyp size and right-sided location were associated with a higher complication rate. The authors concluded that polyps larger than 1 cm in the right colon or 2 cm in the left colon carried an increased risk of complications. Applying these cut-offs to our scoring system, right-sided lesions greater than 1 cm in size or left-sided lesions greater than 2 cm in size would score a minimum of 8 points. According to the Munich study, anything above this cut-off would qualify as high risk. Similarly, any polyp that scores above 8 points in this study would be deemed a relatively difficult (difficulty level III) polyp. It is expected that the majority of bowel cancer screening (BCS) colonoscopists should be able to manage level III polyps competently because these lesions are found frequently. If they did not have this level of competency, they would either be removing lesions they should not attempt, or referring too frequently to another operator, resulting in additional procedures.

The assigning of scores to polyps, and the creation of levels, may help endoscopists decide when not to attempt to remove a particularly challenging polyp. The aim of this work is not to discourage endoscopists operating at a particular level from attempting more complex polypectomy, but to highlight the increased risks of such lesions. This may help to streamline endoscopic referral services and reduce complications.

The scoring system and polyp levels were validated by two specialist endoscopists. This could possibly have skewed the scoring towards an expert level of ability. As an example, both experts assigned a 3 cm sessile, left-sided polyp with easy access (giving a score of 11), to level III. However, it is acknowledged that not all colonoscopists would be able to manage a lesion of this size and morphology competently. Whether or not a particular endoscopist opts to perform polypectomy on this type of lesion may depend on other individual or situation-specific factors, such as experience, technical ability, the competence of the supporting team and the availability of equipment. The scoring system may then serve as a guide alongside the above-mentioned factors. It is acknowledged that it is not applicable under all circumstances for all endoscopists, but may help define standards for each level. Furthermore, large-scale, prospective validation by a wider range of endoscopists is required to strengthen the reliability of this scoring system.

There was a high degree of interrater agreement between the two expert endoscopists with regard to polyp scores as well as overall polyp levels. This demonstrates that the experts generally agreed on the expected level of competency required for each polypectomy difficulty ‘level’. The experts agreed on the classification of level I and II polyps; however, for the more difficult lesions, there was disagreement in two cases, which were rated as level III by expert 1 and level IV by expert 2. This variation in assigning levels may be explained by differences in the experts’ individual experience or approach to polypectomy. However, it highlights the fact that individual judgement should be used in conjunction with the polyp level on a case-by-case basis. The assignment of polypectomy levels may have an application for endoscopists operating at different levels of training; for example, all endoscopists performing flexible sigmoidoscopy should be able to remove level I polyps safely, whereas a BCS endoscopist may be expected to remove a level III polyp competently, exercising judgement as to whether a level IV polyp might need referral to a tertiary centre. This would require a detailed discussion within the endoscopic community.
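For readers less familiar with the agreement statistics quoted in the Results (κ=0.888 for the levels), the short sketch below shows how Cohen’s kappa would be computed from two raters’ level assignments; the ratings used here are made up for illustration and are not the study data.

```python
# Cohen's kappa for two raters' polyp-level assignments (made-up example data).
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from sklearn.metrics import cohen_kappa_score

expert_1 = ["I", "II", "II", "III", "III", "IV", "I", "II", "III", "III"]
expert_2 = ["I", "II", "II", "III", "IV",  "IV", "I", "II", "III", "III"]

print(round(cohen_kappa_score(expert_1, expert_2), 3))
```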

The high interrater agreement for the scores assigned to each polyp illustrates that the scoring system is feasible and reproducible, and may help target training and assessment of polypectomy skills at different levels. However, we acknowledge that the two UK-based endoscopy experts in this study remain a highly selected group, which may have skewed their perception of what constitutes a difficult polypectomy. Further validation of this tool with a wider range of national and international endoscopists would enhance its applicability.

This study is the first to report a simple scoring system to determine the difficulty level of a polyp. It defines and quantifies easily measurable characteristics that determine the difficulty of a particular polypectomy. This, in turn, may help to stratify polypectomy ‘service levels’ and allocate resources to reflect the four levels of difficulty. Advanced, complex, or large sessile lesions generally require subspecialty endoscopic management to achieve complete and safe excision. They may require advanced endoscopic skills, specialised equipment, extra procedural time and a more experienced supporting team. They should thus be managed by specialists with the relevant expertise in the right environment. The choice between a surgical or endoscopic approach may depend on local expertise but the development of a network of specialist endoscopic teams may enable a wider choice for patients. A large Australian study20 has shown that when difficult or advanced lesions are managed by a tertiary endoscopic service, substantial cost savings can be realised with limited morbidity and no mortality when compared with surgery. Validation of the scoring system and polyp levels on a wider scale, and comparison with outcome data, may increase awareness in the endoscopic community and ultimately help improve polypectomy outcomes.

Key messages

What is already known on this topic

There are recognised differences in the difficulty level of polypectomy, based on polyp characteristics.

What this study adds

This is the first study which attempts to quantify the difficulty of polypectomy, using polyp characteristics.

Impact on clinical practice

The SMSA scoring system has wide utility for endoscopists and may help to stratify difficulty levels of polypectomy.

Source: BMJ