AI Might Actually Enforce All Our Stupid Laws, Expert Warns


Image by Getty / Futurism

Artificial intelligence has already changed the way we work, or at least the way bosses think work should be done, and largely for the worse. Now one AI researcher is warning that things could get much, much worse if machines are put to work enforcing the law in all its stupidity.

“If you’re not worried about the utter extinction of humanity,” AI researcher Eliezer Yudkowsky wrote in a tweet, “consider this scarier prospect: An AI reads the entire legal code — which no human can know or obey — and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.”

To be fair, Yudkowsky is often accused of being an “AI doomer,” a meme-laden label for the kind of often-deserved expert alarmism one sees regularly in climate research and, increasingly, in the AI space.

Then again, his point is salient not only because AI lawyers and algorithmic law enforcement tools have long been a thing — just recall the gross consequences of policing via facial recognition, for instance — but also because it makes a lot more sense than some of the other applications the tech’s being used for.

Whereas having AI write articles or create art is almost always going to fail the smell test because creativity is subjective and distinctly human, laws have far less heart and are purposefully strict as a means of governing humanity’s purportedly ungovernable nature (or so law lovers have been saying since time immemorial).

Some laws, of course, are undeniably good: the ones against murder, against harming children and animals, and against, say, your landlord deciding on a whim to lock you out of your house and never let you back in.

But as anyone who read Cracked.com back in the day will tell you, there are quite a few laws out there that either don’t make sense or are so antiquated that the very idea of enforcing them is goofy: from the many, many sodomy laws still on the books around the globe to the weird, random ones that must have funny stories behind them.

Though the concept of AI-inflected law seems particularly salient with a full year of ChatGPT mania behind us, algorithms have been used for all kinds of legal tasks for years now. Take, for instance, the many consumer-focused AI tools and products that can help simplify “legalese” into digestible English and help one figure out how to sue or be sued. One of these tools was so well-received, it won its creators an innovation award from Hofstra University last year.

On the flip side of that same coin, there is the all-too-real prospect that AI might not only help regular people navigate the law, but also help those who take it upon themselves to enforce it do so with the unflinching execution of the Terminator. That’s a genuinely terrifying thought: an AI that obstinately enforces every trivial rule you didn’t even know you broke, drowning you in stress, paperwork, and legal bills.

When one naysayer suggested that an AI trained on the legal code would be a “net good” because “then we will simplify the law to one that makes sense and not one where literally everyone is a criminal,” Yudkowsky had a pretty perfect retort: “if humanity was capable of doing that we’d have done it already.”

There is, of course, a non-zero chance that the noted researcher is just doing his doom thing and that nobody is going to feed the legal code into a legal AI that then decides it is the arbiter of all law.

“If you think that’s a dumb scenario, by all means go back to worrying about the utter extinction of humanity!” Yudkowsky exclaimed.

Responsible AI – a human right?


In the midst of the excitement about emerging AI-enabled technologies, find out how Ericsson is working to safeguard the integrity of tomorrow’s consumers and guide us all to an age of responsible AI.

Mikael Anneroth

Artificial intelligence, once known as “expert systems” back in the 1980s, will come of age in an era of 5G connectivity, making autonomous, disruptive technologies part and parcel of our everyday lives. New business models will emerge, and change across our societies will keep accelerating. Most of this change will be for the good of humanity, such as enabling us to be more efficient, enhancing our senses and our use of scarce resources, tackling climate change, or simply helping us to make better decisions. However, as is often reported in the media, there is also a risk that these AI-enabled systems could be misused, intentionally or unintentionally, to the detriment of humanity.

Building trust in technology through responsible AI

That’s why, at Ericsson, we are driving the notion of responsible AI. By this we mean that we need to be aware of the impact that AI-enabled systems might have when they are deployed in different contexts, and also that the AI systems themselves need to be programmed to act responsibly and fairly within their boundaries for a sustainable and trustworthy outcome. In doing so, we aim to mitigate possible adverse effects of AI and help build trust in the technology itself.

Today, we use machine learning and AI to support the operation and maintenance of our systems. Through new technologies, we can automate fault prevention and network optimization, which leads to increased reliability and trustworthiness of the networks. By analyzing network traffic and mobility patterns, operators can better serve their subscribers with tailored services and products. Today’s networks carry enormous amounts of data. In handling this data, we have a responsibility to make sure that it is accurate, that end-user privacy is preserved, and that it is safeguarded against threats. To do this we rely on a complex ecosystem of algorithms that must be designed for transparency and explainability, and trained so as to eliminate possible bias. These are just some of the things we, as a company, must continue to address when developing AI-enabled communication systems.
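
To make the fault-prevention idea above concrete, here is a minimal sketch of unsupervised anomaly detection over network KPIs. The data, thresholds, and model choice (scikit-learn’s IsolationForest) are illustrative assumptions, not Ericsson’s actual pipeline.

```python
# Minimal sketch: flagging anomalous network KPI samples with an unsupervised model.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-cell KPIs: [throughput_mbps, latency_ms, packet_loss_pct]
normal = rng.normal(loc=[100.0, 20.0, 0.5], scale=[10.0, 3.0, 0.2], size=(500, 3))
faulty = rng.normal(loc=[40.0, 80.0, 5.0], scale=[10.0, 10.0, 1.0], size=(10, 3))
samples = np.vstack([normal, faulty])

# Fit on the (mostly normal) traffic and flag outliers as candidate faults.
model = IsolationForest(contamination=0.02, random_state=0).fit(samples)
flags = model.predict(samples)  # +1 = normal, -1 = anomaly

print(f"Flagged {np.sum(flags == -1)} of {len(samples)} samples for inspection")
```

In a real deployment the flagged samples would feed an operations workflow rather than a print statement, which is exactly where the transparency and accountability questions discussed below come in.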

Find out how artificial intelligence is helping Ericsson to manage more intelligent network operations.

The challenges of artificial intelligence

In trying to understand the challenges, several companies, including Ericsson, are investigating both the potential of AI technologies and possible unintentional effects. At a more overarching level, we at Ericsson have identified six major challenges in this area:

  • Transparency and explainability: If AI systems are opaque and unable to explain how or why certain results are presented, this lack of transparency will undermine trust in the system. In which ways can autonomous systems explain themselves? (One simple post-hoc approach is sketched after this list.)
  • Security and privacy: Access to vast amounts of data will enable AI systems to identify patterns beyond human capabilities. In this there is a risk that the privacy of individuals could be breached. How can we as individuals secure and comprehend the use of data derived from our activities online or in real life?
  • Personal and public safety: Deploying autonomous systems (e.g. self-driving cars, UAVs or robotics) across public or industrial arenas could pose a risk of harm. How can we ensure human safety?
  • Bias and discrimination: Even if technology is neutral, it will only do what we program (and teach) it to do. Thus, it will be influenced by human and cognitive bias or skewed, incomplete learning data sets. How do we make sure that the use of AI systems does not discriminate in unintended ways?
  • Automation and human control: Trust in systems that both support and offload current work tasks risks undermining our knowledge of those skills. This will make it more difficult to judge the correctness and outcomes of these systems and, in the end, make human intervention impossible. How can we ensure human control of AI systems?
  • Accountability and regulation: With the introduction of new AI-driven systems, expectations on responsibility and accountability will increase. Who is responsible for use and potential misuse of AI systems?
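
As a concrete (and deliberately simplified) illustration of the transparency challenge above, the sketch below applies permutation importance, one common post-hoc explanation technique, to a hypothetical model trained on synthetic data. It is one of many possible approaches, not a prescription.

```python
# Minimal sketch: one way an opaque model can be made to "explain itself".
# The dataset and feature names are synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholder names

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each input degrade the model's score?
# A larger drop means that feature had more influence on the predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

Explanations like these do not resolve the challenge on their own, but they give operators and auditors something concrete to interrogate.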

Identifying pitfalls of AI

Looking at these challenges, it is easy to see that there is a need to address them from a more ethical point of view. What is right and what is wrong? Whatever our answer to this question, the decision process includes not only information and knowledge but, equally, our values and preferences. This means that the answer might not be the same for two individuals, yet AI-enabled decision support systems might be used across the world, adding to the complexity of judging the correctness of an outcome from a value perspective. How can we design normative applications that are truly global?

Going deeper into these challenges, we already know many of the potential pitfalls, but we do not yet know their value and relevance. If we really intend to mitigate undesired outcomes and create new technologies which are truly resilient, it is important that we go deeper in exploring control over technology, our understanding of how (and why) it behaves as it does, privacy and integrity on several levels, fair treatment, physical safety, unintentional adverse impact, intentional misuse, personal freedom, bias among developers and in learning algorithms, management of data and consent and, finally, the accountability and liability of technology.

Artificial intelligence and human rights

As a company, Ericsson is committed to respecting the UN Universal Declaration of Human Rights and to implementing the UN Guiding Principles (UNGPs) on Business and Human Rights. This includes actively addressing the potential adverse impact of our technology. So, how does this commitment address the challenges described above? By examining the risks to human rights, we can see that the UN declaration already addresses several of the challenges for AI systems: harming an individual’s right to life, eroding their right to dignity, intruding on their right to privacy, curtailing freedom of expression and thought, unfair treatment and unequal opportunities, discrimination due to biases, uneven distribution of benefit, and arbitrary interference in an individual’s life. If we map these principles side by side with the challenges described for AI systems, we can see several similarities. Some correlate directly with the challenges described above, while others relate to possible effects of intentional misuse or to unintentional effects of the technology.

In light of this small exercise we can conclude that if we address the ethical challenges of artificial intelligence systems, our solutions will also respect human rights as declared by the UN. Not all of these challenges are relevant for a company like Ericsson. Nevertheless, our goal is to understand how to identify those that are, and to carefully implement ways to address areas like transparency, explainability, bias in machine learning and data privacy in order to minimize any negative effects of using artificial intelligence in our systems and products.

AI bias and human rights: Why ethical AI matters


Recent examples of gender and cultural algorithmic bias in AI technologies remind us what is at stake when AI abandons the principles of inclusivity, trustworthiness and explainability. With AI becoming increasingly prevalent in our daily lives, it raises the question: without ethical AI, just how at risk are our human rights?

Mikael Anneroth

Ever been to an interview and not got the job?  If you have, then you’ve probably also been haunted by the ‘why’ question that follows.

Were you not qualified enough? Not good enough on the day? Or maybe it was something about you they just didn’t like – where you’re from, the words you use, or the clothes you wear.

Sadly, for many of us, those questions don’t end there. Today, reports still suggest that many are refused a job because of the color of their skin, the God they worship, their age, gender, sexual preference, social standing and more.

From human bias to AI bias

We live in a world of human bias. Each day, whether we like it or not, every decision we take is colored by our own biases, built up over years of our unique conditioning. These biases can muddy our ability to learn and reason in a way that is fair, non-discriminatory, and grounded in reason. They can create a discriminatory chain reaction.

Today, as we move into a world where code is being embedded into many facets of our daily lives, and algorithms – not humans – hold more sway in deciding if we get the job, get the loan, get the scholarship, get arrested, get to travel, and get just about everything else – will the same risks of bias and prejudice remain? In other words, can emerging AI-powered systems finally liberate us from thousands of years of human bias?

The answer to this question depends on how the world chooses to develop and deploy AI technologies. Without the principles of ethical AI, which include aspects such as explainability, safe AI, security, privacy, fairness, and human agency and oversight, there is a strong risk that our future societies will not only continue to project historical human biases but will also exacerbate them. Why? Because AI technologies are ultimately modelled, specified, and overseen by people, with all their flaws. As such, it is inevitable that we unconsciously carry our biases into the systems we create.

While it may be impossible to rid AI systems of human bias entirely, we can take every precaution to minimize its effects, such as through careful selection of training data, conscious data governance, and a diverse workforce that covers a whole range of inputs and offers a fair representation of our social structures.

Elena Fersman, Head of Ericsson’s Global AI Accelerator, sums this up brilliantly in her blog post on the importance of balance in AI: “One of the things that fascinate me most is that AI technology is inspired by humans and nature. This means that whatever humans found to be successful in their lives and in evolutionary processes can be used when creating new algorithms. Diversity, inclusion, balance, and flexibility are very important here as well, with respect to data and knowledge, and diverse organizations are for sure better equipped for creating responsible algorithms. In the era of big data, let’s make sure we don’t discriminate the small data.”
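
One concrete way to act on the “careful selection of training data” and data-governance points above is to routinely measure group-level outcomes of a system’s decisions. The sketch below applies the widely used “four-fifths rule” heuristic to invented hiring decisions; the groups, numbers, and threshold are illustrative assumptions, and real audits are considerably more involved.

```python
# Minimal sketch: auditing decisions for group-level disparity.
# The outcomes and group labels below are invented for illustration only.
from collections import defaultdict

# Hypothetical hiring decisions: (applicant_group, was_selected)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    selected[group] += int(outcome)

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Four-fifths rule heuristic: flag any group whose rate is below 80% of the highest rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential adverse impact for {group}: {rate:.2f} vs best {best:.2f}")
```

Checks like this only surface a symptom; tracing the cause back to the data and the labeling process is where the harder work begins.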

The root causes of AI bias

It sounds fairly straightforward, right? However, the harsh realities of today’s world can often make the theoretical harder to achieve than it should be. The unfortunate truth is that automated systems are often propped up by data sets built on thousands of low-paid labor hours and crowdsourced data – and, as reports suggest, very often by men. Without effective data governance or algorithmic hygiene, this can cause problems.

In 2017, AI researchers Kate Crawford and Trevor Paglen delved into the world of categorizing crowd work to explore if and how human bias was creeping into AI systems. In the process, their Excavating AI project, which examined how people were being labeled on the public image database ‘ImageNet’ – used as a dataset for many AI systems – found classificatory terms that were not only judgmental, but openly misogynist, racist, and ableist:

“You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a ‘slattern, slut, slovenly woman, trollop.’ A young man drinking beer is categorized as an ‘alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.’ A child wearing sunglasses is classified as a ‘failure, loser, non-starter, unsuccessful person.’ Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?”


What does it mean to fully trust a technology? Read more on how fast-growing tech needs to align with humans’ ethical principles if it’s to be embraced by society.

AI bias examples

Let’s look at a real example of how AI bias can infringe – and has infringed – on our human rights, and let’s return to our initial question of bias in hiring and recruitment.

Today, hundreds of blue-chip companies worldwide have turned to algorithm-based ‘emotional AI’ hiring platforms to augment their recruitment processes and lower their cost. While such AI-based systems may indeed offer a fairer and less biased means of recruitment, there have been some widely reported examples of what can happen when they go wrong.

This includes one particular example of women applicants being disproportionately rejected based on years of biased data in a male-dominated sector, as described by Noreena Hertz in her book ‘The Lonely Century’:

“In practice, stripped of my full, complex humanity I had to impress a machine whose black-box algorithmic workings I could never know. Which of my ‘data points’ was it focusing on and which was it weighting the most heavily? What formula was it using to assess me and was it fair? The challenge with machine learning is that even if the most obvious sources of bias are accounted for, what about less obvious, neutral-seeming data points that one might not even consider could be biased?”

Addressing AI bias – where to begin?

AI technologies are maturing and increasingly being deployed across our societies. According to a recent Ericsson AI Industry Lab report, an average of 49 percent of AI and analytics decision makers said they planned to complete their transformation journey to AI by the end of 2020. In the same study, it became apparent that the biggest obstacle to the deployment of AI technologies is not the technology itself, but rather people: 87 percent of respondents said they faced more people and culture challenges than technology or organizational challenges. Interestingly, of the ten most critical challenges faced by organizations, more than half relate to people and culture. These include deterrence factors such as employees preferring to stick to tried and tested methods, employees fearing they will lose their jobs, and many generally not understanding the technology or being open to change.

Essentially, it all comes down to a lack of understanding and a fear of relinquishing control. To clear up those misconceptions, we need AI which is transparent, understandable and explainable. We need AI which humans can trust. So how do we get from here to there?

1. Regulating a more ethical AI

In April 2021, the European Commission set a significant precedent in this area by launching its first ever legal framework on AI as well as a new Coordinated Plan with Member States which it says will “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.”

The new risk-based approach will set strict requirements for AI systems based on a pre-defined level of risk. It also places an immediate ban on AI systems which are considered to “be a threat to the safety, livelihood and rights of people” – including “systems that manipulate human behavior, circumvent users’ free will and allow social scoring by governments”.

Further adding to this momentum, in June 2021, Australia launched a similar AI ethics framework which it says will “guide businesses, governments and other organisations to responsibly design, develop and use AI.”

Yet while emerging ethical AI frameworks, such as those mentioned above, offer a robust and sustainable foundation for future AI development, they are not a silver bullet. Tech companies, governments, businesses and activist groups all play a role in developing and delivering AI which is ethical and inclusive. This was underlined in the European Parliament’s 2020 resolution on civil liability for AI, which states that it is never the AI system itself which is liable, but rather the range of actors across the whole value chain who create, maintain or control the risk associated with it.

2. Company and organizational engagement

With frameworks in place, businesses that wish to adopt AI technologies must soon demonstrate that they can implement the necessary requirements in their day-to-day operations and products.

This Ericsson blog post on ethics and AI lays down seven steps for how organizations can begin to build trust in AI technologies while aligning with the necessary regulation and standards. These include methodologies such as cultural and educational programs, risk assessments and third-party audit programs which, as the author says, organizations should already be looking to roll out today: “The trajectory for this work is moving towards a prevention, detection and response framework similar to those already in place for other ethics and compliance programs, such as anti-corruption, and prevention of tax evasion frameworks. Trustworthiness is emerging as a dominant prerequisite for AI and companies must take a proactive stance. If they don’t, we face a risk of regulatory uncertainty or over-regulation that will impede the uptake of AI, and subsequently societal growth.”

3. Rights and activist groups

We’re at the start of a long journey into new forms of social machinery, where the relationship between people and technology will continuously be redefined. To make sure that our societies remain on the right side of the ethical compass, it is critical that AI remains human-centric, where human agency is guaranteed, and our fundamental rights remain sacrosanct.

Civil rights and activist groups will undoubtedly have a key role to play on that journey, to continuously challenge the discourse and amplify the voices of those who are most adversely affected by new technologies.

Joy Buolamwini, AI researcher and contributor to the recent Netflix documentary Coded Bias, says that a new politics of refusal is needed to help steer new technology in the right direction: “One of the questions we should be asking in the first place is if the technology is necessary or if there are alternatives, and after we have asked that if the benefits outweigh the harm, we also need to do algorithmic hygiene. Algorithmic hygiene looks at who these systems work for and who it doesn’t. There is actually continuous oversight for how they are used.”

AI bias and ethical AI – what next?

One of the players leading the research into ethical AI is Ericsson Research, particularly in the areas of AI explainability, safety and verification – as expertly summed up in this blog post on trustworthy AI.

The technology ecosystem is still taking its first steps on a long journey and, as always, the first step is the most defining. The choices and investments we make today – whether that be regulatory, research-based or across product development and deployment – could ultimately define the world we strive to create tomorrow. And as a technology which will invariably impact all of us, we all have a stake in how AI is designed, developed and deployed – from researchers to regulators, and activists to journalists.

Cancer connected with nuclear disasters develops in Yemeni children


A girl cries next to her mother covering her face as they flee from an airstrike on an army weapons depot in Yemen's capital Sanaa. File photo. © Mohamed al-Sayaghi
The humanitarian catastrophe in Yemen, where people are dying of famine, lack of medical supplies and chemical weapons, may amount to genocide if the international community doesn’t act, says Kim Sharif of Human Rights for Yemen.

Katherine Johnson, the NASA Mathematician Who Advanced Human Rights with a Slide Rule and Pencil


NASA chief Charles Bolden recalls the historic trajectory of the “human computer” who played a key role in the Apollo 11 moon landing, and as a female African-American in the 1960s, shattered stereotypes in the process.


When I was growing up, in segregated South Carolina, African-American role models in national life were few and far between. Later, when my fellow flight students and I, in training at the Naval Air Station in Meridian, Mississippi, clustered around a small television watching the Apollo 11 moon landing, little did I know that one of the key figures responsible for its success was an unassuming black woman from West Virginia: Katherine Johnson. Hidden Figures is both an upcoming book and an upcoming movie about her incredible life, and, as the title suggests, Katherine worked behind the scenes but with incredible impact.

When Katherine began at NASA, she and her cohorts were known as “human computers,” and if you talk to her or read quotes from throughout her long career, you can see that precision, that humming mind, constantly at work. She is a human computer, indeed, but one with a quick wit, a quiet ambition, and a confidence in her talents that rose above her era and her surroundings.

“In math, you’re either right or you’re wrong,” she said. Her succinct words belie a deep curiosity about the world and dedication to her discipline, despite the prejudices of her time against both women and African-Americans. It was her duty to calculate orbital trajectories and flight times relative to the position of the moon—you know, simple things. In this day and age, when we increasingly rely on technology, it’s hard to believe that John Glenn himself tasked Katherine to double-check the results of the computer calculations before his historic orbital flight, the first by an American. The numbers of the human computer and the machine matched.

With a slide rule and a pencil, Katherine advanced the cause of human rights and the frontier of human achievement at the same time. Having graduated from high school at 14 and from college at 18, at a time when African-Americans often did not go beyond the eighth grade, she used her amazing facility with geometry to calculate Alan Shepard’s flight path and to help send the Apollo 11 crew to the moon to orbit it, land on it, and return safely to Earth.

I was so proud of Katherine as I sat with hundreds of other guests in the East Room of the White House and watched as she received the Presidential Medal of Freedom from President Obama last year. Katherine’s great mind and amazing talents advanced our freedoms at the most basic level—the freedom to pursue the biggest dreams we can possibly imagine and to step into any room in the country and take a seat at the table because our expertise and excellence deserve it. Katherine, now 97, took her seat without fanfare. As far as not being equal was concerned, she said, “I didn’t have time for that. My dad taught us ‘you are as good as anybody in this town, but you’re no better.’ ” I’d posit that Katherine was better—not only at math but also at applying her talents with the precision and beauty possible only in mathematics. She achieved the perfect parabola—casting herself to the stars and believing she could chart the journey home.

Why do Rastafarians use marijuana in their religion?



Rastafarians are associated with reggae music, dreadlocks, Bob Marley and of course marijuana. Rastas often refer to weed as “The Holy Herb” and consider it to be sacred. Do Rastas smoke marijuana just to get high, or does it have some other meaning in their culture and religion?

The Rastafari religion is stereotyped as one whose members are constantly stoned and whose whole movement is just an excuse to smoke a lot of pot. It is even seen by many as a cover for nothing more than a bunch of drug users and drug smugglers.

Rastafarians – what their religion teaches them about marijuana

Marijuana’s use as part of religious ceremonies is not new. The practice goes back for thousands of years in a variety of cultures. For example, in India and Nepal, traveling monks have used marijuana for centuries, and other religious groups have also used marijuana or viewed the substance as sacred, including the ancient Chinese, ancient Germanic pagans and Hindus. Many Rastafarians believe that cannabis originated in Africa and that it is part of their African culture that they are reclaiming.

Rastafarians feel that marijuana is important for their understanding of self, the universe and God. The use of cannabis is part of what the Rastafari refer to as “reasoning sessions” where members join up and are encouraged to interact and discuss life according to the Rasta perspective. Rastafarians reject materialism, oppression and sensual pleasures, called “Babylon.” In fact, they see the marijuana plant as the “Tree of Life” mentioned in the Bible and often quote scriptures that support their beliefs. For example, at Revelation 22:2, the phrase “the leaves of the tree [of life] were for the healing of the nations” refers to the marijuana plant, according to them. While marijuana use forms part of their beliefs, it is not compulsory for a Rastafarian to smoke it.

Rastas fight for the right to use marijuana as part of their religion

In South Africa, Rastafarian lawyer Gareth Prince has been challenging legislation that outlaws dagga (the South African word for marijuana), notes LegalBrief.co.za, citing a report in The Mercury. Prince has requested that certain sections of the Drugs and Drug Trafficking Act and the Criminal Procedure Act be declared invalid, among other things.

Prince himself faces criminal charges in the Khayelitsha Regional Court for dagga possession, dealing and cultivation. He questioned to what extent the government could dictate what people ate, drank and smoked. The case has been postponed.

In the US, government and corporate propaganda has caused marijuana to be seen as a dangerous drug that should be illegal, although many states have now legalized the plant for medicinal use and some for recreational use. A massive number of people in the US have been sentenced to prison for possession of health-promoting marijuana, even in cases where they have claimed that they use the substance for religious or spiritual reasons.

Emotional toxicity of austerity eroding mental health, say 400 experts


“Malign” welfare reforms and severe austerity measures are having a detrimental effect on Britons’ psychological and emotional wellbeing, hundreds of psychotherapists, counselors and mental health practitioners have warned.

Reuters / Dylan Martinez

An open letter, published by the Guardian on Friday, said the “profoundly disturbing” implications for Britons wrought by the coalition’s austerity policies have been ignored in the general election campaign so far.

The group of signatories, made up of therapists, psychotherapists and mental health experts, said Britain has seen a “radical shift” in the mental state of ordinary people since the coalition came to power.

They warned people are plagued by increasing inequality and poverty as a result of the government’s austerity policies, and this reality is generating distress across the nation.

The 400 signatories, from all corners of Britain, said the government’s welfare reforms have caused emotional and mental trauma to Britons – forcing families to relocate against their will and burdening disabled, ill and unemployed benefit claimants with an intimidating benefits regime.

On a broader level, they warned British society has been ruptured by a neoliberal dogma that has serious socio-economic impacts.

British society has been “thrown completely off balance by the emotional toxicity of neoliberal thinking” and the grueling effects of this ideology are particularly visible in therapists’ consulting rooms, they said.

“This letter sounds the starting-bell for a broadly based campaign of organizations and professionals against the damage that neoliberalism is doing to the nation’s mental health,” they added.

Fit to Work: A call for reform

The letter was particularly critical of the government’s benefits sanctions scheme, which has been condemned by human rights advocates across the country as unjust, ill-conceived, ineffective and inhumane.

In particular, the mental health experts said the government’s proposed policy of linking social security benefits to the receipt of “state therapy” is utterly unacceptable.

The measure, casually dubbed “get to work therapy,” was first mooted by Chancellor of the Exchequer George Osborne in his last budget.

But the letter’s signatories, all of whom are experts in the field of mental health, argue it is counter-productive, “anti-therapeutic” and damaging.

Although the government’s much criticized Fit for Work program will no longer be managed by disgraced contractor Atos, the letter said the new company set to manage the nation’s work capability assessments is an “ominous replacement.”

The mental health experts called upon the sector’s key professional bodies to “wake up to these malign developments” and categorically denounce this “so-called therapy” as destructive.

The signatories called upon Britain’s political parties running for election, particularly Labour, to offer a resolute pledge to “urgently review” these regressive practices and prove their “much trumpeted commitment to mental health” if they enter government.

Among the groups represented by the signatories were Britain’s Alliance for Counselling and Psychotherapy, Psychotherapists and Counsellors for Social Responsibility, Disabled People Against Cuts, Psychologists Against Austerity, the Journal of Public Mental Health, and a range of academic institutions including Goldsmiths, Birkbeck, the University of London, the University of Amsterdam, Manchester Metropolitan University, the University of Brighton and others.

Although the coalition claims austerity is essential if the nation’s high levels of debt are to be eradicated and the disastrous economic legacy of the previous Labour government is to be addressed, progressive economists argue otherwise.

According to UK think tank the New Economics Foundation, austerity is a smokescreen for advancing a neoliberal agenda characterized by privatization, outsourcing and radical socio-economic reforms.

The think tank suggests Britain’s social and economic ills stem from an economic crisis created by banks and paid for by ordinary taxpayers.

It says Britain desperately requires a shift from the tired austerity narrative that dominates mainstream British politics, and must move towards more progressive and sustainable economic policies that will free the nation from casino capitalism, boom-bust cycles and the erosion of the welfare state.

A spokesman for the Conservative Party told RT the party believes mental health should be treated in the same manner as physical health.

“But for too long, that was not the case – so we legislated for parity of esteem, meaning they’ll be treated with equal priority,” he said.

“Our long-term economic plan means we’ve been able to increase spending on the NHS by £12.9 billion. This has meant that we can put £400 million into improving access to psychological therapies.”

“We are also investing £1.25 billion into funding service improvement, particularly for children. And from April 2016 we are introducing the first waiting time standards for mental health treatments so no one should have to wait longer than 18 weeks for talking therapies.”

A spokesperson for Labour said mental health “is the biggest unaddressed health challenge of our age.”

“It’s essential that we give mental health the priority it deserves if we are to thrive as a nation and ensure the NHS remains sustainable for the future,” he said.

He argued it was Labour that forced the coalition government to “write parity of esteem between physical and mental health into law,” and that the party is committed to implementing this policy if elected in May.

The spokesman pledged Labour will bring an end to the “scandal of the neglect of child mental health.”

“It is simply not right that when three quarters of adult mental illnesses begin in childhood, children’s mental health services get just six per cent of the mental health budget,” he said.

Inmate’s family sues Ohio after ‘agonizing’ execution with untested drug protocol


 

Reuters / Handout

Convicted killer Dennis McGuire struggled noticeably for his life during a lengthy lethal injection procedure in Ohio on Thursday, and now his family plans to sue the state for violating his constitutional rights.

A press conference is scheduled for Friday, where the executed man’s children, Amber and Dennis McGuire, and their attorneys will argue the state violated their father’s right to be free of “cruel and unusual punishment.”

In what amounted to an unusually long time for a lethal injection, it took McGuire about 25 minutes to die after being injected with an untested combination of drugs that had never been used before in an execution in the United States.

For about 10 minutes, the controversial cocktail of midazolam and hydromorphone resulted in McGuire “struggling and gasping loudly for air, making snorting and choking sounds that lasted for at least 10 minutes, with his chest heaving and his fist clenched. Deep, rattling sounds emanated from his mouth,” as reported by the Columbus Dispatch.

Soon after McGuire’s death, his attorney Allen Bohnert called the execution “a failed, agonizing experiment by the state of Ohio.”

“The court’s concerns expressed earlier this week have been confirmed,” Bohnert added, according to the Associated Press. “And more importantly, the people of the state of Ohio should be appalled at what was done here today in their names.”

Last week, Bohnert tried to argue that McGuire was at risk of “agony and terror” since the new drug combination could cut off his air supply as he died, but the plea ultimately failed as judges ruled in favor of the state.

The use of midazolam, in particular, has been called into question in the past, as critics believe it leaves inmates aware of their surroundings and in extreme pain as they die.

Dennis McGuire. (AFP Photo / Ohio Department of Rehabilitation and Correction)

“I watched his stomach heave,” said Amber McGuire in a statement, according to the Dispatch. “I watched him try to sit up against the straps on the gurney. I watched him repeatedly clench his fist. It appeared to me he was fighting for his life but suffocating.”

McGuire was originally convicted back in 1994 of raping and killing Joy Stewart, who was pregnant at the time. His pleas for clemency had been denied, and Stewart’s family issued the following statement on the situation surrounding McGuire’s death.

“There has been a lot of controversy regarding the drugs that are to be used in his execution, concern that he might feel terror, that he might suffer. As I recall the events preceding her death, forcing her from the car, attempting to rape her vaginally, sodomizing her, choking her, stabbing her, I know she suffered terror and pain. He is being treated far more humanely than he treated her.”

The behavior of Ohio and other states that condone the death penalty has come under fire since most of the companies that traditionally manufacture the drugs used in lethal injections – generally based in Europe and opposed to capital punishment – have halted sales to state correctional departments.

In an effort to replace diminishing supplies of sedatives and paralytics, many states have begun experimenting with alternative drug mixtures, including products typically used to euthanize animals.

As the AP noted, Bohnert has urged Ohio Governor John Kasich to place a moratorium on executions following McGuire’s death. According to the Dispatch, at least one judge, Gregory L. Frost of the U.S. District Court in Cincinnati, cast suspicion on the state’s behavior concerning executions in 2013.

“Ohio has been in a dubious cycle of defending often indefensible conduct, subsequently reforming its protocol when called on that conduct, and then failing to follow through on its own reforms,” he wrote in an unrelated case last year.

 

Death row inmates now executed with drug cocktail used to euthanize animals.


San Quentin Prison execution chamber, US (AFP Photo)

Compounding pharmacies, which create specialized pharmaceutical products meant to fit the needs of a patient, have begun producing the drugs for state authorities.

But because of the lack of transparency around the production process – one compounding pharmacy was responsible for a fatal meningitis outbreak in 2012 because of poor hygiene – prisoners argue that risky drug cocktails put them at risk of being subjected to “cruel and unusual punishment,” which is prohibited under the US Constitution.

Earlier this month, three Texas-based death row prisoners filed a lawsuit arguing that this type of pharmacy is “not subject to stringent FDA regulations” and is “one of the leading sources for counterfeit drugs entering the US,” as quoted by AFP.

“There is a significant chance that [the pentobarbital] could be contaminated, creating a grave likelihood that the lethal injection process could be extremely painful, or harm or handicap plaintiffs without actually killing them,” it adds.

“Nobody really knows the quality of the drugs, because of the lack of oversight,” Denno told AFP.

Michael Yowell, who was convicted of murdering his parents 15 years ago, was executed in Texas on Wednesday. He became the first inmate to be executed in Texas with pentobarbital since European nations halted production of the drug for this purpose. His lawyers unsuccessfully tried to stop the execution, saying that compounded pentobarbital is unpredictable and that there have not been enough trials to guarantee the death is painless.

The states in question may find a workable replacement in the short term but, Denno argued, this development could be an indication that capital punishment is on the wane.

“How many times in this country can they change the way they execute?” she said. “There were more changes in lethal injections in the last 5 years than in the 25 preceding years.”