“The disease-centred model suggests that psychiatric drugs work because they reverse, or partially reverse, the disease or abnormality that gives rise to the symptoms of a particular psychiatric disorder. Thus ‘antipsychotics’ are thought to help to counteract the biological abnormalities that produce the symptoms of psychosis or schizophrenia… the ‘drug-centred’ model suggests that far from correcting an abnormal state, as the disease model suggests, psychiatric drugs induce an abnormal or altered state. Psychiatric drugs are psychoactive substances, like alcohol and heroin… The drug-centred model suggests that the psychoactive effects produced by some drugs can be useful therapeutically in some situations. They don’t do this in the way the disease-centred model suggests by normalising brain function. They do it by creating an abnormal or altered brain state that suppresses or replaces the manifestations of mental and behavioural problems.”
A summary of an introduction by Margaret Archer and colleagues:
Critical realism is a mess, but there are four themes in the literature. The first is ontological realism: there is some sort of reality out there existing independently of people’s minds. The second is epistemic relativism: our knowledge of reality is conditional on particular contexts, e.g., standpoints, theories, communities, conflicts of interest. The third is judgmental rationality: it is possible to decide whether one theory is better than another at explaining some phenomenon. The fourth is ethical naturalism: although “is” does not imply “ought”, the two are not independent; empirical research can help us determine what values we should hold.
I submitted a series of Freedom of Information (FOI) requests to Treasury, Department of Health, and NHS England, asking:
(i) Who is responsible for decisions made in relation to mental health care budgets?
(ii) How are budgetary decisions made, including evidence of how, in calculating the total health budget, mental health needs have been taken into consideration?
Treasury and DH both replied citing section 35 of the FOI Act. Releasing discussion of the options available, Treasury argued, might inhibit future “rigorous and candid assessments of options available”. DH replied similarly: “Premature disclosure of information protected under section 35 could prejudice good working relationships, the neutrality of civil servants”.
NHS England did reveal something of its decision-making processes, naming Paul Baumann, Chief Financial Officer for NHS England, as responsible for budgets, and citing a technical document whose technical annex sketches an estimate of likely growth in mental health costs over the coming years.
But Treasury and DH’s responses indicate that other factors have been taken into consideration that are not currently in the public domain. A rigorous debate about options, involving the people who need mental health services as well as those who provide them, requires transparency.
I am therefore writing to ask for more information concerning the reasoning behind decisions made. In particular, what discussion has there been of the following?
(i) The effectiveness of mental healthcare treatments and support, in comparison to physical health care;
(ii) The costs of the various treatments; and
(iii) The potential for reducing costs, e.g., by employing lower band staff or increasing involvement of voluntary services.
It is important that reasoning on these issues is made public so they can be openly debated.
The UK government promised a “drive towards an equal response to mental and physical health” in England as part of a five-year plan. Two years on, there is little sign that any progress has been made. Calls to improve mental health services peaked this month when former health secretaries spanning the past 20 years wrote an open letter criticising the government for “warm words” but no action.
There is a consensus that more funding should reach mental health care. But what should be funded, and exactly how? From April 2017, payments to adult mental health services must be linked to the quality and outcomes of care provided. National guidance published by NHS England and NHS Improvement claims that doing so will improve care, “ensuring value for money and the best use of limited resources”. But there is worrying evidence that doing so might have little impact and, at worst, actually harm services.
How will payment for performance work?
The money flows are complex. Here is a picture showing key parts of the system.
At the top end is the Treasury, which determines how much money health care receives, alongside all other public services. The Treasury does not directly determine how much money goes to mental health, however: it receives advice from bodies lower in the hierarchy, which it uses to calculate a total covering all areas of health.
Payment for performance will be at this final stage between commissioner and provider, and will be agreed locally between them. National guidance on how to implement the approach suggests that the chosen targets should be achievable yet stretching; informed by clinicians and people with experience of mental health problems; avoid creating an adversarial relationship between commissioners and providers; and should be used for the “reinforcement of positive behaviour”.
Oxford Health NHS Foundation Trust is provided as an example in the guidance. A fifth of its income will be linked to performance, which will include ensuring that people “improve their level of functioning”, determined using two measures.
One is the Mental Health Recovery Star, which tracks the progress of people who use mental health services by their ability to manage their mental health and feelings of hopefulness. This measure is completed jointly by people who use mental health services and staff providing care (such as psychiatrists, psychologists or nurses).
The other measure is a checklist rated only by staff which is used to track changes in symptoms such as depression and self-injury. The service has also promised its commissioners that it will ensure people live longer.
Does payment for performance improve services?
A recent systematic review of research found no evidence of impact when payment was linked to health outcomes, such as how long people live – which makes Oxford Health’s choice of outcomes puzzling. There was a small benefit when payment was linked to what services actually did, for example, providing cancer screening or recording whether someone smokes, as this was much easier for services to control than were the consequences of care.
Given national advice to involve people who use mental health services in decisions about outcomes chosen, it is also curious that the recovery star has been chosen. An increasingly influential group who use mental health services, called Recovery in the Bin, singled out the measure as “redundant, unhelpful, and blunt”, and suggested an alternative focusing more on the social causes of mental distress which are often ignored in outcomes.
Putting high-stakes targets on measures tends to mean that the measures stop measuring what they are supposed to measure because people cheat to achieve the targets. The effect is so common that it has a name: Goodhart’s law. For example, ambulance services had a target to get to the patient in eight minutes for life-threatening emergencies. This led to a third of services fiddling their timings towards the target.
There are various subtle ways to cheat outcome measures in mental health, such as by not bothering people who drop out of services with questionnaires to complete. People who drop out are less likely to have benefited from treatment, so excluding their answers from data analyses will improve a service’s apparent outcomes. Given the complexity of people’s experiences and predicaments, reducing them to scores on questionnaires can feel absurd, so it might be easy to justify this kind of gaming if it results in more funding which could improve the care provided. Gaming seems especially easy for measures completed by staff, who may be under pressure from management to tick the right boxes.
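To see how this kind of gaming works numerically, here is a minimal sketch; all the scores are invented purely for illustration (positive numbers stand for improvement on some hypothetical outcome questionnaire):

```python
# Illustrative sketch of how excluding dropouts flatters outcomes.
# All scores below are invented for demonstration only.
completers = [5, 4, 6, 5]    # people who finished treatment
dropouts = [0, -1, 1]        # people who dropped out, typically worse off

def mean(scores):
    return sum(scores) / len(scores)

apparent = mean(completers)             # dropouts quietly excluded
honest = mean(completers + dropouts)    # everyone counted
print(f"apparent improvement: {apparent:.1f}")   # 5.0
print(f"honest improvement: {honest:.1f}")       # 2.9
```

Nothing in the published data need look wrong: the service simply reports the “apparent” figure, and the people most likely to pull it down never appear.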
Outcomes measures have an important role to play in understanding and improving the care people receive and should be tracked as part of care, but linking them to payment risks demoralising staff and making the measures meaningless. This seems a dangerous path to take given the state mental health services are in. A better solution might lie further upstream at the Treasury when it decides how much money is available for mental health.
A council in England was recently reprimanded for running an advertising campaign against begging. In a series of posters displayed throughout Nottingham, the city council claimed that “beggars aren’t what they seem”, that begging “funds the misuse of drugs” and that money given to beggars would go “down the drain” or “up in smoke”.
The UK Advertising Standards Authority (ASA) upheld complaints about Nottingham City Council’s campaign, saying that it reinforced negative stereotypes against vulnerable people, and portrayed all beggars as “disingenuous and undeserving” people who would use direct donations irresponsibly. The council was ordered not to display the ads in their current form again, and to avoid using potentially offensive material in the future.
But the council defended the campaign, arguing that the “hard-hitting” posters were necessary to “discourage members of the public from giving money to people who beg” on the basis that doing so would likely fund “life-threatening drug or alcohol addictions”. The posters encouraged people to donate money to local charities instead, using the hashtag #givesmart.
Although the council cited a blog post from a local charity in support of its claims, it’s clear that both the advertising watchdog and members of the public need to see more evidence that such campaigns prevent harm, rather than cause it.
So, how could local authorities avoid such a misstep in the future?
For one thing, if the aim is to prevent the harms of drug and alcohol addiction, the council could follow existing health recommendations. The National Institute for Health and Care Excellence (NICE) – the body providing advice on best-practice for health and social care in England – makes a range of recommendations for helping people with alcohol addiction, for example. This includes following an evidence-based treatment manual and charting each person’s progress to review the effectiveness of different treatments. For homeless people, it recommends residential care for up to three months – it says nothing about trying to limit the amount of money that people receive.
But perhaps the council is keen to curb begging for other reasons. It may simply want to satisfy members of the public who find begging a nuisance, which would be deeply troubling. Or perhaps it has a rationale for how cutting money to people begging might somehow treat those who have drug and alcohol problems without causing anyone harm.
In any case, the council needs to be transparent about its aims and the evidence it has about the potential impacts of such campaigns so that an informed debate is possible.
Evaluating the evidence
There are many factors to take into account when evaluating the benefits and detriments of an ad campaign like this one. For instance, it would be useful to know how much money is given to people begging, how many of those people have alcohol or drug problems and how many seek out, or are given, support by local charities.
We would also need some hypotheses; for example, that the campaign will cause donations to local charities to rise, or drug and alcohol difficulties to fall among people who beg. These could be tested by tracking donations, or conducting surveys with people who beg both before and after the intervention, while taking account of any other factors that might have led to change.
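As a rough illustration of that logic, here is a minimal sketch of comparing change in a campaign area against change in a similar area with no campaign, so that background trends are subtracted out. Every figure here is invented for demonstration:

```python
# A hedged sketch of a before/after evaluation with a comparison area.
# All figures are invented. The "difference in differences" subtracts
# background change (seen in a similar area with no campaign) from
# the change observed in the campaign area.
campaign_area = {"before": 100, "after": 70}   # weekly direct donations
control_area = {"before": 100, "after": 90}    # similar area, no campaign

change_campaign = campaign_area["after"] - campaign_area["before"]  # -30
change_control = control_area["after"] - control_area["before"]     # -10

# Effect attributable to the campaign, under the assumption that both
# areas would otherwise have changed by the same amount.
effect = change_campaign - change_control
print(f"estimated campaign effect: {effect}")  # -20
```

The key assumption, that the two areas would have followed the same trend without the campaign, is exactly the sort of thing that needs to be stated and defended in any real evaluation.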
Of course, the outcomes of such research can vary greatly, depending on whose perspectives you include. For example, Camden and Islington councils once asked locals their views on diverted giving (donating to charity, rather than directly to people in need). While 36% were positive, only 2% of people who were actually begging thought it was a good idea.
Deciding who to include in studies is a perennial problem in social research, especially when evaluation reports present rich details of people’s lives. Nottingham Council included three brief case summaries in their reply to the ASA’s judgment. Here’s one of them:
A man and a woman, who had previously been the subject of a Criminal Anti-Social Behaviour Order (CRASBO), were not homeless but travelled in to the city centre to beg for cash to fund their drug and alcohol addictions. The man would act as a look-out for his partner while she begged in shop doorways.
It is unclear what criteria the council used to choose their examples, but other research offers a different perspective on what it’s like to beg. One study, conducted in Scotland in the late 1990s, reported on a range of difficult decisions that people had to make, for instance choosing between begging and crime:
My bru [social security] money ran out and I had nae money. I have got a criminal record, so the choice was go back tae being a criminal and dae crookin’ and that or dae beggin’ and no get the jail. I am sick of the jail and that, so I decided tae dae beggin’.
They also reported what it felt like to beg – it seems plausible that people begging in Nottingham will have similar experiences:
They just look down on you like you’re dirt … like there was one time this guy says ‘you’re homeless, you’re dirt, you don’t have to be there, get a job’
Complex social and behavioural questions like these can easily involve a complicated web of causes and effects. But mathematical tools such as causal networks may help: these can be designed and analysed using special software, which enables researchers to visualise the relationships between different factors in a diagram.
Example network of causal relationships.
Each of the circles and arrows has a mathematical meaning: researchers can constrain the networks using data collected from studies, or try out invented scenarios to explore the consequences of different policies before any is implemented. All evaluations of complex interventions will make assumptions and have limitations; these diagrams can be used to make those assumptions explicit, and sound out where more research is needed.
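As a toy illustration of the idea, the following sketch encodes a three-node causal chain and compares two scenarios. The node names and every probability are assumptions invented for demonstration; a real analysis would estimate them from survey and donation data:

```python
# A minimal causal-network sketch with invented probabilities.
# The assumed network is a chain:
#   campaign -> direct_giving -> money_funds_drug_use
p_direct_giving = {True: 0.4, False: 0.6}  # P(direct giving | campaign?)
p_funds_drugs = {True: 0.3, False: 0.1}    # P(funds drugs | direct giving?)

def p_money_funds_drugs(campaign_running):
    """Marginal probability that donated money funds drug use,
    summing over whether people give directly."""
    p_give = p_direct_giving[campaign_running]
    return p_give * p_funds_drugs[True] + (1 - p_give) * p_funds_drugs[False]

print(round(p_money_funds_drugs(False), 2))  # 0.22 without the campaign
print(round(p_money_funds_drugs(True), 2))   # 0.18 with the campaign
```

Even this tiny model makes the campaign’s implicit assumptions visible: its predicted benefit depends entirely on the two conditional probabilities, both of which could in principle be checked against data.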
Of course, all this is just a brief sketch of the complexities involved. Given the information released so far, it is unclear how deeply the council considered the potential harms or benefits of this campaign. Perhaps using causal networks to explain how they thought it would work, and what adverse effects had been accounted for, would help to reassure the public. It’s vital that local authorities make use of research to understand the unintended impacts of policy – especially when it affects the most vulnerable people in society.
Over the past few decades there has been a shift away from discrete categories to more dimensional ways of thinking about identity, experiences, beliefs, feelings, and activities. The Kinsey scale for sexual orientation, various political compasses, dimensional approaches to mental health difficulties and neurodiversity are some examples. So the idea is that you are neither straight nor gay, left nor right, healthy nor ill. There are dimensions and features which cut across the categories and vary in intensity. Psychological therapies are often discussed in terms of categories, e.g., CBT versus psychodynamic versus ACT. It has been recognised that there is much overlap in techniques used across the various brands, and taxonomies have been developed to try to dismantle brands. However, an enduring categorical distinction is between professional and non-professional. I’m curious to know what happens if you blur the distinctions further and think instead in terms of how people converse with each other, listen, empathise, and offer practical help. The professional dimension is orthogonal to the various ways of helping, focussed more on how reliable and accountable someone is.