
The wellbeing cost-effectiveness of StrongMinds and Friendship Bench: Combining a systematic review and meta-analysis with charity-related data (Nov 2024 Update)

November 2024

Mental health disorders like depression and anxiety are common and significantly impact wellbeing, yet mental healthcare remains underfunded in low-income countries. Psychotherapy is an effective treatment that can be delivered cheaply by lay counsellors. This in-depth report evaluates the cost-effectiveness of two charities providing such therapy in Africa: Friendship Bench and StrongMinds. We estimate that Friendship Bench has a cost-effectiveness of 49 WELLBYs per $1,000 donated ($21 per WELLBY), and StrongMinds has a cost-effectiveness of 40 WELLBYs per $1,000 ($25 per WELLBY). Our results show that both charities are 5-6 times more cost-effective than cash transfers at improving subjective wellbeing. This is the fourth iteration of our analysis, which includes new data and refined methods. Our results are similar to the last version of the report, and we conclude that these two organisations are the most cost-effective charities (which are also well-evidenced) we have evaluated to date.
This is the summary of the report. Click the button above to read the pdf of the full report (83 pages) and its appendix (165 pages).

Mental health disorders like depression and anxiety are common and severely impact subjective wellbeing. Mental healthcare is poorly funded in low-income countries, making it a largely neglected problem. Fortunately, a low-cost solution exists. Psychotherapy effectively treats depression and anxiety, and it can be delivered relatively cheaply by lay (i.e., non-specialist) counsellors.

This report presents an in-depth cost-effectiveness evaluation of two charities delivering such lay-delivered talk psychotherapy in Africa: Friendship Bench and StrongMinds. This forms part of our broader work to assess the cost-effectiveness of interventions and charities based on their impact on subjective wellbeing, measured in terms of wellbeing-adjusted life years (WELLBYs). One WELLBY is equivalent to a 1-point increase on a 0-10 wellbeing scale for one person over one year.

We focus on subjective wellbeing because it is what ultimately matters in determining if someone’s life is going well. Using wellbeing as a common outcome allows us to make apples-to-apples comparisons between very different interventions.

We report the cost-effectiveness of these interventions in terms of WELLBYs per $1,000 donated to the organisation (‘WBp1k’), and, conversely, the cost for each organisation to produce one WELLBY. We estimate that:

  • Friendship Bench has a cost-effectiveness of 49 WBp1k, or $21 per WELLBY.
  • StrongMinds has a cost-effectiveness of 40 WBp1k, or $25 per WELLBY.
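The two reporting conventions are reciprocal views of the same quantity, scaled by 1,000. A minimal sketch in Python, using the headline effect and cost figures reported later in this summary:

```python
def wellbys_per_1000(effect_wellbys, cost_usd):
    """WELLBYs generated per $1,000 donated ('WBp1k')."""
    return effect_wellbys / cost_usd * 1000

def usd_per_wellby(effect_wellbys, cost_usd):
    """The reciprocal view: dollars required to produce one WELLBY."""
    return cost_usd / effect_wellbys

# Headline estimates from this report: effect per person treated, cost to treat.
friendship_bench = wellbys_per_1000(0.80, 16.50)  # ~48.5, reported as 49
strongminds = wellbys_per_1000(1.80, 44.56)       # ~40.4, reported as 40
givedirectly = 7.55                               # benchmark (McGuire et al., 2022a)

print(f"Friendship Bench: {friendship_bench:.1f} WBp1k, "
      f"{friendship_bench / givedirectly:.1f}x cash transfers")
```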

We have estimated the cost-effectiveness of GiveDirectly to be only 7.55 WBp1k (i.e., $132 per WELLBY) using a meta-analysis (McGuire et al., 2022a). GiveDirectly is an NGO which provides cash transfers to very poor households. We take cash transfers as a useful benchmark because they are a straightforward, plausibly cost-effective intervention with a solid evidence base.

Our results show that both psychotherapy interventions are roughly 5-6x more cost-effective than cash transfers at improving people’s subjective wellbeing. (For more detailed and updated charity comparisons, see our charity evaluations page.)

This is the fourth iteration of our analysis, reflecting several years of research and refinement to ensure rigorous and reliable evaluations. These updates are not routine, but driven by new data and methodological improvements that strengthen our confidence in the findings, ensuring that donors and decision-makers receive the most accurate, actionable insights available. We explain how this version builds on the previous ones at the end of the summary.

Our conclusion that these two organisations are the most cost-effective and well-evidenced charities we have evaluated to date has not changed since the last version.

In the rest of this summary, we briefly present our methodology, detailed results for the charities, and a history of the different versions of this analysis. For those interested in diving deeper into the technical rigour behind our conclusion, the rest of the report offers comprehensive explanations of the methods and findings. For methodologically-minded readers, we also include an extensive 165-page appendix to give the fine details of our analysis. We encourage readers whose questions are not addressed in the summary to consult the full report and/or appendix, as we have likely addressed similar concerns there (see buttons at the top of this page).

Methods

For each charity, we have three sources of evidence we can use to estimate the effect of the programme:

  • Our own expanded and improved meta-analysis of 84 randomised controlled trials (RCTs) of psychotherapy in low and middle income countries (LMICs).
  • RCTs of programmes related to the charities (4 for Friendship Bench and 1 for StrongMinds).
  • Monitoring and Evaluation (‘M&E’) pre-post data from the charities themselves.

Each of these sources presents a qualitatively distinct, but potentially informative, piece of evidence to draw upon.

This is how we analysed each source of evidence. We:

  1. Estimated the initial effect and duration, in order to calculate the total effect for the recipient over time.
  2. Adjusted the total effect to account for concerns about:
    • internal validity (e.g., publication bias)
    • external validity (e.g., the relevance of the evidence to how the charity delivers the programme in practice).
  3. Estimated household spillovers to obtain the overall benefit for the recipient and their household.

We then calculated a final effect estimate for each charity by combining the three estimates from the different evidence sources, using informed subjective weights. Finally, we calculated the cost-effectiveness by pairing the estimated effect for each charity with the estimated cost to deliver the intervention.
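The pipeline above can be sketched as follows. All numbers are placeholders for illustration; the report’s actual models (effect decay over time, the specific adjustments, and the weights) are more detailed:

```python
def recipient_effect(initial_effect, duration_years, validity_adjustment):
    """Steps 1-2: total effect over time, discounted for internal and external
    validity concerns. Assumes the effect scales linearly with duration, a
    simplification of the report's decay models."""
    return initial_effect * duration_years * validity_adjustment

def household_effect(recipient, spillover_multiplier):
    """Step 3: scale up to include benefits to other household members."""
    return recipient * spillover_multiplier

def combined_estimate(estimates, weights):
    """Weighted average across the three evidence sources."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(e * w for e, w in zip(estimates, weights))

def cost_effectiveness(effect_wellbys, cost_usd):
    """WELLBYs per $1,000 donated."""
    return effect_wellbys / cost_usd * 1000

# Placeholder estimates from the meta-analysis, charity RCTs, and M&E data.
sources = [
    household_effect(recipient_effect(0.50, 2.0, 0.7), 1.3),
    household_effect(recipient_effect(0.30, 1.5, 0.8), 1.3),
    household_effect(recipient_effect(0.60, 2.0, 0.5), 1.3),
]
weights = [0.6, 0.2, 0.2]  # informed subjective weights (illustrative)
effect = combined_estimate(sources, weights)
print(f"{cost_effectiveness(effect, 20.0):.1f} WBp1k at a $20 cost to treat")
```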

We also consider the following elements in determining our confidence in our cost-effectiveness estimates:

  • Depth of analysis.
  • Quality of evidence: We assess quality of evidence according to an adapted version of the ‘GRADE’ criteria, a widely-used and rigorous tool for assessing evidence quality across healthcare and research fields. The GRADE criteria for evidence quality are very stringent, so we expect very few interventions that we evaluate for wellbeing in LMICs (which tend to be less well-studied) will score more than ‘moderate’ on the quality of their evidence.
  • Robustness: We made the analytical choices that we consider to be the most appropriate. Nevertheless, we explore how robust our results are to other analytical choices which we think are less appropriate but may be plausible to others.
  • Site visits: We conducted site visits to both Friendship Bench and StrongMinds, which reassured us that they were operating professional and effective programmes. While we do not think site visits inform us much about cost-effectiveness, they are an important part of due diligence.

Friendship Bench

Friendship Bench is a charity operating in Zimbabwe that treats people with common mental health disorders (e.g., depression and anxiety) using a type of psychotherapy called problem-solving therapy. Friendship Bench’s standard programme consists of 1-6 sessions of individual counselling, which are delivered by trained community health workers.

We estimate that Friendship Bench has an overall effect of 0.80 WELLBY, and costs $16.50 to treat one client. This leads to a cost-effectiveness of 49 WBp1k, or a cost per WELLBY of $21. This is 6.4 times more cost-effective than cash transfers.

Our analysis was ‘in-depth’, which means we believe we have reviewed most or all of the relevant available evidence on the topic, and we have completed nearly all (e.g., 90%+) of the analyses we think are useful.

This is one of the most well-evidenced interventions we have evaluated to date. That being said, based on our stringent GRADE-adapted criteria, we rate the quality of evidence for Friendship Bench as ‘low to moderate’. This means there is more uncertainty about the effects than if high(er) quality evidence were available. This reflects how little excellent data there is for charity evaluations in LMICs, not a problem with Friendship Bench in particular. As mentioned previously, we expect few interventions that we evaluate in LMICs will have more than ‘moderate’ quality evidence.

Despite this uncertainty, we find Friendship Bench is still more cost-effective than the benchmark of GiveDirectly – even if we had applied more conservative analytic choices throughout our analysis rather than using the choices we think are most plausible (we present these robustness checks in Section 9.3).

Our biggest uncertainty concerns attendance (or dosage): recipients attended an average of only 1.12 of the 6 possible sessions, which is far lower than we would expect. Although we apply an adjustment to account for this, the programme is still plausibly cost-effective despite the low attendance (see Section 5.2.3 for more detail) because:

  • The first psychotherapy session is actively therapeutic, and guides participants through a complete problem-solving cycle (i.e., it is not just an orientation).
  • The first session involves psychoeducation (i.e., teaching participants about mental health), which can be particularly useful in LMICs where awareness of mental health issues tends to be limited.
  • The results are still more cost-effective than cash transfers, even if we apply the most stringent adjustment to account for the low attendance.

Our confidence would increase with further high-quality studies, evaluations of why clients attend few sessions, or improvements in participant attendance. We have also discussed this with Friendship Bench, who have told us that they have planned future external monitoring and evaluation of their programme.

StrongMinds

StrongMinds provides group interpersonal psychotherapy (IPT) for people struggling with depression. The core programme uses lay community health workers to deliver group IPT in 90-minute weekly sessions over six weeks, primarily in Uganda and Zambia.

We estimate that StrongMinds has an overall effect of 1.80 WELLBYs, and costs $44.56 to treat one client. This leads to a cost-effectiveness of 40 WBp1k, or a cost per WELLBY of $25. This is 5.3 times more cost-effective than cash transfers.

StrongMinds’ programme is more expensive per person treated, but also more effective, than Friendship Bench’s. Hence, the overall cost-effectiveness of the two charities is very similar.

Our analysis was ‘in-depth’, which means we believe we have reviewed most or all of the relevant available evidence on the topic, and we have completed nearly all (e.g., 90%+) of the analyses we think are useful.

This is also one of the most well-evidenced interventions we have evaluated to date. That being said, we rate the quality of evidence for StrongMinds as ‘low to moderate’ based on our stringent GRADE-adapted criteria. This means there is more uncertainty about the effects than if high(er) quality evidence were available. Again, we think this reflects how little good data there is, not a problem with StrongMinds specifically. We expect few interventions that we evaluate in LMICs will have more than ‘moderate’ quality evidence.

Despite this uncertainty, StrongMinds would remain more cost-effective than GiveDirectly even if we had applied more conservative analytic choices rather than using the choices we think are most plausible and correct – except for one analytical choice, which we do not consider plausible (explained below). We present these robustness checks in Section 9.3.

There is only one randomised controlled trial involving StrongMinds (a working paper by Baird et al., 2024), and it finds only very small effects compared to the other evidence sources. If one puts 100% of the weight on Baird et al. instead of the other sources, the cost-effectiveness falls to 6.95 WBp1k (just below, but close to, cash transfers).

However, despite the trial taking place in Uganda (where StrongMinds operates) and using a version of StrongMinds’ model, there are several ways in which the Baird et al. study is different from StrongMinds’ actual programme in the field today, which means we cannot generalise from it as much as one might expect.

Stated succinctly (see Section 3.2.2), the RCT was a 2019 pilot that marked the first time StrongMinds had implemented their programme via a partner organisation (BRAC), the first time they had worked with adolescents, and the first time they had used youth facilitators (StrongMinds primarily delivers therapy for adults, led by adults). The facilitators were inexperienced and given insufficient supervision. Attendance was low, with 44% of participants failing to attend any sessions. Furthermore, the long-term data collection overlapped with the COVID-19 pandemic. These issues are noted by Baird et al. (2024) and/or StrongMinds themselves (StrongMinds, 2024).

Hence, despite this study being, at first glance, an RCT of StrongMinds’ programme, we do not think it is very informative about StrongMinds’ own operations today. We give this source of evidence appreciable but limited weight: 20% of the total, with the remaining 80% coming from the meta-analysis and the monitoring and evaluation data (see Section 7).
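Because the final figure is a weighted average, the effect of the weight placed on Baird et al. can be sketched directly. This is a simplification that mixes at the level of cost-effectiveness (with costs held fixed, this is equivalent to mixing effects); only the two anchor values come from the report:

```python
BAIRD_ONLY = 6.95  # WBp1k if 100% of the weight were on Baird et al.
HEADLINE = 40.0    # WBp1k at the report's chosen 20% weight on Baird et al.

# Back out the implied cost-effectiveness of the other evidence sources
# (the meta-analysis and the M&E data), which carry the remaining 80%.
OTHER_SOURCES = (HEADLINE - 0.2 * BAIRD_ONLY) / 0.8

def wbp1k(baird_weight):
    """Cost-effectiveness as a function of the weight on Baird et al."""
    return baird_weight * BAIRD_ONLY + (1 - baird_weight) * OTHER_SOURCES

for w in (0.0, 0.2, 0.5, 1.0):
    print(f"weight on Baird et al. = {w:.1f}: {wbp1k(w):.1f} WBp1k")
```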

Our confidence in our estimate of StrongMinds’ cost-effectiveness would increase with further high-quality, relevant RCTs. We have discussed this with StrongMinds and a more relevant RCT is in the works.

Comparison to previous versions of the report

This report is the fourth iteration of our analysis. Our updates over time have been driven by new data and methodological improvements which have strengthened our confidence in the findings.

Readers may be interested to know that the emergence of wellbeing – or WELLBY – cost-effectiveness analysis is very recent, with all attempts we know of (either for charities or government policies) having happened in the last 5 years. We are the first organisation to have conducted these analyses in low-income countries, and also the first to have performed systematic reviews and meta-analyses of any intervention in terms of wellbeing. We hope others – and ourselves(!) – can learn from our processes and methods and, in the future, produce analyses in fewer versions.

Here, extremely briefly, is an account of various versions (see Appendix A for more detail):

  • Version 1 was our first meta-analysis of psychotherapy in LMICs and wellbeing cost-effectiveness analysis of StrongMinds (McGuire & Plant, 2021a; McGuire & Plant, 2021b).
  • In Version 2 (McGuire et al., 2022b), we added ‘household spillovers’ (i.e., the impact that receiving cash or therapy had on partners and children).
  • In Version 3 (McGuire et al., 2023), we made a large update by overhauling our analysis with a systematic review of psychotherapy in LMICs with 74 studies after exclusion of outliers (the V1-V2 meta-analysis was not a fully systematic review). We also added a cost-effectiveness analysis of Friendship Bench, and paid extra attention to internal and external validity adjustments (publication bias, dosage, etc.).

Our cost-effectiveness estimates changed between Version 3 and Version 4 in the following ways:

  • StrongMinds increased from 30 to 40 WBp1k.
  • Friendship Bench decreased from 58 to 49 WBp1k. And we have upgraded the depth of our analysis of Friendship Bench from ‘shallow’ to ‘in-depth’.

Here are the changes from Version 3 to Version 4 (some of which were already presented in an interim update, Version ‘3.5’; McGuire et al., 2024):

  • We extracted 44 additional small-sample studies that we did not have time to extract before (and, since Version 3.5, double-checked the extraction of all studies).
  • We rated studies for ‘risk of bias’ and excluded those with ‘high’ risk. And, since Version 3.5, we performed a second risk of bias evaluation.
  • The Baird et al. (2024) working paper came out, so we could include it in our analysis. The authors had shared some summary information with us in time for V3, but not a full draft paper, and we had used a placeholder value.
  • We updated our system for weighing and aggregating different pieces of evidence. Previously we relied on weights suggested by a formal Bayesian analysis, which were only based on statistical uncertainty. Now, we use subjective weights that are informed by the Bayesian analysis and a structured assessment of relevant characteristics based on the GRADE criteria.
  • We have also added charity monitoring and evaluation (‘M&E’) pre-post results as an additional source of evidence. However, we do not put much weight on it, because it is not causal evidence.
  • We now present a revised and expanded set of factors that influence our confidence in our cost-effectiveness analysis figures, including the depth of the analysis, quality of evidence, and robustness checks.
  • We have now conducted site visits of the charities as part of due diligence.
  • We also updated specific details of how the StrongMinds and Friendship Bench programmes are implemented to include more up-to-date 2023 figures for StrongMinds and Friendship Bench. This includes, for example, their costs, the number of people treated, and the average dosage received per person. Increases in cost-effectiveness have in part been driven by a decrease in the ‘cost to treat’ of the charities.
  • We also made a number of smaller updates and changes to our analysis, which we describe throughout this report.

The list below outlines the key topics covered in this report and its appendix, and where to find them.

  • Mental health, the role of psychotherapy, previous research, and the research gap we address – Section 1; Appendix M1
  • General methodology – Section 2; Appendix C
  • Data from the different sources of evidence – Section 3; Appendix B (systematic review)
  • Methods and results for the general meta-analysis of psychotherapy – Section 3.1; Section 4.1; Appendix C; Appendix D
  • Charity-related causal data and results – Section 3.2; Section 4.2
  • Charity-related pre-post data and results – Section 3.3; Section 4.3; Appendix K
  • Validity adjustments – Section 5; Appendix E (publication bias); Appendix F (range restriction); Appendix G (moderator analysis); Appendix I (other adjustments)
  • Discussion of Baird et al.’s relevance – Section 3.2.2; Section 4.2.2; Section 5.2.4; Section 7; Appendix L3
  • Discussion of Friendship Bench dosage – Section 5.2.3; Appendix H
  • Household spillovers – Section 6; Appendix M
  • Weighting of the evidence sources – Section 7; Appendix L
  • Cost and cost-effectiveness – Section 8; Appendix N
  • Confidence – Section 9
  • Quality of evidence (GRADE) – Section 2.6; Section 9.2; Appendix J
  • Sensitivity analysis and alternative choices – Section 9.3; Appendix O; Appendix P
  • Site visits – Section 9.4; Friendship Bench in Zimbabwe and StrongMinds in Uganda
  • Major uncertainties – Section 9.5; Section 7; Appendix L3; Section 5.2.3; Appendix H
  • Details from previous versions of this analysis – Appendix A
  • Comparisons with other charities – Our charity evaluations page on our website

Notes and acknowledgements

Updates note: This is Version 4 of this project. Our work may be updated in the future.

External appendix and summary spreadsheet note: This report is accompanied by an external appendix (see buttons at the top of this report). A summary spreadsheet is also available, but note that our analysis is conducted in R and explained in the report.

Author note: Joel McGuire (HLI), Samuel Dupret (HLI), and Ryan Dwyer (HLI) contributed to the conceptualization, investigation, analysis, data curation, and writing of the project. Michael Plant (HLI, University of Oxford) contributed to the conceptualization, supervision, and writing.

Joel McGuire, Maxwell Klapow (University of Oxford), Samuel Dupret, and Ryan Dwyer conducted the systematic review.

Maxwell Klapow, Deanna Giraldi (University of Oxford), Benjamin Olshin (University of Oxford), Thomas Beuchot (Institut Jean Nicod, ENS-PSL), Juliette Michelet (Université Paris Nanterre), Joel McGuire, and Samuel Dupret conducted the risk of bias analysis.

James Goddard (LSE), Ben Stewart (HLI), and Samuel Dupret double-checked the data extraction.

The views expressed in this document do not necessarily reflect the perspectives of reviewers or employees of the evaluated charities.

Reviewer note: We thank, in alphabetical order, the following reviewers for their help: Ben Alsop-ten Hove (Founders Pledge), Sam Bernecker (BetterUp), Paul Bolton (Johns Hopkins), Laura Castro (IPA), Ruby Dickson (Rethink Priorities), Barry Grimes (World Happiness Report), Ishaan Guptasarma (SoGive), Julian Jamison (University of Exeter, GPI Oxford), Ulf Johansson (Örebro), Casper Kaiser (University of Warwick), Matt Lerner (Founders Pledge), Crick Lund (KCL), Domenico Marsala (HLI), Katherine Venturo-Conerly (Shamiri, Harvard), Lingyao Tong (VU University Amsterdam), and the reviewers who have decided to remain anonymous.

We also thank Statistics Without Borders, a volunteer organisation that provides statistical consulting, for their advice on synthetic control methodology and the weighting of the different sources of evidence. Thank you to Nadja Rutsch, Naval Singh, Jacob Strock, and Francisco Avalos.

Charity information note: We thank Elly Atuhumuza, Jen Bass, Jess Brown, Rasa Dawson, Andrew Fraker, Roger Nokes, Kim Valente for providing information about StrongMinds. We also thank Lena Zamchiya, Ephraim Chiriseri, and Tapiwa Takaona for providing information about Friendship Bench.