
Charity comparisons

Overview of our 2024 charity recommendations

This video gives a high-level overview of our 2024 recommended charities, and a walkthrough of our groundbreaking new research reports.

The charities cover these cause areas:

  • Nutrition – Treating acute malnutrition in children (Taimaka)
  • Parenting – Improving child development by enriching parenting practices (Reach Up)
  • Crime – Using psychotherapy and cash transfers to reduce crime in Liberia (NEPI)
  • Lead poisoning – Preventing lead exposure in Ghana (Pure Earth)
  • Psychotherapy – An updated analysis of the impact of treating depression in low income countries (StrongMinds and Friendship Bench)

How do we make our recommendations?

We aim to find the charities that improve happiness most effectively, so you can help people as much as possible. We focus on charities in low-income countries, as that’s where we think donations will have the biggest impact. This page summarises the charity evaluations we have completed to date.

To help donors make decisions about where to donate, we distinguish between our top recommended charities, promising charities, and non-recommended charities (see below).

Top charities meet all of the following three criteria:

  • Cost-effective: they are among the most cost-effective interventions we’ve found so far [1]
  • Evidence-based: they have moderate or strong evidence underlying them
  • Investigated: we have conducted a medium or deep assessment of the organisation

Promising charities are likely extremely cost-effective, but we need stronger evidence for them to qualify as a “Top recommended charity”. For now, we see these charities as ‘higher-risk, but possibly higher-reward’ options.

Non-recommended charities are either not highly cost-effective or do not have a current funding gap. They may, nevertheless, be much more impactful than average charities.

Charity comparison

We evaluate charities on a single metric called wellbeing-adjusted life years (WELLBYs). The metric is simple: one WELLBY is equivalent to a 1-point increase on a 0-10 wellbeing scale for one year. You can learn more about WELLBYs on our charity evaluation methodology page.

We show the cost-effectiveness of the charities or interventions we have evaluated so far in Figure 1. The graph shows our estimate of how many WELLBYs would be created from a $1,000 donation.

For an overview of each charity we have evaluated, see our Give Now page.

Figure 1. Cost-effectiveness of charities and interventions.

Note. The bars represent the central estimate of cost-effectiveness (i.e., the point estimates). Our top charities are yellow, promising charities are blue, and non-recommended charities are grey [2].

We address common questions about our recommendations in the expandable sections below.

Which recommended charities should I donate to?

If you are a donor who wants to be confident that your donation is making a real difference, we recommend donating to our top charities. We think our two top charities are roughly equally cost-effective, so you might consider splitting your donation between them.

If you are a donor who is comfortable with more risk and uncertainty, but you want to maximise the expected value of your donation, you might consider donating to our promising charities. We think these are “higher risk, but potentially higher reward” options. If you want to donate to any of these charities, we recommend reading our full report so you are familiar with the uncertainties we have in our evaluation.

How do you work out the cost-effectiveness of the different charities?

At a high level, we separately collect and analyse evidence for (A) the impact a charity has and (B) the costs it incurs to produce that impact. We then divide the impact by the cost to get cost-effectiveness.

What does that look like, in practice? For the impact, we often start with academic studies of the effects an intervention has. For instance, to work out the effects of GiveDirectly cash transfers, we looked at the results of 12 published randomised controlled trials (RCTs) of cash transfers, where one group got nothing (the control group) and the other group got the cash (the treatment group). We measure impact as changes in self-reported wellbeing, that is, how much happier the treatment group was compared to the control group. We use this data to work out the initial effect, and the effect over time, to get the total effect for the direct recipient. We tend to make lots of (small) adjustments to account for issues with the studies or the context, and we may bring in other evidence (e.g., the monitoring and evaluation data from the organisation itself). We also account for ‘spillovers’, such as effects on the household. Once we have combined all this, we get a figure, in WELLBYs, for the total effect per intervention (in this example, a $1,000 cash transfer).

We then look at the cost per intervention. For cash, it costs roughly $200 in overheads to provide the $1,000, so the total cost to a donor to provide a $1,000 cash transfer is about $1,200. We often get this data directly from the organisation, or from public records. In general, we assume the average cost to provide an intervention is simply the organisation’s total costs divided by the number of interventions provided.

From these, we get to WELLBYs created per $1,000 of funding provided.
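The impact-divided-by-cost step can be sketched in a few lines of Python. This is only an illustration: the function name is ours, and the figures are the illustrative cash-transfer numbers quoted on this page, not internal data.

```python
# Minimal sketch of the cost-effectiveness calculation described above.
# Figures are the illustrative cash-transfer numbers from this page.
def wellbys_per_1000(total_wellbys, total_cost):
    """WELLBYs created per $1,000 of donor funding."""
    return total_wellbys / total_cost * 1000

# A $1,000 cash transfer: ~9.2 WELLBYs of total effect (recipient plus
# spillovers), ~$1,200 total cost ($1,000 delivered plus ~$200 overheads).
cash = wellbys_per_1000(9.2, 1200)
print(round(cash, 1))  # ~7.7 WELLBYs per $1,000
```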

For a fuller explanation, see our cost-effectiveness methodology page.

Why aren’t the ‘top recommendations’ just the most cost-effective ones?

While high cost-effectiveness is a necessary condition for us to recommend a charity, it is not the only condition. We also consider the quality of the evidence and the depth of our review.

Why do these things matter? Ultimately, we want to have reasonable confidence in the accuracy of our estimates, so donors can trust that their money is reliably making people happier. When the quality of evidence or depth of our review is lower, we are less certain about the exact cost-effectiveness, and our estimate is more likely to change in light of new information [3].

A factor in this is that, in our experience, cost-effectiveness estimates usually – but not always – go down with further scrutiny. Our ‘promising charities’ have been analysed in less depth than the top charities, so we would expect their cost-effectiveness to decrease somewhat on closer examination. However, we can’t point to particular factors that would alter the numbers: if we could, we would already have adjusted them.

So, our top charities are our best bet for donors who want to reliably have a positive impact on wellbeing. Our promising charities are a good option for donors who are comfortable with “higher risk, potentially higher reward”, or have a particular interest in a cause area.

Why is NEPI a promising charity, but Fortify Health is not, even though the cost-effectiveness estimates are very similar?

For NEPI, we’ve been particularly conservative in our analysis: there is limited data, and our educated guesses played a larger role than normal. This is unusual for a ‘promising’ charity – we expect to revise the numbers upwards on further inspection, but we are currently ‘playing it safe’. We think Fortify Health is a good option – it is about 3x better than cash – but we don’t have the same expectation that its numbers could be revised up by as much.

Why do you use cash transfers as a benchmark?

We often refer to the effectiveness of interventions in terms of ‘multiples of cash transfers’. Specifically, we treat cash transfers from GiveDirectly as our benchmark.

Why choose cash as our benchmark? There are two main reasons.

  1. The benefits of cash transfers are backed by a large, high-quality evidence base. This means we are confident both (A) that cash transfers are good and (B) about how good they are. That the estimate is relatively stable makes them a useful benchmark.
  2. Cash transfers can easily scale. Compared to other interventions, cash is simple to deliver and adaptable to many contexts.

Because cash provides reliable benefits that can be easily scaled, it is a good default, and we should only give to other charities if they are clearly more cost-effective.

Also, we recognise people aren’t used to thinking in WELLBYs; a comparison to multiples of cash is more intuitive for most [4].

We show the cost-effectiveness of the charities or interventions we have evaluated in comparison to cash transfers in Table 1.

Table 1. Cost-effectiveness of charities and interventions in comparison to cash transfers.

Charity                x GiveDirectly
StrongMinds            5.3
Friendship Bench       6.4
Pure Earth             14.3
Taimaka                8.7
Reach Up (icddr,b)     6.6
NEPI                   2.9
Fortify Health         2.9
GiveDirectly           1.0
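The ‘x GiveDirectly’ column is simply each charity’s WELLBYs per $1,000 divided by GiveDirectly’s. A quick sketch, using illustrative figures from this page (the function name is ours):

```python
# How the multiples in Table 1 are derived: each charity's WELLBYs per
# $1,000, divided by GiveDirectly's. Figures are illustrative.
GIVEDIRECTLY = 7.55  # WELLBYs per $1,000, as quoted on this page

def multiple_of_cash(wellbys_per_1000):
    return wellbys_per_1000 / GIVEDIRECTLY

print(round(multiple_of_cash(40), 1))  # StrongMinds: ~5.3
```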

 

Shouldn’t we just give people cash?

People usually take one of two perspectives when asking this question.

The first is that we should give people cash on the assumption it will be the most cost-effective way of helping people. This view is common among economists, who think that people are generally the best judges of what’s good for them.

We agree this is a reasonable starting view, but it should be tested. In the real world, people have imperfect information, limited motivation, and inadequate options. We need research – here, as elsewhere – to find out what works. Individuals can’t always just go out and purchase the intervention (such as psychotherapy in low- and middle-income countries), or it might be more beneficial to them than they expect, particularly when we account for factors like stigma. It also seems unlikely that ‘just giving cash’ can solve problems which need coordination, like those addressed by advocating for lead regulations.

The advantage of the WELLBY approach is that, through people’s self-reports, we get evidence on what actually makes a difference to their lives as they live them – not just what they (or evaluators) expect would matter.

The second perspective is that we should give people cash even if we have good evidence that something else would be more cost-effective. Those in this camp tend to prioritise autonomy. Our philosophical position is that autonomy is a contributor to happiness, but it’s people’s happiness that ultimately matters: an autonomy-maximising option may not be the best overall for someone. As it happens, our recommended charities are typically great for autonomy too: mental health conditions are debilitating, for instance, and addressing them empowers people.

Are all the charities directly tested against cash?

When we explain that we use cash as a comparison point and that we focus on low-income countries (LICs), some people assume that we’re looking at an average very poor person in LICs, then comparing the cost-effectiveness, for people in that group, of (A) providing cash, or (B) providing another intervention, like psychotherapy or high-protein food.

This is not what we are comparing. For the tailored interventions, we are considering their effects on a particular population with a specific problem. For instance, we consider therapy for those diagnosed with depression, or protein supplements for malnourished people. It wouldn’t make much sense to look at the cost-effectiveness of therapy for an average person: they aren’t the ones who will benefit most, and we are looking to make the biggest difference. In comparison, cash transfers are a less tailored intervention: they are typically given to anyone below a certain wealth threshold. Part of the reason that the non-cash interventions can be more cost-effective is that they solve a specific problem very well.

Let’s take the examples of StrongMinds and GiveDirectly. StrongMinds provides psychotherapy just for people with depression (who are also generally poor people in poor countries). We estimate the overall effect of treating one person with depression through StrongMinds is 1.8 WELLBYs. Each course of therapy costs $45 to deliver. So, StrongMinds creates 40 WELLBYs per $1,000.

GiveDirectly sends $1,000 cash transfers to people who are very poor (some might be depressed, some won’t). We estimate the impact of a $1,000 cash transfer to a person in poverty via GiveDirectly is 9.2 WELLBYs. The cost to provide each cash transfer is $1,221 (the $1,000 the person receives plus overheads). This means GiveDirectly creates 7.55 WELLBYs per $1,000, making it about 5 times less cost-effective than StrongMinds. But notice that StrongMinds focuses on a subset of the people that GiveDirectly does. So, we’re saying helping depressed poor people in LICs via therapy is 5x more cost-effective for those people than helping poor people in LICs via cash would be for another group [5].
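The arithmetic in this comparison can be checked directly, using the figures as quoted on this page (an illustrative sketch, not an official calculation):

```python
# Check the StrongMinds vs GiveDirectly arithmetic quoted above.
strongminds = 1.8 / 45 * 1000     # 1.8 WELLBYs per $45 course -> 40 per $1,000
givedirectly = 9.2 / 1221 * 1000  # 9.2 WELLBYs per $1,221 transfer -> ~7.5

print(round(strongminds))                    # 40 WELLBYs per $1,000
print(round(strongminds / givedirectly, 1))  # ~5.3x cash
```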

Endnotes

  • 1
    We don’t currently use a ‘bar’ to judge cost-effectiveness: we simply recommend the best things we know of. That said, we often estimate how cost-effective interventions are relative to cash transfers as a convenient point of comparison. Other evaluators may also use cash transfers as a benchmark, but unless they use the WELLBY framework, the cost-effectiveness ratios would not be directly comparable.
  • 2
    Deworming charities are not shown, because we are very uncertain of their cost-effectiveness. The Against Malaria Foundation is also not shown, because the estimate relies on philosophical views.
  • 3
    This can be put in a Bayesian framework, where one has a prior that charities have no effect. The stronger the evidence about a charity’s (cost-)effectiveness, the more likely one is to update away from the prior that there is no effect.
  • 4
    Note that other charity evaluators also use cash transfers as their benchmark, but this does not mean that the ratios are comparable. Unlike most other evaluators, we use a wellbeing framework for our evaluations. Furthermore, we typically include long-term effects and household spillovers. Hence, the ratios we have are in the context of our own rigorous methodology (see our methodology webpage and our GiveDirectly evaluation).
  • 5
    Put another way – and making linear assumptions about dosage – $45 for a course of psychotherapy for individuals with depression has the impact of a $224 cash transfer to a poor person. Or, put yet another way, for $1,220 one could fund 27 courses of psychotherapy for people with depression and produce 48.6 WELLBYs instead of just 9.2.