We’re all committed to evidence-based practice. So what does that actually mean?

Lately I’ve been asked a version of the same question more times than I can count: What does good practice in philanthropy actually look like?

It’s not a surprising question, given my role as expert content lead at The Giving Academy. But the frequency of it has made me think. We’ve got a substantial and growing body of research on philanthropic practice, in Australia and globally. We have researchers and practitioners who’ve spent careers building the evidence base for what works, what doesn’t, and what we still don’t know. And yet, when I walk into rooms with funders, the thing I hear most often isn’t a reference to any of that. It’s some version of: ‘we’re unique, we’re the only ones doing this, we’re the first.’

Sometimes that’s true. Usually it’s not.

The claim nobody interrogates

‘We’re committed to evidence’ appears in some form in a remarkable proportion of foundation strategy documents. Often it’s treated as a statement of values rather than a description of method. Almost no one who writes it has paused to ask how it shows up in practice, including, honestly, those of us who commission and get excited about research.

It’s worth being precise about the language, because the terms can get muddled.

Good practice is what experienced, thoughtful practitioners have learnt to do. It’s real knowledge. Genevieve Timmons’ Savvy Giving is one of the best examples of this the Australian sector has produced: four decades in philanthropic management, governance, and teaching, distilled into practical tools and hard-won insight. The knowledge in that book isn’t theoretical. It comes from being in the room, repeatedly, across many different contexts, and paying close attention. Good practice is what communities and leaders have learnt from doing the work. Treating that knowledge as anecdote rather than evidence is one of the ways philanthropy has historically over-privileged certain forms of knowing over others.

Best practice implies comparison: the most effective known approach, based on some form of validation. In philanthropy, the term is applied loosely. ‘Best practice in grantmaking’ often means ‘what the most well-resourced foundations are doing,’ which isn’t the same thing. Where best practice is derived from genuine comparative evaluation, it’s useful. Where it’s simply the practice of respected organisations, it may replicate structural advantage as readily as effectiveness.

Evidence-informed practice is the honest position for most of what philanthropic funders do. It means taking research seriously, drawing on it, and using professional judgment to interpret what it means in a specific context. PRISM’s Action Learning Sets are a good example: the design draws directly on the adult learning literature and the evidence on peer learning and reflective practice. But each cohort is different. The research sets the design parameters; what happens in the room is shaped by who’s in it, what they’re carrying, and what questions are live for them right now. That requires more skill, not less, than following a strict protocol.

Evidence-based practice is the highest claim. It means rigorous, replicated research: randomised controlled trials, systematic reviews, results that hold across multiple contexts. It’s a high bar. It’s also, in most philanthropic contexts, a bar that hasn’t been cleared. And that’s fine. The mistake is pretending otherwise.

What the research actually shows

The good news is that the evidence base for philanthropic practice is richer than most funders seem to know. A few things it tells us with reasonable confidence:

Unrestricted, multi-year funding produces stronger organisations. The Ford Foundation’s BUILD initiative, evaluated over four years by NIRAS, found that 85% of grantees reported greater financial resilience by the end of their BUILD grant, and 83% said it had contributed significantly to their mission impact. The mechanism matters: it was the combination of long-term commitment, flexible funding, and institutional strengthening support working together, not any single element.

Large, trust-based grants without conditions or reporting burdens work at scale. The Centre for Effective Philanthropy spent three years studying MacKenzie Scott’s giving across more than 800 grantees. Almost 90% of leaders said the gifts had strengthened their organisation’s long-term financial sustainability. Leaders, particularly leaders of colour, reported measurably increased confidence and reduced burnout. Organisations grew faster than comparable organisations that didn’t receive a Scott grant. None of this was accidental. It’s what happened when funders got out of the way. But what is the Scott research really showing? That funders trusting what grantees already know about their own organisations and their own contexts produces measurably better results than funders second-guessing it.

Participatory grantmaking shows genuine promise, but the evidence is still being built. The 2023 report from the Centre for Evidence and Implementation, commissioned by the Paul Ramsay Foundation, found only six peer-reviewed papers on participatory grantmaking at the time of review. The findings were encouraging: better relationships, more relevant projects, stronger grantee capacity. The honest conclusion was that the approach warrants piloting and evaluation, not that it’s proven. That distinction matters.

Direct cash transfers are among the most rigorously tested interventions in existence. GiveDirectly has more than 200 independent studies behind it, including a 2013 RCT in Kenya that found significant improvements in consumption, assets, food security, and psychological wellbeing, with a 2018 follow-up confirming the asset gains held three years later. GiveWell updated its cost-effectiveness estimate of GiveDirectly’s core program upward by three to four times in November 2024, based on new data on local economic multiplier effects. This is what genuinely evidence-based practice looks like. It is also, notably, quite different from the context most Australian funders operate in.

The uniqueness problem

One of the most persistent habits in philanthropy is the claim to uniqueness. And I say this as someone who’s sat on both sides of the table.

In the early 2000s I was leading a national for-purpose organisation, and funders would ask us: What makes you unique, and how are you alone going to solve the complex problem of educational attainment for young people from low-income backgrounds? That framing now looks obviously wrong. We understand that no single organisation solves complex problems, and we largely stopped asking grantees to prove their uniqueness.

And then, as giving and philanthropic institutions grew, we picked the catch-cry back up ourselves. Now it’s funders claiming uniqueness, not grantees. Every new initiative believes it’s doing something no one has ever tried before.

Context matters enormously in social change work. A participatory grantmaking model that worked well in a US urban foundation may not transfer directly to a First Nations-led initiative in remote Australia. The cash transfer evidence from Kenya isn’t straightforwardly applicable to someone experiencing housing stress in Sydney.

But uniqueness is also being used to do something else: to avoid the discomfort of finding out that someone’s tried this before and it didn’t work as hoped, or that the approach only succeeds when certain fundamental elements are in place.

When I hear ‘we’re the only ones doing this,’ my instinct now is to ask: Have you looked? There’s work worth engaging with before claiming to be the first. May Miller-Dawkins’ national report maps twenty years of what catalytic, ecosystem-building funding has and hasn’t achieved in this country. Dr Sally McGeoch’s PhD research at CSI Swinburne examines cross-sector collaboration and what it actually takes to scale social enterprise impact. Erin Dolan’s Churchill Fellowship research investigated international housing funds across five countries to understand what philanthropic practice in housing could look like here. There’s the CEP research on grantee perceptions, and the BUILD evaluation on the conditions under which institutional strengthening actually works. The literature’s there.

Often, though, it goes unread. Not because people are lazy or incurious, but because the culture of philanthropy doesn’t consistently require engagement with it. We don’t yet have strong professional norms that say: before you design a new approach, you engage seriously with the existing evidence. Before you claim you’re the first, please check.

What I want to see change

I want us to hold formal research alongside what communities and leaders have learnt from doing the work. Not as a softer form of evidence, but as a vital companion. And to treat that as a genuine obligation. The evidence on what builds organisational capacity, what strengthens the funder-grantee relationship, what participatory approaches can and can’t deliver: it exists. It isn’t perfect or complete. But it’s better than starting from scratch and calling it innovation.

I want the sector to get more comfortable saying ‘we don’t know yet’ and ‘the evidence here is early.’ The CEI participatory grantmaking report is a model for this. It named what the research showed, and its limits. It was honest about where we are in the knowledge-building cycle. That kind of honesty is more useful to the field than confident claims that outrun the evidence.

And I’d love us to replace the ‘we are unique’ reflex with a different instinct: asking what the existing evidence tells us, and where our context genuinely requires us to adapt it.

Because we don’t need to be the first. But we do need to be useful. The evidence, lived and on the page, will help us all get there.

Sources and further reading

Ang, C., Abdo, M., Rose, V., Lim, R. & Taylor, J. (2023). Participatory Grantmaking: Building the Evidence. Centre for Evidence and Implementation, commissioned by Paul Ramsay Foundation.

Centre for Effective Philanthropy (2022–2025). Breaking the Mold: The Transformative Effect of MacKenzie Scott’s Big Gifts. cep.org

NIRAS / Ford Foundation (2022; 2025). BUILD Developmental Evaluation Final Report; BUILD Longitudinal Evaluation Final Report. fordfoundation.org

GiveDirectly (ongoing). Research on Cash Transfers. givedirectly.org. Synthesises 200+ independent studies.

GiveWell (November 2024). Updated cost-effectiveness estimate, GiveDirectly Cash for Poverty Relief. givewell.org

Haushofer, J. & Shapiro, J. (2016; 2018). Short- and long-term effects of unconditional cash transfers, Kenya. Quarterly Journal of Economics / J-PAL.

Miller-Dawkins, M. et al. (2026). Shaping & Fuelling Change: The Role and Practices of Philanthropy in the Social Enterprise Sector in Australia. Siddle Family Foundation, English Family Foundation, Philanthropy Australia.

Productivity Commission (2024). Future Foundations for Giving. pc.gov.au

Timmons, G. (2024). Savvy Giving (2nd ed.). Australian Communities Foundation / Bowen Street Press, RMIT University. savvy-giving.com

McGeoch, S. (in progress). PhD by Practice: Cross-sector collaboration in scaling Work Integration Social Enterprises. Centre for Social Impact, Swinburne University of Technology.

Dolan, E. (2024). Churchill Fellowship: Investigating international housing funds to increase affordable housing in Australia. Winston Churchill Memorial Trust.

