What the evidence actually says
The structures we’ve built in philanthropy are often working directly against the outcomes we say we want. Short-term grants. Restricted funding. Compliance-heavy reporting. Annual cycles. Each one designed in good faith. Each one, often, doing damage we struggle to see.
We’ve each spent more than 25 years in and around the sector, on both sides of the funding relationship. As grantseekers trying to make the case, and as funders deciding where to put money. We’ve watched organisations do extraordinary work on shoestrings, and we’ve watched the same organisations burn through talented people because the funding model demanded it. We’ve also watched funders, Jo included, confuse activity with impact, and reporting with accountability.
What we’ve learned is that the gap between what funders intend and what organisations experience is larger than either side realises. This is not a new observation. But it’s one the sector keeps having to relearn, often without reference to what we already know.
There’s a growing body of rigorous evidence on what actually works when funders invest in organisations for the long term. Independent evaluations of major programs. Documented outcomes from community-led partnerships. Emerging practice from learning and evaluation specialists who are building better frameworks for understanding change. This evidence exists. It’s publicly available. As Jo wrote about in her last post, it’s not making it into the room often enough.
This series is our attempt to change that.
Over fifteen weeks, publishing every Wednesday from May through to August, we’ll draw on three bodies of knowledge:
Evidence: what major funder evaluations tell us about unrestricted funding, organisational development, and the conditions for impact.
Experience: what partners in the Australian field are telling us, including findings from research we’ve been part of ourselves.
Expertise: what learning and evaluation practitioners are building, here and internationally, that changes how we understand and document change.
[Image: May Miller-Dawkins and Jo Taylor leading a Haven Retreat for social change leaders in Aotearoa New Zealand.]
We’re doing this because it connects directly to the work we do at PRISM. We work with organisations and funders trying to think more clearly about collective leadership, learning, and how change actually happens. We lead The Giving Academy, which exists to build the knowledge and practice of people who give, or are thinking about giving, philanthropically. We convene Action Learning Sets for leaders navigating complexity. And we’re increasingly convinced that the conversation about funder practice in Australia needs more evidence and more honesty, and less consensus and less performance.
A word on what prompted the timing
Alongside colleagues at Westpac Foundation, English Family Foundation, Siddle Family Foundation and Philanthropy Australia, May co-produced a report called Shaping & Fuelling Change, the most comprehensive analysis of philanthropic investment in Australia's social enterprise ecosystem produced to date. It was designed deliberately to put civil society's voice at the heart of the inquiry. What came back was illuminating, uncomfortable, and necessary.
The research found genuine contributions: catalytic funding that proved new approaches before government would, patient capital that stayed long enough to matter, active stewardship that brought networks and expertise alongside money. And it found that funders were, sometimes in the same relationships, constraining what they thought they were enabling. Relationships experienced as partnerships were, from the other side, often transactions. And critically, across 25 years of philanthropic investment in the sector, the question of what organisations needed to be genuinely capable and sustainable had rarely been asked directly or funded consistently.
That finding sits at the heart of this series.
Impact isn’t just about what you fund. It’s about whether the conditions exist for organisations to think clearly, learn honestly, and work toward change over time.
Those conditions don’t happen by accident. They have to be designed and funded deliberately.
The posts that follow look first at what three major funder evaluations tell us: CEP’s three-year study of MacKenzie Scott’s giving, Ford Foundation’s independently evaluated BUILD program, and the PropelNext initiative incubated by Edna McConnell Clark Foundation. Then we’ll turn to the Australian field: what partners and practitioners here are experiencing and documenting. And finally we’ll look at what the learning and evaluation community is developing as the methodological infrastructure for a more honest conversation about impact.
Each post stands alone, but together they build an argument. We hope you’ll read the series as a whole and that by the end, you’ll have something concrete to take back into your own practice. But if you dip in and out, that’s ok too.
We’d love to hear what lands, what challenges you, and what you’d push back on. The comments are open on LinkedIn and we are curious to hear what you think.