Chapter 05 of 06
Foundations say they want innovation. But strategic philanthropy's obsession with measurable outcomes has quietly killed risk-taking — and most funders don't know it yet.
New here? Start with the overview for context on why failure matters.
Based on: Kasper & Marcoux (SSIR, 2014) · Jen Ford Reedy (SSIR) · Atta Tarki (SSIR, 2020)
Roughly 2% of the 40,000+ applications to Gates Grand Challenges Explorations were funded — not because 98% were bad, but because funding innovation at scale requires a filter most foundations don't build
"When doing innovation, the first question is not 'Is this going to work?' but rather, 'If it works, would it matter?'"
— Eric Toone, former ARPA-E official, quoted in SSIR
Gates Grand Challenges Explorations processed 40,000+ applications, awarded 850+ initial grants across 57 countries, and deliberately let over a third of approvals hinge on a single reviewer's judgment — to protect unconventional ideas from being filtered out by committees. Most foundations do the opposite.
The central irony
The push for measurable outcomes — evidence-based grantmaking, program logic models, 90-day milestones — made foundations more accountable. It also made them worse at funding innovation. Foundations that score highest on accountability metrics fund the fewest experiments. The two goals are structurally opposed.
Innovation is rarely linear. It requires flexibility, iteration, and the willingness to count failures as data. But funders who demand quantifiable proof of success before the next grant cycle are — structurally — funding execution, not exploration.
Two types of philanthropic risk
Opportunity-cost risk: diverting funds from predictable grants to uncertain experiments. Reputational risk: damage to funder credibility if a bet fails publicly. Most foundations manage neither deliberately; they simply avoid both by funding safe, proven interventions.
A framework from the Bush Foundation
Jen Ford Reedy (President, Bush Foundation) proposes "failure optimization planning" — designing your bets so that when things don't work, they fail in the most valuable way possible.
Failures that educate the field, not just your org. Rockefeller's 1980s welfare-to-work study ran with 4,000 women across four cities and a control group — it didn't fix welfare, but it produced actionable data on childcare's role in employment that shaped policy for a decade.
In frontier missions: a digital outreach pilot for a specific unreached people group that doesn't gain traction can still produce the field's first real engagement data on what content formats work in a closed country — data every org in the space benefits from.
Ford Foundation's 1960s Indonesia economics initiative failed to strengthen universities — but the graduates went into government instead, accidentally strengthening national economic planning capacity under Suharto. The failure landed somewhere useful.
In frontier missions: a talent-matching platform that doesn't achieve product-market fit still puts 40 engineers and mission org leaders in the same room — relationships that produce informal partnerships and referrals for years after the product is sunset.
The Lasker Foundation's "War on Cancer" didn't cure cancer. But it built U.S. medical research infrastructure — labs, institutions, funding pipelines — that made every subsequent breakthrough possible. The failure left the system stronger than it found it.
In frontier missions: a fellowship program that doesn't produce the intended cohort of marketplace missionaries still trains org leaders to recruit and onboard professionals — institutional muscle that didn't exist before and makes the next attempt cheaper and faster.
The implication: Before you launch a program, ask not just "what does success look like?" but "if this fails, what's the best failure we could generate?" Rockefeller built this into their 1980s study design. Most foundations design for success and are surprised by failure. The Bush Foundation flips this — build slower, stay flexible, invest in capability over intervention.
When your success becomes your enemy
Atta Tarki — CEO of ECA and board member at Beautify Earth — tried to replicate a growth strategy from his for-profit firm in a nonprofit context. He hired an experienced fundraiser, gave her three months, and watched the experiment collapse. The post-mortem revealed three compounding failures: a vague, overloaded job description; a three-month timeline for a six-to-nine-month sales cycle; and, most importantly, an organization that had calcified around its current model.
Drawing on Clay Christensen's Innovator's Dilemma: as organizations succeed, they develop antibodies against anything that disrupts the delivery mechanism that made them successful. They don't kill innovation out of malice — they do it out of coherence. The org has learned what works, and new ideas feel like threats to that learning.
"Do it right, or not at all. Nonprofit innovation demands more intentionality than for-profit ventures — requiring full commitment or abandonment rather than compromised half-measures."
— Atta Tarki, Stanford Social Innovation Review, 2020
Beautify Earth's solution: don't try to innovate inside the org. Spin off an independent for-profit entity (BeautifyEarth.com) to run the risky bet — building a platform connecting artists, property owners, and donors. It raised $350K in pre-seed funding without triggering the antibody response. The insight: sometimes the org can't be the vehicle for what it needs to become.
Major donor fundraising runs on six-to-nine-month relationship cycles. Beautify Earth agreed to a three-month trial with no defined success metric. The lesson: trial periods without pre-committed success criteria are not experiments — they're just funding things you're already emotionally invested in, with a fake off-ramp you won't actually use.
The new hire was asked to do major donor fundraising, source mural walls, and manage small-donor membership simultaneously. "Strategy is choosing what not to do" — but the org didn't apply that principle to the role it was building. One function at a time. One metric. One clear mandate.
The untapped edge
Foundations don't answer to shareholders. They have no IPO, no earnings call, no stock price. This should make them the boldest risk-takers in the innovation ecosystem. Instead, most are more conservative than the markets — because reputational risk fills the vacuum where financial risk used to live.
Foundations can fund 10-year bets that no government agency can sustain through an election cycle. Robert Wood Johnson Foundation funded 911 system pilots when ambulances were operated by morticians. Aaron Diamond funded HIV/AIDS research before the federal government acknowledged the crisis. The time horizon is the advantage — but only if you use it.
Kasper & Marcoux identify where innovation actually enters the funding process: sourcing (reach beyond typical grantees), selecting (balance analysis with intuition), supporting (flexible, iterative structures), measuring (evaluate learning, not just outcomes), scaling (grow what works systematically). Most foundations only do the last two.
The cases are there: 911 infrastructure, antiretroviral HIV cocktails, Indonesia's economic planning capacity, the U.S. cancer research system. All foundation-funded bets that looked wrong at the time and turned out to be world-shaping. The cases go back 60 years. The behavior hasn't changed.
If you're funding FC-adjacent work, you have structural advantages that no VC or government agency has. You can take 15-year bets. You can fund things that don't have a spreadsheet. You can let a single reviewer protect an unconventional idea. Most of us don't use those advantages — we drift toward safety because safety feels like stewardship.
But playing it safe with money given for Kingdom purposes is its own kind of failure. The question isn't whether to take risk — it's how to fail well when you do. Design for good failures. Move slower. Build capacity, not just programs. And when something doesn't work, make sure it dies in a way that teaches the next person something worth knowing.
Key Takeaways
Sources