Measuring what matters

Evaluation strategies for challenges, prizes, and open calls.

Measuring the return on external innovation investments remains surprisingly difficult. While other investment methods have well-established measurement frameworks — venture capital tracks capital returns, grant programs evaluate effectiveness — organizations leveraging innovation challenges or open calls often operate without clear benchmarks. A NESTA report, citing research from McKinsey, found that more than 40% of challenge prizes were never evaluated for impact. This evaluation gap isn’t due to negligence. The same factors that make open innovation such a powerful tool also make it challenging to measure.

Open innovation is a dynamic strategy that can help organizations across sectors solve a broad range of problems. This flexibility means that innovation programs often have fundamentally different definitions of success. One program might prioritize stimulating innovation in an existing market, while another might focus on soliciting proposals for tools that don’t yet exist. External innovation programs work across diverse fields and issues, from health and climate to CPGs and automotive, but different sectors have different expectations for return on their investments. Further complicating measurement and evaluation efforts, open innovation programs can span months or years, from initial solution solicitation through extended accelerators and technical assistance programs.

At Luminary Labs, we work with our clients to design programs with specific outcomes in mind — and we measure them. While our published case studies document program context, successes, challenges, and early results, our open innovation outcomes surveys take a wider view of longer-term impact.

In our most recent survey, we reached out to winning entities from 26 open innovation challenges we designed and produced from 2011 through 2024. The results demonstrate the sustained value of these programs: Nine out of 10 winning teams continued to develop their solutions after the challenge, and 62 teams collectively raised at least $630 million in post-challenge capital — success that extended well beyond the competition itself. The survey also sheds light on what really motivates participants to enter a challenge, and what kind of support matters most — beyond the funding. This kind of ongoing evaluation can help us better understand how innovation programs actually produce outcomes.

Learning from the field

As we continue refining our own evaluation approaches, we’re constantly learning from how others in the field are defining impact and tackling these measurement challenges. The reports and frameworks below are far from comprehensive — they represent just a small sampling of the work being done to understand when open innovation really works. With a growing body of research and practical experience, these examples offer a starting point for understanding the diverse ways practitioners and researchers are approaching open innovation evaluation.

Are you working on open innovation evaluation yourself or know of analyses we should be reading? We’re always looking to learn from the field — share your examples and insights with us.

Open Innovation Outcomes Survey, Luminary Labs (2025)

Luminary Labs’ outcomes survey captures post-program outcomes from challenge winners across our focus areas. Building on results from a similar survey conducted in 2018, the most recent survey spanned 26 open innovation challenges we designed and produced from 2011 through 2024. In December 2025, we hosted a live webinar discussing open innovation strategy and outcomes. You can view a recording of the event on our website.

The Innovation Equation: Challenge Works RCT Evaluation, NESTA (ongoing)

Working alongside leading academics from the University of California San Diego and policy experts from the Innovation Growth Lab, Challenge Works will conduct a Randomized Controlled Trial (RCT) to evaluate how financial incentives should be structured to best support innovation in a prize context. The study is currently seeking collaborators.

The effects of prize structures on innovative performance, NBER (2020)

This 2020 working paper describes a field experiment comparing two prize purse designs. The study concludes that a winner-takes-all compensation scheme generates significantly more novel innovation than a scheme that offers the same total compensation but shares it across the 10 best innovations.

Outcome-Driven Open Innovation at NASA, NASA (2018)

This paper provides several case studies of NASA open innovation activities and maps those activities to the set of outcomes that challenges can help drive alongside traditional tools such as contracts, grants, and partnerships.

How do we measure the value of grand challenges?, Gates Foundation (2014)

This blog post outlines an approach to evaluating the ROI of grand challenges. The Gates Foundation proposes a relatively simple formula, considering value (such as number of projects in development, field knowledge gained, and leveraged external funding) against cost (such as direct project costs, prize funds, and opportunity costs).
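To make the framing concrete, the formula can be sketched as a simple ratio of monetized value to total cost. This is a minimal illustration, not the Gates Foundation’s actual methodology; the category names and dollar figures below are hypothetical, and in practice some value components (such as field knowledge gained) resist easy monetization.

```python
# Illustrative sketch: value-versus-cost ROI framing for a grand challenge.
# All category names and figures are hypothetical examples.

def challenge_roi(value_components: dict[str, float],
                  cost_components: dict[str, float]) -> float:
    """Return the ratio of total estimated value to total cost."""
    total_value = sum(value_components.values())
    total_cost = sum(cost_components.values())
    return total_value / total_cost

roi = challenge_roi(
    value_components={
        "leveraged_external_funding": 2_000_000,   # hypothetical estimate
        "projects_in_development": 500_000,        # monetized estimate
    },
    cost_components={
        "prize_funds": 1_000_000,
        "direct_project_costs": 250_000,
    },
)
print(roi)  # 2.0 — every dollar spent yields an estimated $2 of value
```

A ratio above 1.0 suggests the challenge created more value than it cost, though the hard part in practice is agreeing on what counts in each bucket.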

InnoCentive.com (A), Karim R. Lakhani, Harvard Business School (2008)

This seminal case study describes the rationale for participation by solvers in innovation contests and the benefits that accrue to firms — with some surprising results.

Ready to solve ambitious challenges? If your organization is tackling problems with no obvious solutions or looking to make strategic investments that advance your mission, we’d love to talk. Contact us to set up a conversation, or reach out to any member of our team.

Publication Date

January 29, 2026

Authors

Andrew Wallace
Manager, Communications & Insights