Incentivizing the game industry to solve for education

Balancing requirements and incentives to attract innovators.


Numerous challenge design questions surfaced during the research phase for the U.S. Department of Education EdSim Challenge, which called upon augmented reality (AR), virtual reality (VR), and gaming developers to produce educational simulations that would strengthen academic, technical, and employability skills. In 2012, a Gartner study found that the gaming industry had overtaken the movie industry, earning $79 billion globally. But despite growth in consumer adoption, especially among youth, there was limited innovation in simulation development for the K-12 and postsecondary education markets. The growing base of experts in AR, VR, and immersive game technologies either did not recognize the education market opportunity or found commercial gaming far more lucrative.


Open innovation challenge design is both an art and a science that requires balancing the interests of the sponsor and the solver through motivation and incentives. When these are in harmony, they inform a suite of highly interrelated elements — including the call to action, criteria, timeline, terms and conditions, intellectual property stance, prize amounts and structure, submission form, jury selection, and judging rubrics — to support the overarching goal and desired outcome.

In addition to the clarity of the problem statement, the level of investment in challenge design is a good indicator of how successful a prize competition will be. The market is flooded with platforms that aim to democratize open innovation, and better access to tools and crowds is a good thing. But without deliberate challenge design, even the strongest problem statement is not guaranteed to meet its objectives.

Thoughtful challenge design first addresses why the problem has not yet been solved. Some problems are genuinely difficult, expensive, or even dangerous to solve. In other cases, the solver base might be unaware of the problem, uninterested in it, or unaware that their current work has applicability in other fields. In rare situations, there are simply not enough solvers with the required expertise.

The answers to that question are then balanced against incentives. People enter prize competitions for a variety of reasons. A common framework, inspired by the age of exploration and popularized by the U.S. Prize Authority, is Good, Glory, Guts, and Gold: good speaks to intrinsic motivation, glory to external validation, guts to the challenge itself, and gold to the resources (both monetary and nonmonetary) offered as incentives. Any given challenge might have one or more primary motivators, and solvers tend to be rational, weighing the benefits of allocating their time and energy in pursuit of a prize.

The challenge design itself can serve as an additional motivator — or deterrent — to participation. For example, onerous criteria may result in a smaller pool of submissions. This might be acceptable to the sponsor, but if the sponsor is seeking a large number of solutions from a cross-section of solvers, it would be wise to reduce the barriers to entry or reconsider the challenge timeline and incentives in order to attract more solvers. Intellectual property (IP) stance is also a hot-button issue for solvers. While many prize competitions allow the innovator to keep the IP, sponsors frequently include protections against future claims or a license to the solution and its derivatives. Solvers weigh these trade-offs against the prize purse. If the purse is too small, an early-stage team might feel that it has more to gain by not entering.


Simulation development, even at prototype stage, is a very costly endeavor — to the tune of over $1 million. We quickly discovered that to stimulate interest in developing education simulations, we would need to communicate the market opportunity (today’s students learn from textbooks, but the future will include simulations), the cash prize purse would need to be significant enough to offset the costs of development, and nonmonetary incentives would need to have real value to participants.

While traditional research and intellectual property searches are helpful tools for understanding what has been done, engaging with real solvers is the best way to identify the combination of incentives — Good, Glory, Guts, Gold — that will inspire participation. Early in the process, we gained input and buy-in from influential stakeholders, including educators, the game industry, academia, big tech, and hiring organizations, through a formal convening, expert panels, and public feedback. In addition to providing valuable information, these conversations fostered relationships that paid off later in the program: adding to the government-provided $680,000 prize purse, respected organizations such as IBM, Microsoft, Oculus, and Samsung contributed software and gear, including recently released VR headsets and free cloud services. These resources sent a clear signal to the market that there was an opportunity to transform learning through commercial game-quality simulations.

But what exactly were we asking participants to submit? And what would the parameters be for the winning solution? The design and development of a working simulation has many phases, and while the prize purse was significant, solvers made it clear that the requirements of the first-round submission would need to be achievable enough to merit the effort.

We designed the flow to include two rounds of judging, each requiring a different degree of fidelity. The open submissions round would seek a detailed concept and design, including a description of the concept, simulation experience, and learning objectives; a development plan and technical considerations; early thinking around implementation and scaling; and storyboards or visual mockups. During this round, the jury would narrow the pool to five finalists, each of whom would receive $50,000, hardware and software from the sponsors, and access to a virtual accelerator to support development of a playable prototype to be presented at a demo day. The second and final round of judging would require a playable prototype and detailed plans, including a description of the learning outcomes and assessment metrics as well as interoperability considerations and open source elements. Following the demo day, the grand prize winner would take home $430,000.


In September 2017, the jury, which hailed from organizations including Ford, Microsoft, and Girls Who Code, had the chance to immerse themselves in fully functional simulations during a demo and pitch day. From a hands-on visit to the operating room to an exploration of astronomy concepts, jurors explored a wide range of educational experiences that teach career and technical skills. The winner was Osso VR, a surgical training platform that enables users to practice cutting-edge techniques through realistic, hands-on simulations, bridging the gap between career exploration and career preparation. By late 2018, Osso VR's team had raised $2.4 million in capital and launched a partnership with eight American medical residency programs, including those at Columbia, UCLA, Harvard, and Vanderbilt. Smart Sparrow, a finalist in the same challenge, received a $7.5 million investment from global education nonprofit ACT.

This case study is adapted from Luminary Labs CEO Sara Holoubek’s chapter in “Perspectives on Impact: Leading Voices On Making Systemic Change in the Twenty-First Century.” Learn more about open innovation outcomes: view highlights from our 2018 survey of prize recipients.