Why do researchers often prefer safe projects to risky ones? Explaining risk aversion in science

A mathematical framework that draws on the economic theory of hidden action models provides insights into how the unobservable nature of effort and risk shapes researchers’ strategies and the incentive structures within which they work, according to a study published August 15 in the open-access journal PLOS Biology by Kevin Gross of North Carolina State University, USA, and Carl Bergstrom of the University of Washington, USA.

Scientific research requires risk-taking, as the most cautious approaches are unlikely to lead to the most rapid breakthroughs. Yet much funded scientific research plays it safe, and funding agencies lament the difficulty of attracting high-risk, high-reward research projects. Gross and Bergstrom adapted an economic contracting model to explore how the unobservability of risk and effort discourages risky research.

The model considers a hidden action problem, in which the scientific community must reward discoveries in a way that encourages effort and risk-taking, while protecting researchers’ livelihoods from the unpredictability of scientific outcomes. The difficulty is that incentives to motivate effort clash with incentives to motivate risk-taking: a failed project may be evidence of a risky undertaking, but it could also be the result of simple laziness. As a result, the incentives needed to encourage effort actively discourage risk-taking.
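The tension above can be made concrete with a minimal numerical sketch. All the numbers, probabilities, and the square-root utility function below are illustrative assumptions, not parameters from Gross and Bergstrom's model: a risk-averse scientist chooses between a safe project (frequent, incremental successes) and a risky one (rare, major successes) under a reward scheme that pays a fixed bonus per success, sized to make effort worthwhile.

```python
import math

def expected_utility(p_success, reward, base_pay, effort_cost):
    # Risk-averse utility u(w) = sqrt(w); effort cost subtracted in utility units.
    u_success = math.sqrt(base_pay + reward)
    u_failure = math.sqrt(base_pay)
    return p_success * u_success + (1 - p_success) * u_failure - effort_cost

# Hypothetical numbers for illustration only.
base_pay, effort_cost = 50.0, 0.5
reward = 30.0  # bonus paid per observed success

safe  = expected_utility(p_success=0.9, reward=reward,
                         base_pay=base_pay, effort_cost=effort_cost)
risky = expected_utility(p_success=0.3, reward=reward,
                         base_pay=base_pay, effort_cost=effort_cost)

# Scientific value: the risky project's discoveries are worth more when they land.
value_safe  = 0.9 * 1.0   # frequent but incremental
value_risky = 0.3 * 5.0   # rare but major

print(safe > risky)              # True: the scientist prefers the safe project
print(value_risky > value_safe)  # True: but the risky one advances science faster
```

Because success is the only observable signal of effort, a per-success bonus that is large enough to motivate effort automatically favors the project that succeeds most often, even when the rarer success is worth far more to science.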

Scientists respond by working on safe projects that generate evidence of effort but do not advance science as rapidly as riskier projects would. A social planner who values scientific productivity over the welfare of researchers could remedy the problem by rewarding major discoveries with amounts sufficient to induce high-risk research, but doing so would expose scientists to a degree of risk to their livelihoods that would ultimately leave them worse off. Because the scientific community is roughly autonomous and constructs its own reward schedule, the incentives that researchers are willing to impose on themselves are inadequate to motivate the scientific risks that would best accelerate scientific progress.
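Why the planner's remedy leaves scientists worse off can also be sketched numerically. The figures and square-root utility below are illustrative assumptions, not the paper's parameters: holding expected pay fixed, concentrating pay on rare major discoveries is a mean-preserving spread of income, which a risk-averse scientist dislikes by Jensen's inequality.

```python
import math

# A risky project yields a major discovery with probability p_big,
# rewarded with a large prize on top of base pay; otherwise base pay only.
p_big = 0.3
base, prize = 50.0, 120.0

# Expected pay under the prize scheme: 0.3 * 170 + 0.7 * 50 = 86.
expected_pay = p_big * (base + prize) + (1 - p_big) * base

# Risk-averse utility u(w) = sqrt(w): compare the risky pay lottery
# against a flat salary with the same expected value.
eu_prize_scheme = p_big * math.sqrt(base + prize) + (1 - p_big) * math.sqrt(base)
eu_flat_salary  = math.sqrt(expected_pay)

print(eu_flat_salary > eu_prize_scheme)  # True: the flat salary is preferred
```

This is the autonomy problem in miniature: a community of risk-averse researchers setting its own reward schedule will not voluntarily adopt the high-variance prize structure a productivity-maximizing planner would choose.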

In deciding how to reward discoveries, the scientific community must confront the fact that reward schemes that motivate effort inherently discourage scientific risk-taking, and vice versa. Because the community must motivate both scientific effort and risk-taking, and because effort is costly to researchers, the community inevitably establishes a tradition that encourages more conservative science than would be optimal for maximizing scientific progress, even when risky research is no more burdensome than safer lines of research.

The authors add: “Commentators regularly bemoan the dearth of high-risk, high-reward scientific research and assume that this is evidence of institutional or personal failings. We argue here that this is not the case; instead, scientists who do not want to risk their careers will inevitably choose projects that are safer than those their funders would prefer.”