Analysing failure to uncover the ingredients of success

None of the proposals from two Gothenburg-based universities tasted success with a big Formas call from 2019/2020. A recent analysis comprehensively probes the possible causes to explore ways of improving performance in the future.

Competition for research funding is fierce, with barely two proposals out of ten succeeding with even the “bread-and-butter grants” from funders such as the Swedish Research Council. The situation can be even tougher for non-recurring calls that generate huge interest because of both their profile and the funds on offer. One such call from Formas in 2019/2020, entitled Realising the sustainable development goals, garnered no less than 174 proposals of which only 11 were funded. This translates to a success rate of just over 6 percent.

Clearly, even some really good proposals do not make the cut when the competition is so high. Still, the University of Gothenburg (GU) and Chalmers University would have hoped that at least a couple among the 30 proposals submitted by their researchers would have been successful. Especially given the targeted efforts from the Gothenburg Centre for Sustainable Development (GMV) and GU’s Research and Innovation Office, which arranged an information meeting and a matchmaking session. As it turned out, although five proposals were shortlisted for the panel meeting, none was funded. Was it just tough luck or was something systematically lacking in the Gothenburg proposals? To pose the question differently, what was unique about the proposals that did get funded?

To answer such questions, GMV’s Madelene Ostwald analysed a variety of documents and held interviews with 19 Gothenburg applicants; 5 Swedish researchers who were funded; the chair and vice-chair of the evaluation panel; and 3 other evaluators. In addition, she interacted with the Formas research officer responsible for the call to understand the evaluation and decision-making processes. Finally, she also held discussions with the leaderships of the two universities and with researchers. The results of this extensive analysis were presented in a recently published report (in Swedish). As this type of analysis is not common (or not commonly made public), the results make for interesting and informative reading.

Formas’s evaluation panel assessed each proposal based on five criteria (e.g., methods and implementation, societal benefit), with each criterion being graded using a seven-point scale (1=insufficient, 7=outstanding). Five of the Gothenburg proposals were among the thirty proposals that were shortlisted for further discussion at the panel meeting. Whereas the median overall grade for all the Gothenburg proposals was 4.7, it was 5.6 for the five shortlisted ones. Nevertheless, even being adjudged as very good to excellent was clearly insufficient: the funded proposals received a median overall grade of 6.4, which puts them in the excellent to outstanding category.

The panel’s remarks reveal some of the lacunae in the five proposals that were shortlisted but not funded: for example, the lack of an explicit link to the Sustainable Development Goals (SDGs), an unclear rationale and an unclear methodology. Interestingly, in some cases the panel also remarked on the inattention to ethical issues and gender- and diversity-related aspects, which suggests that researchers ignore such aspects at their peril.

According to the call text, interdisciplinary collaboration was encouraged but was not a prerequisite. However, six out of the eleven funded projects did come from centres or institutions with a strongly multidisciplinary character. Some of the Gothenburg proposals that scored relatively highly were also associated with interdisciplinary environments. According to Ostwald, one explanation might be that such environments foster the development of scientifically strong proposals for this type of call. I concur with this explanation. In my view, the higher quality is assured by not only the scientific breadth in such environments but probably also their expertise in communication and engagement. Indeed, the call emphasised collaboration with beneficiaries, and the interviewed panel members underscored the importance of well-written proposals.

Notably, neither the age, gender and academic level of the applicants nor the number of project members from outside the host institution seems to have mattered much in the evaluation. The data suggest that having multiple participating researchers enhanced the chances of success, but not if it increased the disciplinary breadth beyond a limit. I interpret this to imply that although some breadth is essential for SDG-relevant research, having too many disciplines can inhibit the development of a coherent narrative. Interestingly, the analysis did not find any evidence to suggest that the absence of a “star” researcher was a lacuna.

Ostwald’s analysis does not identify anything that unambiguously distinguishes the successful proposals from the unsuccessful ones. However, some attributes tie the successful proposals loosely together. For example, many proposals addressed problems with an international or global flavour, and many involved a group of cohesive and productive researchers. Again, the panel’s remarks hold clues of what worked. The funded proposals were deemed to have been innovative, tackling significant research questions or outstanding challenges, and highly relevant for meeting the SDGs.

Interestingly, none of the five successful researchers interviewed by Ostwald had sought support from their heads, financial or other administrators and grants/innovation offices. Perhaps these PIs had considerable experience of applying for complex funding calls. They might also happen to have exceptional proposal-writing skills. Both of these possibilities would need to be confirmed by a close reading of their proposals and additional interviews with them. I do know that one successful group (presumably not interviewed by Ostwald) sought proposal-development support from a private consultancy that I worked for until late last year. Not only did I help the group frame the proposal at the outset, but I also provided extensive feedback and thorough edits. I would definitely not underestimate the value that grants/innovation offices can add, but this relies on their having the right combination of expertise and experience. If that is lacking, then it needs to be developed.

Finally, geography does seem to have been destiny: the successful PIs come from either institutions in the Stockholm-Uppsala region or Lund University. One possibility that came up in Ostwald’s discussions with the two Gothenburg-based universities is path dependence. In the context of a call for funding, this could be defined as the tendency of past performance to determine present performance. I find path dependence to be a plausible explanation. Thus, geography may be a determinant only incidentally: the real determinant might be mature multidisciplinary host institutions with a proven record of stakeholder-oriented research and involvement in the SDG process.

To conclude, conducting a study of this nature is obviously time consuming, but the evidence of this report suggests that it is worthwhile whenever feasible. I should note that besides its analysis of the determinants of success and failure, the report also provides valuable insight into the evaluation process itself and offers several recommendations to universities, researchers and grants/innovation offices. It also comments on broader issues to do with the allocation of research funding. Although those aspects are outside the scope of this post, I encourage you to read the full report or at least this press release.

Ninad Bondre (Research Coordinator)

2 comments

  1. Analysing failure (or for that matter, success) by studying evaluation reports is indeed an important aspect of grant writing. For researchers, this is often made difficult by many Swedish funders’ peculiar strategy of not providing any feedback at all on the bulk of submitted proposals. We at the support services can make an important contribution by systematically examining outcomes from “bread and butter” calls from, e.g., VR and Formas, and at least providing a background to the scores and rejections.

    1. Thanks Stefan, I agree. Authors whose research articles are declined publication receive an explanation from the editor and, if the papers were rejected after review, the reviewers’ comments too. I don’t see why it should be different with grant proposals. The European Research Council (ERC) does a good job in this regard — the panels’ summaries and comments from individual reviewers are quite useful, more often than not. Providing meaningful feedback is an art, and at least the public funders in Sweden could do more in this regard.
