A promising approach for assessing research quality and impact

The Swedish Research Council has just published an evaluation of political-science research in Sweden. The approach used for the evaluation is arguably as interesting as its outcomes.

The Swedish Research Council (Vetenskapsrådet; VR) is the nation’s largest research funder and is mandated by the government to assess the quality and impact of research in Sweden. Such assessments, although undoubtedly important, can be time-consuming and impose a substantial administrative burden. Indeed, VR received precisely this feedback from Swedish universities, prompting it to develop a less cumbersome alternative. In 2019, it commissioned a pilot evaluation of the quality and impact of political-science research in Sweden to test the new approach.

The evaluation was undertaken by a panel that included several political scientists from outside Sweden and two experts from the Swedish policy community. The results, presented in a recent VR report, shed a generally positive light on political-science research in Sweden and are certainly interesting. Just as interesting, if not more so, are the details of the evaluation process, not least the panel’s views on impact assessment and its recommendations for VR, the universities, and others. In a suitably modified form, the process adopted by this pilot evaluation could lend itself to application in other contexts in Sweden and abroad.

The results: excellent but not exceptional

The results, pertaining to the 2014–2018 period, point to the high quality of Swedish political-science research from an international perspective. Swedish publications are well cited and thus quite visible, and many appear in reputable international journals. Further, all 14 departments evaluated clearly make a sincere effort to achieve impact by engaging with stakeholders beyond academia. Intriguingly, the degree of societal impact as gauged by an analysis of case studies correlates, if not perfectly, with the quality of the research as judged by the publications. As the panel states, this insight “may be useful to avoid any kind of perceived contradiction between high-quality research and the potential to generate ‘effect’ in a more applied sense”. On the flip side, the quality of the research and societal impact vary substantially between the departments. Perhaps predictably, larger, more established departments tend to do better than smaller, less-established ones. Importantly, there is a paucity of research that international reviewers unanimously agree breaks new ground.

Having demonstrated a track record of conducting high-quality research, the political-science departments in Sweden (and funders) could now develop strategies to approach the summit. For the departments, encouraging ambition in the choice of research topics, the journals to which research is submitted and the grants that are sought could be one way to go about it. The evaluation panel underscores the need for departments to support early-career researchers with the administrative and financial aspects of grant proposals. Such support can certainly supplement that offered by many grants offices at the central level. But, in our experience, researchers would also benefit from higher-level support with crystallizing a nascent idea, assessing how well it matches the funder’s expectations, and developing a clear and engaging narrative.

The process: a model worth replicating

Traditional assessments require the universities to supply much of the raw material, sometimes in the form of self-evaluation reports, and can also involve site visits. Given the grumblings about the burden imposed by traditional processes, the challenge for VR was to devise a streamlined yet effective model. Once VR had developed such a model in consultation with the universities, the model needed testing. Political science was chosen partly because it enabled the inclusion of a range of universities, big and small, distributed across Sweden. VR obtained much of the requisite information from various databases, whereas the universities were asked to provide primarily case studies for assessing impact. Once the raw material was in place, its evaluation was overseen by the independent panel, with VR staff assisting in data analysis.

Quality

As one of the most tangible outputs of research, publications are naturally the focus of assessments of research quality – whether of an individual researcher or of an institution. The VR model, too, relied on analysis of publications, in two different ways:

  1. Standard global metrics. VR’s in-house staff compared the field-normalised citations of political-science publications from Swedish institutions with those from institutions in the United States as well as 10 western-European and Nordic countries (the general idea behind field normalisation is sketched after this list).
  2. Review of selected publications. The panel invited 34 international political-science experts to review a random selection of 285 publications in terms of their originality, significance and rigour. Each publication was scored against these panel-defined criteria on a scale ranging from “unclassified” (poor quality) to four stars (world-leading quality). Most publications were reviewed by at least two reviewers.
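
To illustrate what the first, metrics-based strand involves, here is a minimal sketch of field-normalised citation scoring, in the spirit of a mean normalised citation score (MNCS). All publication counts and world baselines below are invented for illustration; the report does not spell out VR’s exact bibliometric method.

```python
# Minimal sketch of field-normalised citation scoring (MNCS-style).
# All data here are invented; VR's exact bibliometric method may differ.

from statistics import mean

# Each publication: (citations received, field, publication year).
publications = [
    (12, "political science", 2014),
    (9,  "political science", 2015),
    (3,  "political science", 2017),
]

# Hypothetical world-average citations per (field, year), as one would
# derive from a global citation database.
world_baseline = {
    ("political science", 2014): 10.0,
    ("political science", 2015): 8.0,
    ("political science", 2017): 4.0,
}

def normalised_score(citations: int, field: str, year: int) -> float:
    """Citations relative to the world average for the same field and year."""
    return citations / world_baseline[(field, year)]

scores = [normalised_score(c, f, y) for c, f, y in publications]

# A mean of 1.0 means the set is cited exactly at the world average;
# values above 1.0 indicate above-average visibility.
print(f"Mean normalised citation score: {mean(scores):.2f}")
```

The key point of normalisation is that raw citation counts are never compared directly: each count is divided by the world average for the same field and year, so older publications and citation-heavy fields do not dominate the comparison.
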
Societal impact

The debate about what constitutes impact and how to measure it continues to this day. Scientific significance is perhaps easier to gauge, but how do we assess societal impact, which depends on so many factors beyond the control of academia? For the purpose of this pilot evaluation, the panel relied on 46 case studies from the 14 departments, written following a standard template. Each was read by two panel members and discussed by the full group. The criteria for evaluating impact are naturally more numerous and fluid than those used to evaluate quality. The panel relied on criteria such as visibility in the media and a clear link between the impact and the scholarly work that underpins it. Each case study was assigned a score between 0 (poor) and 5 (excellent).
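
To make the mechanics concrete, the following is a minimal sketch of how double-read case-study scores might be rolled up into a department-level impact score. The 0–5 scale and the double reading come from the report; the departments, the readers’ scores and the averaging rule are invented for illustration.

```python
# Minimal sketch of aggregating double-read case-study scores into a
# department-level impact score. The 0-5 scale and double reading follow
# the report; departments, scores and the averaging rule are invented.

from collections import defaultdict
from statistics import mean

# Each case study: (department, reader 1 score, reader 2 score).
case_studies = [
    ("Dept A", 5, 4),
    ("Dept A", 4, 4),
    ("Dept B", 3, 2),
    ("Dept C", 4, 5),
]

# Average the two readers for each case study, then average per department.
per_department = defaultdict(list)
for dept, score1, score2 in case_studies:
    per_department[dept].append(mean([score1, score2]))

for dept, scores in sorted(per_department.items()):
    print(f"{dept}: mean impact score {mean(scores):.1f} "
          f"over {len(scores)} case studies")
```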

Interestingly, because this pilot evaluation considered scientific quality as well as societal impact, it was possible to ask to what extent the two are related. To answer this question, the panel compared the departments’ overall ranks in terms of research quality and impact. In fact, the panel explored this issue in detail via graphs and statistical analysis (see Appendix 5 of the report). As noted in the discussion of the results above, departments that produce high-quality research also tend to generate greater societal impact.
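
One standard way to quantify the relationship between two sets of ranks is Spearman’s rank correlation. The sketch below applies it to invented department scores; it is meant only to illustrate the kind of comparison involved, not to reproduce the panel’s analysis in Appendix 5.

```python
# Minimal sketch of comparing departments' quality and impact ranks with
# Spearman's rank correlation. All names and scores are invented; the
# panel's actual analysis appears in Appendix 5 of the VR report.

def rank_descending(values):
    """Rank values in descending order (1 = best); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for position, index in enumerate(order, start=1):
        ranks[index] = position
    return ranks

departments = ["Dept A", "Dept B", "Dept C", "Dept D", "Dept E"]
quality_scores = [3.8, 2.9, 3.5, 2.1, 3.1]  # e.g. average publication score
impact_scores = [4.5, 3.6, 4.0, 1.5, 3.5]   # e.g. average case-study score

q_ranks = rank_descending(quality_scores)
i_ranks = rank_descending(impact_scores)

# Spearman's rho for n untied ranks: 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
n = len(departments)
d_squared = sum((q - i) ** 2 for q, i in zip(q_ranks, i_ranks))
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))

print(f"Spearman rank correlation: {rho:.2f}")  # near 1: rankings agree
```

With these invented numbers the correlation comes out at 0.9: strongly positive but not perfect, which is the same qualitative pattern the panel reports for the real departments.
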

Evaluating the evaluation

The panel assessed not only the quality and impact of political-science research but also the adequacy of the evaluation model itself. The following are the take-home messages:

  1. An easier process. VR succeeded in reducing the administrative burden by carrying out the quality-related analyses in-house. The panel found that, compared with departmental evaluations in countries such as the UK (the Research Excellence Framework), VR “managed a light touch”. Of course, a less burdensome model entails less information and, hence, a less comprehensive assessment. As the panel states in the report, this means that it could evaluate the outcomes of research but not the circumstances under which the research was produced.
  2. A fair process. The evaluation of quality relied on citation-based metrics and a random selection of publications. The former have been legitimately criticised as a measure of research quality, whereas the latter could have overlooked some potentially significant publications. Yet, as the panel notes, the judicious use of metrics, aggregated at the level of political science in Sweden as a whole, helps to overcome some of the limitations of their use at the individual or departmental level. And a random selection of publications sets the bar higher than asking departments to submit their preferred publications. Interestingly, the panel suggests that this approach might be rare or even unique, and one that other countries could consider.
  3. A process that might not suit all purposes. The panel points out astutely that how effective an evaluation is depends on the purpose for which it is conducted. VR’s pilot model is appropriate for, say, demonstrating good value for the funds it invests, but it may not suffice for decisions such as where to set up centres of excellence.

In summary, VR’s pilot evaluation not only provides insight into the performance and impact of political-science research in Sweden; it also offers a potentially widely applicable model for undertaking administratively light yet effective and rigorous assessments.

Ninad Bondre (Research Coordinator)

