Researchers’ views on assessment practices

After forming an understanding of how researchers are currently assessed, we wanted to explore their views on these practices, as well as the support their institutions provide.

Researchers are mostly positive about their experiences of evaluation

In this part of the survey, we asked respondents to think about the types of evaluation they had recently experienced, as outlined earlier in Figure 1. Overall, perspectives were positive (Fig. 9), and this trend was consistent across all regions, though at differing levels, ranging from 56-73% net-positive in Europe to 82-88% net-positive in Africa.

Figure 9: Whether or not it was successful, did you feel positively about your experiences of the following types of evaluation? (n=5,291*)

Response options ranged from "It was a positive opportunity to showcase the impact of my work" through "Somewhat positive", "Neutral" and "Somewhat negative" to "It was an administrative burden that poorly reflected the impact of my work".

  • New job: positive opportunity 46%; somewhat positive 29%; neutral 15%; somewhat negative 6%; administrative burden 4%
  • Career advancement: positive opportunity 45%; somewhat positive 30%; neutral 13%; somewhat negative 7%; administrative burden 6%
  • External institutional assessment or ranking exercise: positive opportunity 44%; somewhat positive 27%; neutral 16%; somewhat negative 7%; administrative burden 6%
  • Internal institutional assessment: positive opportunity 41%; somewhat positive 27%; neutral 16%; somewhat negative 8%; administrative burden 8%
  • Grant application and/or reporting process: positive opportunity 34%; somewhat positive 30%; neutral 17%; somewhat negative 13%; administrative burden 6%

*Numbers may not add up to 100% due to rounding
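For readers who want to work with these figures directly, the following is a minimal sketch (in Python) that recomputes sentiment summaries from the Figure 9 percentages. The data is taken from the figure itself; the definition of the net share as combined positive responses minus combined negative responses is an illustrative assumption, as the precise formula behind the net-positive ranges quoted above is not stated here.

```python
# Minimal sketch: recompute sentiment summaries from the Figure 9 data above.
# Percentages come from the figure; the "net" definition (positive share
# minus negative share) is an illustrative assumption, not the report's
# stated method.

fig9 = {
    "New job": (46, 29, 15, 6, 4),
    "Career advancement": (45, 30, 13, 7, 6),
    "External institutional assessment or ranking exercise": (44, 27, 16, 7, 6),
    "Internal institutional assessment": (41, 27, 16, 8, 8),
    "Grant application and/or reporting process": (34, 30, 17, 13, 6),
}

for evaluation, (very_pos, pos, neutral, neg, very_neg) in fig9.items():
    positive = very_pos + pos   # both positive response options
    negative = neg + very_neg   # both negative response options
    print(f"{evaluation}: positive {positive}%, negative {negative}%, "
          f"net {positive - negative:+d}%")
```

Applied to the overall figures, this gives combined positive shares between 64% and 75% across the five evaluation types, consistent with the broadly positive tone described above.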

These positive experiences are perhaps surprising given Part 1 of this white paper, where we found that many activities researchers commonly undertake are not considered within evaluations. When we look at views on assessment practices more broadly, a more nuanced picture emerges. On the whole, results remained positive, with only 20% of researchers indicating dissatisfaction with the way their research is evaluated. However, a significant proportion of respondents expressed neutral views (Fig. 10), and negative views were more prevalent in Europe and Oceania.

Figure 10: To what extent do you agree with the following statements? (n=5,201*)

The way my research is evaluated... (response options: Broadly agree / Neutral / Broadly disagree)

  • Is fair to me: broadly agree 41%; neutral 41%; broadly disagree 18%
  • Is an appropriate measure of my contribution to the academic discipline: broadly agree 49%; neutral 33%; broadly disagree 18%
  • Is an appropriate measure of its wider contribution outside academia: broadly agree 39%; neutral 39%; broadly disagree 23%
  • Is an appropriate measure of its quality: broadly agree 44%; neutral 36%; broadly disagree 20%
  • Is a good use of my time: broadly agree 41%; neutral 39%; broadly disagree 21%

*Numbers may not add up to 100% due to rounding

Within free-text comments (n=1,279), some researchers had substantive criticisms of the assessments they experience. These can be summarised as follows: 

Concerns relating to trust in the process, including a lack of transparency, inadequate experience of evaluators, and bias and subjectivity within evaluation practices.

“The measures used for research evaluation are usually not objective and they can be manipulated so that faculty members can choose their preferred candidates for promotion, hiring etc.”

Switzerland, early career stage, multiple disciplines

An overreliance on publication metrics, leading to a focus on quantity over quality and disadvantaging certain disciplines, particularly niche or interdisciplinary research.

“It is suggested to comprehensively consider the various abilities and qualities of researchers, rather than relying solely on published papers and funding ... Sometimes, overcoming a difficult problem may take a long time during which there may not be any research output.”

China, postgraduate student, Biological & Life Sciences

Frustration that research assessment often judges things a researcher cannot control or influence, relying on the downstream impacts of their work rather than the quality of the work and the effort put into it.

“The evaluation criteria are based on quantitative indicators or on indicators that are not depending on my effort and that I cannot improve by simply working more and better.”

Romania, established career stage, Physical Sciences & Engineering

The difficulty of assessing the societal impact of research, owing to the lack of adequate measures or indicators with broad applicability.

“I think most evaluations miss the genuine impact of research. However, I am aware that that is a very difficult thing to do systematically.”

South Africa, established career stage, Biomedical Sciences

Many of these are in line with our findings in Part 1 of this white paper. However, we were interested to find that a significant number of researchers expressed concerns about qualitative forms of evaluation, which, as discussed, research assessment reform initiatives consider to be fairer indicators. This may go some way towards explaining the positive views of existing (quantitative) approaches, though more research would be needed to validate this theory.

Awareness and perceived impact of initiatives to reform research assessment remain low  

As we have already discussed, there are several initiatives within the research community that advocate for more transparent and inclusive research assessment methods. To understand how these movements have affected institutional practices, we asked researchers to identify whether their institution was signed up to any initiatives, and whether they had observed any changes in research assessment as a result.

Awareness was low: around half of researchers (n=5,039) stated that they did not know whether their institution had signed up to any initiatives. This proportion did not change when the data was segmented by region or career stage.

For those who stated that their institution is signed up to one of these initiatives, we then asked whether they had noticed any changes in the way research assessment is carried out. Responses were mixed, with 19% of researchers perceiving significant or very significant change, 27% perceiving some change, and 40% perceiving little to no change (Fig. 11). 

Figure 11: Do you perceive any changes in the way that research assessment has been carried out in your institution since signing up to any initiatives? (n=1,290*)

  • Very significant change: 5%
  • Significant change: 14%
  • Some change: 27%
  • Very small change: 21%
  • No change: 19%
  • I don't know: 15%

*Numbers may not add up to 100% due to rounding

Within free-text comments (n=366), respondents stated that their institutions acknowledged the need for change and tended to communicate this during evaluation processes. However, they also recognised the difficulty of implementing these changes at all levels, especially when it comes to disincentivising ingrained behaviour on a large scale.

“The DORA principle is invoked whenever we interview people, which is good. However, it is hard to eradicate the instant pull of a Nature, Science or Cell paper on a CV for senior researchers of my generation.”

UK, established career stage, Biological & Life Sciences

“The importance of benefit to society is acknowledged as is the need to change the way we are evaluating. The will is there, but the methods are lacking.”

Netherlands, established career stage, multiple disciplines

“We are instructed to keep a holistic view of research evaluation and not consider metrics like h-index. However, most evaluators still keep that in mind, and let it guide qualitative assessments.”

United States, established career stage, Natural Sciences

There are mixed views on workloads and support for evaluation processes  

When we asked researchers about the workload involved in research assessment practices, along with their access to institutional support, results were quite varied, with around 45% positive responses compared with around 35% negative (Fig. 12). As with other questions, researchers in Europe were the most pessimistic, with negative views rising to around 45%.

Figure 12: To what extent do you agree with the following statements? (n=4,284*)

Response options: Strongly agree / Somewhat agree / Neither agree nor disagree / Somewhat disagree / Strongly disagree.

  • Requirements can be prepared and evidenced easily: strongly agree 16%; somewhat agree 30%; neither 23%; somewhat disagree 20%; strongly disagree 12%
  • I have access to administrative support (e.g. research administrator, librarian) in preparing for research evaluation: strongly agree 17%; somewhat agree 27%; neither 17%; somewhat disagree 17%; strongly disagree 22%
  • I can easily fit the preparation for research evaluation into my workload: strongly agree 18%; somewhat agree 30%; neither 19%; somewhat disagree 20%; strongly disagree 13%
  • I am well supported by my department and/or institution when preparing for research evaluation: strongly agree 17%; somewhat agree 26%; neither 21%; somewhat disagree 18%; strongly disagree 19%
*Numbers may not add up to 100% due to rounding

In the free-text comments (n=922), researchers had several suggestions on how their institutions could help them prepare for evaluations, from which four common themes emerged: 

  • Simplification and standardisation of processes to make it easier for researchers to understand requirements and reduce duplicated work. 
  • Increased use of systems and technology to streamline or automate manual tasks. 
  • Improved administrative support for evaluation activities. 
  • A reduction in workload and institutional bureaucracy in general, giving researchers more time both to conduct research and to prepare for its evaluation.

“We need to feed data to different automated systems (databases) that do not connect to each other. Interconnection among these systems would greatly improve how much time is spent on evaluation reports.”

Brazil, established career stage, Biological & Life Sciences

“Simplifying reporting requirements and providing clear guidelines of what constitutes acceptable evidence of impact would streamline the process.”

India, postgraduate student, Physical Sciences