How are researchers currently assessed?

Research assessment is a multifaceted concept, encompassing various types of evaluation, each with its own methodology. To gain a benchmark understanding of researchers’ experiences, we asked them to identify the types of assessment they participate in, how these are measured, and which aspects of their work are considered.
Research assessment means many different things

To put the rest of our survey into context, we first asked researchers to select which types of evaluation they had recently experienced (Fig. 1).
Figure 1: Over the last five years, have you been evaluated for any of the following purposes? (n=5,479)

Career advancement | 45%
Grant application and/or reporting process | 62%
Internal institutional assessment | 51%
New job | 29%
External institutional assessment or ranking | 27%
Other | 3%
Grant application and/or reporting processes were the most frequently reported type of evaluation across all demographics (62%). However, as outlined in our survey background and aims, we can also see the wide scope of what researchers experience: each type of assessment is conducted by different stakeholders and has differing processes, desired outcomes, and levels of formality. We will refer back to this throughout the rest of this analysis.
For most researchers, metrics continue to dominate

Initiatives for research assessment reform have consistently challenged the overreliance on journal and publication metrics as a proxy for research quality.
When used responsibly and in context with other indicators, metrics can provide key insights to inform evaluations – these can include:
- Publication metrics, such as number of publications and citations
- Indicators of societal impact, often measured through “alternative metrics”
- A researcher’s quantifiable contributions to their institution and the wider research community (e.g. value of grant income, number of graduate students supervised).
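To make the first of these concrete: many evaluation systems condense a publication and citation record into a single number. The sketch below computes one widely used example, the h-index, from a list of per-paper citation counts (a minimal Python sketch; the citation figures are invented for illustration):

```python
def h_index(citations: list[int]) -> int:
    """Return the h-index: the largest h such that at least
    h papers have been cited at least h times each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for eight papers.
papers = [45, 20, 12, 9, 6, 4, 1, 0]
print(h_index(papers))  # 5 – five papers each have at least 5 citations
```

The appeal of such indicators is clear – they are cheap to compute and easy to compare – which is precisely why the initiatives discussed next insist they be read in context rather than in isolation.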
However, metrics cannot capture the full scope of a researcher’s work, and The Leiden Manifesto, DORA and CoARA all advise using quantitative indicators only within the context of a larger qualitative narrative.3,4,5
Despite this industry awareness, researchers in our survey still identified metrics as the dominant factor in how their work is assessed (Fig. 2). This pattern held across all regions and disciplines except the UK, where 57% of researchers reported evaluation methods that were equally quantitative and qualitative, mostly qualitative, or entirely qualitative.
Figure 2: At present, what is the mix of quantitative and qualitative methods for how your work is evaluated? (n=3,827*)

Entirely uses metrics | 16%
Mostly uses metrics | 39%
Approximately equal | 28%
Mostly uses qualitative | 10%
Entirely uses qualitative | 2%
I don't know/not applicable | 4%

*For this analysis we removed researchers who are not actively employed (identifying as postgraduate students, retired or other)
**Numbers may not add to 100% due to rounding
This points to a disconnect between desired evaluation frameworks and what researchers feel happens in practice, a point that was evident in the free-text responses (n=1,011). Conducting comprehensive evaluations is often labour-intensive, and many researchers acknowledged a need for indicators and metrics that support quick judgements where assessment is done at scale.
Responses to questions later in the survey flagged other potential reasons for this discrepancy, including concerns about bias in qualitative review and the difficulty of shifting ingrained behaviours.
“The evaluation process is principally quantitative and bibliometric, there is no possibility for qualitative evaluation and impact of research.”
Argentina, early career stage with tenure/permanent position, Biological/Life Sciences
“Institutes are more keen on numbers, as it has a direct impact on their rankings. Once the rankings are stabilized, then they think about the qualitative elements of the evaluation.”
India, mid-career stage, Social Sciences & Humanities
Journals and publication metrics remain prominent when assessing research outputs

The survey then delved deeper into four areas of research assessment, the first being research outputs. When we asked researchers which types of outputs they create and which types are evaluated, the results were as expected, with journals leading the way, followed by conference contributions and books (Fig. 3). However, the survey also showed that several other output types are commonly created by researchers across all regions but are unlikely to be used within evaluations (particularly datasets/databases and reports for external bodies).
Figure 3: Which of the following research output types have you used to disseminate your research, and which of these are part of any evaluation of your work? (n=4,687)

Output type | Have used, and IS something I am evaluated on | Have used, but IS NOT something I am evaluated on | Have not used
Journals | 90% | 7% | 3%
Conference contributions and proceedings | 61% | 28% | 11%
Books/book chapters | 57% | 20% | 22%
Reports for an external body (e.g. for a private company, government department) | 23% | 30% | 47%
Patents | 22% | 13% | 65%
Data set/databases | 22% | 34% | 44%
Software code | 12% | 23% | 65%
Design or visual media (e.g. film, computer game) | 11% | 19% | 71%

* Numbers may not add up to 100% due to rounding
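One way to read Fig. 3 is to ask, for each output type, what share of the researchers who actually produce it receive no credit for it in evaluations. The sketch below computes that share from the percentages in the figure (a minimal Python sketch; it simply restates the survey data, so rounding in the published figures carries through):

```python
# From Fig. 3: (% used and evaluated, % used but not evaluated).
outputs = {
    "Journals": (90, 7),
    "Conference contributions and proceedings": (61, 28),
    "Books/book chapters": (57, 20),
    "Reports for an external body": (23, 30),
    "Patents": (22, 13),
    "Data sets/databases": (22, 34),
    "Software code": (12, 23),
    "Design or visual media": (11, 19),
}

for name, (evaluated, not_evaluated) in outputs.items():
    users = evaluated + not_evaluated  # % of respondents who use this output
    share = not_evaluated / users      # of those users, share not evaluated
    print(f"{name}: {share:.0%} of users are not evaluated on it")
```

On these numbers, more than half of the researchers who share data sets or write reports for external bodies see no credit for them in evaluations, compared with fewer than one in ten for journal articles.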
We then asked about the indicators used to evaluate these research outputs (Fig. 4). Publication metrics were by far the most commonly cited, surpassing all other options by a significant margin, followed by which publisher the output appears with and whether it is published in English or a domestic language. “Other” indicators included Journal Impact Factor and rankings, the number of publications, and authorship position (some of which overlap with the presented options).*
Figure 4: Which of the following indicators are used to evaluate your research output? (n=4,369)

Publication metrics e.g. citations or article level impact | 85%
Whether the outputs are published by specific publishers | 31%
Whether the outputs are published in English or a domestic language | 28%
Whether the outputs include international co-authors | 27%
Whether the outputs are open access | 23%
Whether the outputs are interdisciplinary | 18%
Alternative metrics e.g. policy citation, social media mentions | 16%
Other | 6%
There are no specific indicators that assess the quality of my research outputs | 5%
I don't know | 2%
These two charts further emphasise the disconnect identified above. DORA, for example, recommends that funders and institutions highlight “that the scientific content of a paper is much more important than publication metrics” and “consider the value and impact of all research outputs”6 – but it is clear that there is still more work to be done.
* Our intention was to include “Whether the journals you publish in have a certain minimum level of Impact Factor” as a separate option within this survey question; however, a technical error meant that no data was collected. Given the high prevalence of publication metrics in this data, along with common acknowledgement of the overreliance on Journal Impact Factor in research assessment, our expectation is that this option would have been widely selected by respondents.
Contributions to society are highly regarded, but with some caveats

Over recent years, positive contributions to society – or "the public good” – have become increasingly important criteria for research assessment, with researchers, institutions and funding bodies expected to demonstrate the ways in which they help address large societal challenges such as the UN Sustainable Development Goals (SDGs).7 It is also common for researchers to collaborate across disciplines and with industry/community organisations, and to participate in community engagement activities to maximise the societal impact of their work.
In many cases, this is now embedded into assessment practices:
- Grant applications often require researchers to articulate how their work will have a positive contribution to society, and how affected communities or organisations are involved in the design of the research.
- National evaluation exercises such as the Research Excellence Framework (REF) in the UK, Excellence in Research for Australia (ERA), and the Hong Kong Research Assessment Exercise include societal impact as a core component.
- University rankings from organisations such as Times Higher Education and QS often include SDG-based rankings.
This is reflected in our survey results, which found that 75% of researchers are assessed on at least one type of contribution to “the public good” (Fig. 5). This differed somewhat between regions and disciplines, ranging from 70% in Europe and the Americas to 82% in Asia, and from 68% in Mathematics & Computer Science to 80% in Biological/Life Sciences and Biomedical Sciences.
Figure 5: Is your work currently evaluated on any of the following “public good”* areas? (n=4,554)

Research that provides a national or local benefit | 40%
Public good of your research e.g. any effect on society or the economy | 32%
Innovation activity e.g. patents, start-ups, spin-outs, social enterprise | 28%
Research that benefits specific groups (e.g. patients) | 28%
Industry collaborations | 26%
Research that benefits global challenges (e.g. UN SDGs) | 25%
Public engagement activity | 22%
The dissemination activities you carry out to ensure end-user access | 20%
Influence on policy | 18%
Co-production of research directly with communities | 15%
Other | 2%
My research is not evaluated for its contribution to any of these aspects | 25%

* “Public good” was defined throughout the survey as “how your research contributes to and influences society, the economy, and the environment beyond academia”
However, as with research outputs, we observed a discrepancy between the actions researchers commonly undertake and what is included in evaluations. Notably:
- 57% of researchers undertake public engagement activities (n=3,839), but only 22% are assessed on these.
- Almost all researchers engage in dissemination activities to ensure end-user access to their research (n=4,557) – most commonly publishing research open access (89%) – but only 20% are assessed on this.
- 25% of researchers are not evaluated on these types of activities at all.
Free-text comments (n=926) also highlighted variation between different types of assessment, with grant applications and national assessment exercises providing more opportunity to highlight positive contributions to society. Some researchers noted that these activities – while highly regarded by their institutions – are difficult to measure and standardise, and are therefore not often included in the more “informal” day-to-day evaluations.
“My institution values knowledge transfer and public engagement activities, but has no way of evaluating them.”
Portugal, other career stage, Social Sciences & Humanities
“These activities are usually regarded positively but often not included in evaluation of my research for research advancement. Then almost only metrics regarding publications count. Contrastingly, in grant applications it is often possible to elaborate on less 'numeric' measurements of the value of my research.”
Denmark, established career stage, Clinical Medicine & Healthcare
“In the UK, most of this comes via the 5-6 yearly REF cycle, in the form of impact case studies. Being able to provide or contribute to a case study is viewed very positively.”
United Kingdom, established career stage, Biological/Life Sciences
Wider contributions to research are commonly considered within evaluations, with regional differences

When thinking about research assessment in a holistic manner, we also wanted to understand how researchers’ wider contributions are considered, including activities relating to their research (Fig. 6) and other contributions to the research community (Fig. 7). Our survey found that much of this type of work is included within evaluation practices, with few respondents reporting not being assessed on any of these measures.
Figure 6: Are you currently evaluated on any of the following regarding your research? (n=4,500)

Supervising graduate students to completion | 59%
Invited lectures (e.g. as plenary/invited speaker at a conference) | 49%
Contributing to departmental and/or institutional research initiatives/strategies | 48%
External recognition (e.g. via prizes or prestigious fellowships) | 48%
How much grant income I receive | 46%
Developing international collaborations | 43%
Number of grant applications I make | 39%
Developing interdisciplinary collaborations | 37%
Other | 2%
I am not evaluated on any of these activities | 11%
Figure 7: Are you currently evaluated on any of the following contributions you make to the wider research community? (n=4,500)

Teaching/mentoring | 66%
Carrying out peer review | 50%
Organising conferences and/or training events | 48%
Acting in an editorial role for a journal or book series | 47%
Leadership in national research (e.g. on advisory committees) | 38%
Taking part in initiatives to improve the research environment (e.g. projects regarding ethics, integrity or diversity) | 28%
I am not evaluated on any of these activities | 14%
Other | 2%
Supervising graduate students, teaching and mentoring, and carrying out peer review were frequently assessed in all regions. However, this set of questions also saw some regional variations, indicating differences in practices between countries. For example:
- Assessment on grant income varied widely, ranging from 67% in Oceania to 54% in Europe, down to 28% in Africa (though this could in part reflect the availability of research funding in these regions).
- External recognition, invited lectures and leadership in national research were more commonly assessed in Europe, Oceania and the Americas than in Asia and Africa.
- 17% of European researchers stated that they are not evaluated on this type of work at all – compared with 10-13% in other regions.
“Research culture” is evaluated and understood differently in different regions

In recent years, there has been an increasing focus on the development of healthy research cultures, with researchers being encouraged to take part in activities that contribute to this.8 This is a broad term, and for this survey we gave the definition adopted by the Royal Society: “Research culture encompasses the behaviours, values, expectations, attitudes and norms of our research communities. It influences researchers’ career paths and determines the way that research is conducted and communicated.”9
When we asked researchers whether they are assessed on this, the results varied by region, with respondents in Europe and the Americas much less likely than those in Africa and Asia to think these types of activities are included in evaluations (Fig. 8).
Figure 8: To what extent does your contribution to "research culture" form any part of how you are evaluated in your role? (n=4,456)

Region | A great deal | Somewhat | Not at all
Whole survey, n=4,456 | 21% | 51% | 28%
Africa, n=491 | 39% | 47% | 14%
Asia, n=1,474 | 26% | 56% | 18%
Oceania, n=113 | 19% | 54% | 27%
Americas, n=1,015 | 18% | 51% | 31%
Europe, n=1,363 | 13% | 45% | 42%
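As a consistency check on Fig. 8, the whole-survey figure can be recovered as the respondent-weighted average of the regional results (a minimal Python sketch using the sample sizes and “a great deal” percentages shown above; small deviations are expected because the published percentages are rounded):

```python
# From Fig. 8: region -> (sample size, % answering "a great deal").
regions = {
    "Africa": (491, 39),
    "Asia": (1474, 26),
    "Oceania": (113, 19),
    "Americas": (1015, 18),
    "Europe": (1363, 13),
}

total_n = sum(n for n, _ in regions.values())
weighted = sum(n * pct for n, pct in regions.values()) / total_n
print(f"n={total_n}, weighted 'a great deal' share: {weighted:.1f}%")
# n=4456, weighted share: 21.5% – consistent with the 21% reported survey-wide
```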
However, an analysis of the free-text comments in this part of the survey (n=2,130) showed that researchers consider a wide range of activities as contributing to a healthy research culture, and many of the most commonly cited (mentoring, engaging in peer review, public engagement) overlap with activities already mentioned in previous categories. Relatively few comments discussed diversity, equity and inclusion, which features strongly in the UK’s view of research culture.
It is clear that there is not yet a common global understanding of research culture, which makes it difficult to interpret these results without additional context. This may be a topic to explore in more depth within future research.
References
3 Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a
4 DORA. (2024). Guidance on the responsible use of quantitative indicators in research assessment. https://doi.org/10.5281/zenodo.10979644
5 CoARA. (2022). Agreement on reforming research assessment. Retrieved from https://coara.eu/agreement/the-agreement-full-text/
6 DORA. (n.d.). San Francisco Declaration on Research Assessment. https://sfdora.org/read/
7 Dotti, N. F., & Walczyk, J. (2022). What is the societal impact of university research? A policy-oriented review to map approaches, identify monitoring methods and success factors. Evaluation and Program Planning, 95, 102157. https://doi.org/10.1016/j.evalprogplan.2022.102157
8 UKRI. (2022, December 8). We all have a role in building a positive research culture. https://www.ukri.org/blog/we-all-have-a-role-in-building-a-positive-research-culture/
9 The Royal Society. (n.d.). Research Culture. https://royalsociety.org/news-resources/projects/research-culture/