Towards Responsible Research Assessment: The Launch of DORA’s Practical Guide and Its Global Implications
OUS Academy in Switzerland
Aug 13
Author: Ali Rezaei
Affiliation: Independent Researcher
Abstract
In a significant step toward rethinking how academic research is evaluated, the San Francisco Declaration on Research Assessment (DORA) has introduced a new Practical Guide to implementing responsible research assessment. While DORA has been gaining momentum since its launch in 2012, this newly published guide marks the first detailed framework to help institutions apply DORA's values in everyday practice. This article examines the background of DORA, outlines the structure of the Practical Guide, explores how it is being adopted globally, and reflects on the broader implications for the future of academic evaluation systems. The paper aims to offer a comprehensive view of how DORA's Practical Guide could transform scholarly environments by promoting fairness, inclusivity, and diverse contributions in research.
1. Introduction
Academic institutions have long relied on a narrow set of metrics to evaluate research quality. Indicators such as the Journal Impact Factor have been treated as shorthand for scholarly excellence, often overshadowing the actual content and significance of the research itself. This reliance has drawn widespread criticism across the academic community.
In response, the San Francisco Declaration on Research Assessment (DORA) was introduced in 2012. Its core message was simple but powerful: research should be evaluated based on its quality and impact, not on where it is published. Since its release, DORA has gained global recognition, with thousands of universities, publishers, and funding agencies pledging their support.
Now, over a decade later, DORA has taken a bold step forward by launching a Practical Guide to help institutions implement its principles. Released in mid-2025, the guide provides concrete tools and strategies to reform how researchers and their work are assessed. This article explores the guide’s content, its significance, and how it might reshape the landscape of global research evaluation.
2. Understanding DORA: A Shift in Thinking
DORA began with a clear goal: to improve how the outputs of scientific research are assessed. It challenged the overuse of impact factors and called for a broader and more responsible set of criteria in evaluating researchers and their work. DORA emphasized transparency, equity, and the recognition of a wide range of scholarly contributions.
Over time, these ideas resonated across disciplines and regions. Research institutions and funding agencies began adopting more inclusive policies, recognizing outputs like datasets, policy briefs, software, and public engagement alongside traditional journal articles.
Despite its growing popularity, one challenge remained: while many supported DORA’s values, few knew how to put them into practice. Institutions lacked clear roadmaps for moving away from traditional metrics. This gap led to the creation of the Practical Guide—a hands-on resource designed to turn ideas into action.
3. The Practical Guide: A New Chapter for DORA
The release of DORA’s Practical Guide in 2025 marks a pivotal moment. Unlike earlier declarations or position statements, the guide is built for action. It is meant for research-performing organizations that are ready to change how they evaluate academic work.
3.1 Structure and Content
The guide is built around practical activities. It helps institutions:
- Form leadership teams and working groups focused on assessment reform.
- Identify the critical stages where evaluation takes place, such as hiring, promotion, and funding decisions.
- Develop communication plans to engage stakeholders at all levels.
It includes step-by-step instructions, case studies, and adaptable templates that institutions can use to shape their own policies.
3.2 Learning from Real-World Examples
The guide draws on examples from universities in countries like Canada, Denmark, and Japan. These institutions have experimented with narrative-based CVs, expanded definitions of scholarly output, and inclusive committee structures. By sharing their experiences, the guide highlights that there is no single model for success. Each institution must adapt DORA’s principles to fit its culture, values, and capacity.
4. Why This Guide Matters
The Practical Guide addresses a long-standing issue in academic reform: the disconnect between values and implementation. While many organizations agree in principle that metrics should not dominate research evaluation, few have known how to move forward.
This guide fills that void by offering clarity and structure. It shows institutions that reform is not only possible, but manageable. More importantly, it supports change that is grounded in fairness, context, and recognition of diverse contributions.
For example, instead of judging a researcher by the number of articles published in high-impact journals, institutions are encouraged to consider the relevance of their work to society, their role in collaboration, their efforts in mentoring, and their openness in sharing data and methods.
5. Early Adopters and Global Reach
The launch of the guide was accompanied by global online panels targeting different regions, including Asia-Pacific, Europe, Africa, and the Americas. These sessions were not only informative—they helped build communities of practice.
Already, several institutions have started aligning their internal policies with the guide. Universities are revising promotion criteria, national funders are redefining evaluation rubrics, and hiring panels are beginning to favor qualitative narratives over raw publication counts.
One of the most promising aspects of the guide is its adaptability. It does not prescribe a one-size-fits-all model. Instead, it encourages institutions to start with small, achievable steps. Whether through revising job postings, creating inclusive review committees, or training evaluators on bias, the guide supports progress at every level.
6. Challenges Ahead
Although the Practical Guide offers a strong foundation, the road to reform is not without obstacles.
6.1 Cultural Resistance
Deep-rooted academic norms can be slow to change. Many senior academics have built their careers under traditional systems and may be hesitant to shift focus. In some cases, faculty may worry that new methods of assessment are less objective or more open to interpretation.
6.2 Resource Limitations
Implementing new systems takes time, staff training, and administrative support. Smaller institutions, particularly in low-resource settings, may face difficulties in dedicating personnel to this work. However, the guide’s flexible format allows for gradual adoption based on available capacity.
6.3 Measuring Impact
Ironically, reforming research evaluation raises the question of how to measure success. Institutions will need to collect data on satisfaction, equity, research diversity, and outcomes over time. These feedback loops are essential for refining and sustaining reform efforts.
7. The Bigger Picture
DORA’s Practical Guide is more than just a policy document—it is a catalyst for cultural change. It encourages us to rethink what we value in research and to reward contributions that have long gone unnoticed.
In many ways, the guide also aligns with broader movements in academia, such as the push for open science, inclusion, sustainability, and interdisciplinary collaboration. By expanding the definition of scholarly success, institutions can foster environments that support creativity, integrity, and real-world impact.
The next few years will likely see increased momentum as more organizations commit to reform. As institutions share their progress, a global community of practice will emerge—one that learns from diverse experiences and adapts to meet the needs of different regions, disciplines, and communities.
8. Conclusion
The release of DORA’s Practical Guide marks a turning point in the global effort to reform academic research assessment. It transforms abstract principles into actionable steps and offers hope that more inclusive, fair, and meaningful evaluation systems are within reach.
As universities, research institutes, and funding bodies begin to use the guide, the academic landscape may gradually shift toward values that truly reflect the diversity and depth of scholarly work. In the process, we may also rediscover the core mission of academia: to advance knowledge, solve real-world problems, and uplift communities around the world.