John Chodacki is the Director of the UC Curation Center at California Digital Library, and Stephanie Lieggi is Executive Director of OSPO (the Open Source Program Office) at UC Santa Cruz.
Research data are central to modern scholarship, and building clear pathways to reward data contributions is increasingly important across academia, including in hiring, promotion, and review decisions. Because of this, we joined the Implementing Data Evaluation in Academia Working Group (IDEA WG) to showcase peer examples of data evaluation and data impact from around the world, and help build practical resources that institutional departments can adopt now.
The IDEA WG, a collaboration between Make Data Count and HELIOS Open, which includes several UC locations (UCLA, UCI, UCSF, UCOP) as members, develops recommendations and resources to advance the implementation of data evaluation in institutional processes. Together, we are creating practical resources to help departments and committees recognize data, software, and other open outputs as part of scholarly achievement.
Importantly, this work complements the existing experimentation happening across the UC system. IDEA offers shared, peer-tested language and tools that campuses can adapt to local structural and governance models. If your department has tried something in this space, we’d love to include it in our case studies to reflect UC’s leadership in responsible research assessment.
Building on Earlier Work
IDEA builds on a growing tradition of experimentation and innovation in research assessment, including efforts led by SF DORA, CoARA, and many initiatives across the UC system. These movements have advanced the conversation about moving beyond traditional publication metrics and toward more holistic and responsible approaches to evaluation.
Our work builds on that foundation by focusing on implementation. We are taking those principles and turning them into tangible tools and examples that institutions can adopt right away.
IDEA also draws directly from community conversations. In 2023, Make Data Count hosted a Summit where many of the participants in research assessment panel discussions collaborated to write Ten Simple Rules for Evaluating Data as a Research Output. That paper serves as both a conceptual foundation and a practical checklist for campuses ready to take action. Together with the Make Data Count Institutions Hub, these resources explain why data evaluation matters and show how to implement it in practice.
Elevating Researcher-led Approaches
IDEA centers on researcher-led and leadership-supported examples. By documenting where data, software, and other outputs are already recognized, analyzing why those changes succeeded, and sharing actionable tools, we aim to make adoption easier. Experience across UC and beyond shows that durable change seldom follows top-down advice; it happens when committees see trusted peers and departments redefining what counts and demonstrating success.
“We wanted to build something that feels owned by the academic community, not prescribed from outside. Seeing examples from other researchers and academic leaders is what gives departments the confidence to act.” – Iratxe Puebla, Make Data Count Director and IDEA WG Coordinator
To support this approach, the working group has developed three sets of resources.

1. Implementation Guide
A short, adaptable set of templates, now available on the Make Data Count Institutions Hub, helps researchers and evaluators include data in academic review:
- Sample policy language recognizing data, software, and open scholarship practices in tenure and promotion documents
- A data-friendly CV template that researchers can use to report their dataset and software contributions, and annotate these with context about their reuse and potential for impact
- Guidance for review committees and external evaluators, offering consistent questions and prompts to assess rigor, reuse, and community value of open outputs
These materials can be adopted as-is or customized to fit institutional and departmental norms.
2. IDEA Maturity Model
The IDEA Maturity Model, now available on the Make Data Count Institutions Hub, is a structured, learning-oriented framework designed to help campuses understand their current stage of progress and chart realistic next steps toward recognizing data and open outputs in academic review.
The model outlines four progressive phases—Initial, Emerging, Established, and Optimizing—across six key areas: Cultural Norms & Expectations, Policies & Processes, Incentives & Rewards, Infrastructure, Capacity Building, and External Alignment & Engagement.
Rather than serving as a benchmarking or scoring tool, the model supports self-assessment and planning. It helps departments and institutions identify where momentum already exists, where gaps remain, and what practical actions can move them toward more consistent and transparent evaluation of data contributions. The goal is to promote reflection and alignment across diverse teams, from faculty committees to administrative leadership, while encouraging steady, evidence-based improvement.
3. Success Stories and Champions
Case studies from peer institutions are now available on the Make Data Count Institutions Hub, each highlighting how departments and research leaders are already putting these ideas into practice. Examples include:
- University of Virginia: incorporating datasets into tenure and promotion guidelines at the School of Data Science
- University of Maryland: adding data and software to departmental review rubrics to broaden how research productivity is defined
- Duke University: demonstrating researcher-led workflows for dataset deposition and reuse recognition
If your department, school, or campus has tried processes that recognize data in evaluation, we would like to include your experience in the IDEA case studies. Please reach out to us or email idea-wg@makedatacount.org with a short description or a contact for a quick interview.
Putting Ideas Into Practice
The IDEA Working Group is coordinating a rollout to help campuses and departments put these materials to work. Our aim is to make it simple for local champions (faculty, department heads, administrators, or librarians) to adapt the tools, pilot changes, and share what they learn. We’re supporting this effort with ready-to-use communication materials, open office hours, and community calls that bring together those leading data evaluation efforts across institutions.
All of the resources, including the Implementation Guide, Maturity Model, and Use Case Catalog, are now available at Make Data Count’s Institutions Hub. Together, they provide practical entry points for reviewing criteria, updating language in promotion materials, and using peer examples to build confidence and momentum for change.

Steps UC departments and campuses can take today:
- Review your criteria: Identify where data contributions could be added in existing hiring, tenure, and promotion documents
- Borrow the language: Use the sample policy text, CV template, and committee guidance to pilot a small change in a willing unit
- Leverage peer examples: Point decision makers to the IDEA case studies to build momentum
- Help us add UC examples to the toolkit: If your department, school, or campus is experimenting with ways to recognize data and open outputs in evaluation, we’d love to hear from you. Share your experience with us at idea-wg@makedatacount.org. Each new example strengthens the growing network of researcher- and department-led efforts that are reshaping how scholarly contributions are valued
- Stay connected: Sign up for the Make Data Count newsletter to receive updates on these resources and information on other community work regarding the assessment of research data