Discussion and Solution Sessions
Discussion Sessions

Opportunities and risks at the intersection of AI and Open Research Data
Ilona Lipp (1), Cornelia van Scherpenberg (2)
Organization(s): 1: University of Leipzig; 2: VDI/VDE Innovation + Technik GmbH
How does the rapid advancement of AI tools reshape our approach to research data—and what does this mean for Open Science? This interactive session explores the complex relationship between AI technologies and research data management (RDM), focusing on three key issues: (1) AI’s dependence on accessible, high-quality data; (2) its potential to improve data workflows, from metadata generation to error detection; and (3) the risks tied to data misuse, opacity in training, and ethical concerns around consent and dual-use.
Participants will examine both the promise and challenges of aligning AI development with Open Science principles. The session begins with a short impulse talk, followed by structured argument mapping. Participants—online and on-site—will debate propositions such as “AI development demands more open research data sharing” or “AI will lead to higher quality data”.
We aim to produce a shared map of community perspectives and concrete suggestions for responsible data practices. By session’s end, participants will gain a clearer view of how RDM infrastructures might evolve to address the technical, legal, and political tensions emerging at the AI–Open Science interface.

Streamlining Data Publication: Automatic Metadata and Large Datasets in the Age of AI
Anna Jacyszyn (1), Felix Bach (1), Tobias Kerzenmacher (2), Mahsa Vafaie (1)
Organization(s): 1: FIZ Karlsruhe – Leibniz Institute for Information Infrastructure; 2: Karlsruhe Institute of Technology, Institute of Meteorology and Climate Research
Research data repositories are essential infrastructure for enabling Open Science and ensuring data is Findable, Accessible, Interoperable, and Reusable (FAIR). However, researchers working on repositories face significant challenges in handling ever-increasing volumes of large datasets and the often time-consuming manual process of creating comprehensive, quality metadata. These issues can hinder data publication workflows and limit the findability and usability of valuable research output.
We will present the challenges encountered, propose solutions and first implementations for automatic metadata extraction and large data handling, and discuss how these innovations contribute to a more streamlined and scalable data publication workflow. Participants will have the opportunity to engage with developers and users of repositories, explore the practical implications for their own data management practices, and discuss the potential for adopting similar solutions in other repository contexts.
This discussion session will provide insights into the approaches, technologies, and lessons learned from the Leibniz Science Campus “Digital Transformation of Research” (DiTraRe) work on RADAR and the implementation of AI methods for metadata standardisation. The session is especially relevant for everyone interested in the practical implementation of advanced research data management features that promote reproducibility, efficiency, and the FAIR principles.

Is Openness in Decline? Data Sharing Between Commons, Control, and Research Security
Katja Mayer (1), Stefan Skupien (2)
Organization(s): 1: University of Vienna; 2: Berlin University Alliance
Open Science is meant to make research more transparent, collaborative, and equitable. But with the rise of machine learning and generative AI, new challenges around data sharing have emerged. AI development often depends on publicly shared or scraped data, yet the resulting models and infrastructures are typically closed and corporate-controlled. Thus, researchers increasingly express concern that open data are exploited in ways that strip context, reinforce bias, and concentrate power in proprietary AI systems (Birhane et al., 2022; Jernite et al., 2022; Widder et al., 2023; Züger & Asghari, 2023).
In our own research, we have observed growing caution among researchers who once advocated openness. This is not just a matter of missing incentives but reflects a deeper unease with how data are circulated, reused, and often misused in AI-driven, commercialized environments.
This moderated discussion session aims to learn from researchers and Open Science practitioners about how they experience these tensions. After a 10-minute introduction framing the issues, participants will discuss three guiding questions in three 15-minute rounds, sharing perspectives, dilemmas, and ideas. Contributions via live polls and an online board will help document collective insights and potential paths forward.
Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., & Bao, M. (2022). The values encoded in machine learning research. arXiv preprint arXiv:2106.15590. https://doi.org/10.48550/arXiv.2106.15590
Jernite, Y., Nguyen, H., Biderman, S., Rogers, A., Masoud, M., Danchev, V., Tan, S., Luccioni, A. S., Subramani, N., Johnson, I., Dupont, G., Dodge, J., Lo, K., Talat, Z., Radev, D., Gokaslan, A., Nikpoor, S., Henderson, P., Bommasani, R., & Mitchell, M. (2022). Data governance in the age of large-scale data-driven language technology. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2206–2222. https://doi.org/10.1145/3531146.3534637
Widder, D. G., West, S., & Whittaker, M. (2023). Open (For Business): Big Tech, concentrated power, and the political economy of Open AI. SSRN Scholarly Paper 4543807. https://doi.org/10.2139/ssrn.4543807
Züger, T., & Asghari, H. (2023). AI for the public: How public interest theory shifts the discourse on AI. AI & Society, 38, 815–828. https://doi.org/10.1007/s00146-022-01480-5

AI in Peer Review: Promise, Pitfalls, and Practical Pathways
Johanna Havemann (1), Nancy Nyambura (1), Maria Machado (1), Gareth Dyke (1), Veronica Espinoza (1), Tim Errington (2)
Organization(s): 1: Access 2 Perspectives; 2: Center for Open Science
This interactive session examines how artificial intelligence is transforming the peer review process in scientific publishing. We will showcase leading AI tools, discuss their benefits and limitations, and bring together diverse viewpoints—from enthusiastic adopters to critical skeptics and ethicists. Through demonstrations, a panel discussion, and engagement both in the room and online, participants will explore practical strategies for integrating AI with human expertise, ensuring fairness and transparency, and addressing ethical concerns. Online attendees are included via live streaming, real-time Q&A, and a shared editable document. Whether you are a researcher, reviewer, editor, or publisher, join us to gain insights, share your perspective, and help shape the responsible future of AI-assisted peer review. The session will close with actionable takeaways, ensuring the conversation leads to concrete next steps that participants can implement in their respective institutions and communities.

Promoting Shared Understanding and Global Pathways for Open Science and AI in Emerging Research Environments
Firas Al Laban, Jan Bernoth
Organization(s): University of Potsdam
The UNESCO Recommendation on Open Science outlines a global roadmap based on shared values, principles, and standards. However, nearly 120 countries still lack open data policies, limiting their ability to fully participate in and benefit from open science. This gap remains a major obstacle to inclusive and effective global research collaboration.
Simultaneously, open science provides a foundation for trustworthy, reproducible, and inclusive AI. Making research artifacts FAIR (Findable, Accessible, Interoperable, and Reusable) enhances AI performance and helps reduce risks such as bias, opacity, and lack of accountability.
This discussion session is a part of the community-building efforts of the NFDIxCS consortium within Germany’s National Research Data Infrastructure (NFDI). The aim of this session is to foster international collaboration by engaging researchers, policymakers, and data stewards to address shared challenges from the perspective of their respective roles through a Delphi-inspired approach.

Solution Sessions

How FAIR-R Is Your Data? Enhancing Legal and Technical Readiness for Open and AI-Enabled Reuse
Katharina Miller, Vanessa Guzek
Organization(s): Miller International Knowledge
How reusable is your open dataset for humans, machines, and AI tools?
In this hands-on session, participants will evaluate real datasets for their FAIR-R readiness: technical openness and legal clarity.
Teams will use a quick audit checklist to assess licensing, metadata, and AI suitability, and propose simple improvements.
Together, we will build a shared set of recommendations to help researchers prepare their data for ethical and open reuse in the age of AI. Participants can publish their audits and will leave with tools for reuse in their own projects.

Research Transparency Priorities and Infrastructure
Rene Bekkers
Organization(s): VU Amsterdam
In this interactive session, we invite participants to try out and give feedback on Research Transparency Check, a new software tool that provides a quick assessment of the transparency of research reports. The software determines whether information on research transparency criteria is present and, for those criteria where ground truth is available, how accurate that information is. The assessments feed a dashboard providing a colourful overview of the level of transparency, and comparisons with best practices produce a detailed set of actionable suggestions for improving the research report.

Marbles – Upcycling research waste and making every effort count in the era of Open Science
Pablo Hernández Malmierca, Isabel Barriuso Ortega
Organization(s): Research Agora
Research Agora (researchagora.com) invites you to help shape a more inclusive, transparent, and collaborative research ecosystem. Our solution session addresses the challenge of research waste and the limited recognition of non-conventional research outputs. We propose “Marbles” (short, peer-reviewed, open-access reports linked to published articles) to make every experimental effort visible and valued. But we go further: this session is a call to action for the community to define which research outputs beyond traditional papers should be recognized, and how. Together, we will discuss how platforms like Research Agora can support diverse research contributions (including replications, negative results, and alternative methods) and how these can be integrated into open science infrastructures. Participants will collaboratively explore practical strategies for fairer research assessment, greater reproducibility, and equitable recognition of all researchers’ work. Join us to co-create solutions that ensure every scientific contribution counts and to help build a research culture that is open, robust, and truly collaborative. All ideas and perspectives are welcome as we collectively shape the future of open science.

Open Science Capacity Building in times of AI: Finding solutions with the GATE
Anika Müller-Karabil (1), Marie Alavi (2), Julia Claire Prieß-Buchheit (2), Tim Errington (3), Daniel Mietchen (4)
Organization(s): 1: Miller International Knowledge (MIK) / Open Science Learning GATE; 2: Kiel University / Open Science Learning GATE; 3: Center for Open Science (COS); 4: FIZ Karlsruhe – Leibniz Institute for Information Infrastructure
Advancing Open Science (OS) in the age of AI requires shared understanding, community engagement, and capacity building. The Open Science Learning GATE (GATE) initiative supports this by facilitating a continuous cycle of knowledge exchange on OS guiding thoughts and practices to inform and connect communities, support open research, and promote responsible AI use. This session invites participants to find collaborative solutions around current OS practices – particularly where they intersect with AI – using data gathered through the GATE Service (questionnaire).
Format:
I. Pre-conference: Participants share OS insights via the GATE questionnaire (10–15 mins).
II. In-conference: Introduction to GATE key data (5–10 mins) // collaborative work to co-design (1) targeted OS training actions and solutions for participants’ communities or (2) innovative, accessible formats for presenting OS/AI data in the GATE Report (40–45 mins) // wrap-up (10 mins).
III. Post-conference: Outcomes feed into the open-access GATE Report 2025, a community-based output of the GATE initiative continuously informing the research ecosystem on OS.
Outcomes:
Participants gain practical insights into OS and AI practices, co-create sustainable training strategies/OS actions, and help shape future GATE Reports. The session promotes FAIR, evidence-based OS education and supports a transparent, collaborative research culture.

AI, plagiarism and text recycling: information resources for academic authors
Aysa Ekanger
Organization(s): UiT The Arctic University of Norway
For the last few years, generative AI has been worrying journal editors: what uses of generative AI should be allowed in scientific articles, how should these uses be declared at submission and on publication, and what are the associated copyright and ethical considerations?
The challenges brought about by AI come on top of older issues that editors are more familiar with, and that may also involve copyright and ethics: namely, plagiarism and text recycling (so-called self-plagiarism).
The objective of the session is to create informational resources, directed at academic authors, that scientific journals can use to help their prospective authors avoid plagiarism and problematic uses of AI and text recycling. Session participants will work in groups to create resources such as flowcharts, checklists, or FAQs, supported by information cards and case descriptions provided to them. The groups are expected to document their results in a collaborative document during this part of the session.
In order for the session to be productive, participants are expected to be familiar with the basics of copyright and open licenses.