Discussion and Solution Sessions
Discussion Sessions

Does the rapid development of AI tools affect our commitment to Open Research Data?
Ilona Lipp (1), Cornelia van Scherpenberg (2)
Organization(s): 1: University of Leipzig; 2: VDI/VDE Innovation + Technik GmbH
How does the rapid advancement of AI tools reshape our approach to research data—and what does this mean for Open Science? This interactive session explores the complex relationship between AI technologies and Open Data, focusing on three key issues: (1) AI’s dependence on accessible, high-quality data; (2) its potential to improve data workflows, from metadata generation to error detection; and (3) the risks tied to data misuse, opacity in training, and ethical concerns around consent and dual-use.
Participants will examine both the promise and challenges of aligning AI development with Open Data principles. The session begins with a short impulse talk, followed by structured argument mapping. Participants—online and on-site—will debate the proposition: “AI development demands more open research data sharing.”
We aim to produce a shared map of community perspectives and concrete suggestions for responsible data practices. By session’s end, participants will gain a clearer view of how RDM infrastructures might evolve to address the technical, legal, and political tensions emerging at the AI–Open Data interface.

Streamlining Data Publication: Automatic Metadata and Large Datasets in the Age of AI
Anna Jacyszyn (1), Felix Bach (1), Tobias Kerzenmacher (2), Etienne Posthumus (1), Shufan Jiang (1), Kerstin Soltau (1), Stefan Hofmann (1)
Organization(s): 1: FIZ Karlsruhe – Leibniz Institute for Information Infrastructure; 2: Karlsruhe Institute of Technology, Institute of Meteorology and Climate Research
Research data repositories are essential infrastructure for enabling Open Science and ensuring data is Findable, Accessible, Interoperable, and Reusable (FAIR). However, researchers working on repositories face significant challenges in handling ever-increasing volumes of large datasets and the often time-consuming manual process of creating comprehensive, high-quality metadata. These issues can hinder data publication workflows and limit the findability and usability of valuable research output.
We will present the challenges encountered, propose solutions and first implementations for automatic metadata extraction and large data handling, and discuss how these innovations contribute to a more streamlined and scalable data publication workflow. Participants will have the opportunity to engage with developers and users of repositories, explore the practical implications for their own data management practices, and discuss the potential for adopting similar solutions in other repository contexts.
This discussion session will provide insights into the approaches, technologies, and lessons learned from the Leibniz Science Campus “Digital Transformation of Research” (DiTraRe) work on RADAR and from implementing AI methods for metadata standardisation. The session is especially relevant to everyone interested in the practical implementation of advanced research data management features that promote reproducibility, efficiency, and FAIR principles.

Is Openness in Decline? Data Sharing Between Commons, Control, and Research Security
Katja Mayer
Organization(s): University of Vienna
The principles of Open Science — transparency, collaboration, and collective knowledge — are under pressure. While openness has been widely promoted as a way to democratize science, recent developments are prompting a growing number of researchers to withdraw from data sharing. This shift cannot be explained by infrastructure gaps alone. Instead, it reflects deeper concerns about the political economy of data, the rise of commercial AI, and the increasing entanglement of science with questions of national security and geopolitical power.
In many contexts, especially in the US, Europe, and China, discourses around research security have become prominent. Data access is no longer merely a technical or ethical issue, but one tied to strategic control, risk prevention, and international competition. At the same time, authoritarian regimes and commercial actors alike exert pressure on scientific openness — through censorship, surveillance, or extractive practices. In this landscape, the once-celebrated ideal of openness begins to feel naïve, even dangerous.
This discussion session will examine how these developments affect the everyday practices, ethical orientations, and institutional frameworks of researchers. Drawing on insights from sociology, critical data studies, and science and technology studies, we ask: How are ideas of openness being reshaped in the name of security and control? What forms of resistance or redefinition are emerging? And what governance models might ensure that openness remains a tool for public knowledge rather than a vulnerability to be exploited?
We invite participants from across disciplines and sectors to share perspectives, concerns, and experiences in order to reflect collectively on the future of openness in science and AI.

AI in Peer Review: Opportunities, Challenges, and the Future of Scientific Evaluation
Johanna Havemann (1), Nancy Nyambura (1), Maria Machado (1), Gareth Dyke (1), Veronica Espinoza (1), Tim Errington (2)
Organization(s): 1: Access 2 Perspectives; 2: Center for Open Science
This interactive session examines how artificial intelligence is transforming the peer review process in scientific publishing. We’ll showcase leading AI tools, discuss their benefits and limitations, and bring together diverse viewpoints—from enthusiastic adopters to critical skeptics and ethicists. Through demonstrations, panel discussion, and small-group breakout sessions (both in-person and online), participants will explore practical strategies for integrating AI with human expertise, ensuring fairness and transparency, and addressing ethical concerns. Online attendees are fully included via live streaming, virtual breakouts, and real-time Q&A. Whether you’re a researcher, reviewer, editor, or publisher, join us to gain insights, share your perspective, and help shape the responsible future of AI-assisted peer review.

Promoting Shared Understanding and Global Pathways for Open Science and AI in Emerging Research Environments
Firas Al Laban, Jan Bernoth
Organization(s): Universität Potsdam
The UNESCO Recommendation on Open Science outlines a global roadmap based on shared values, principles, and standards. However, nearly 120 countries still lack open data policies, limiting their ability to fully participate in and benefit from open science. This gap remains a major obstacle to inclusive and effective global research collaboration.
Simultaneously, open science provides a foundation for trustworthy, reproducible, and inclusive AI. Making research artifacts FAIR (Findable, Accessible, Interoperable, and Reusable) enhances AI performance and helps reduce risks such as bias, opacity, and lack of accountability.
This discussion session, as part of the community-building efforts of the NFDIxCS consortium within Germany’s National Research Data Infrastructure (NFDI), aims to foster international collaboration by bringing together researchers, policymakers, and data stewards to:
– Investigate the current state of open science and AI readiness in emerging regions, such as Arab countries.
– Discuss systemic barriers—whether infrastructural, policy-related, or cultural—to wider participation.
– Develop action plans to support global cooperation in building open science as a foundation for ethical and effective AI in research.
Applying the Delphi method, the session will gather expert input through structured prompts and a live questionnaire to shape future collaboration.

Solution Sessions

How FAIR-R Is Your Data? Enhancing Legal and Technical Readiness for Open and AI-Enabled Reuse
Katharina Miller, Vanessa Guzek
Organization(s): Miller International Knowledge
How reusable is your open dataset for humans, machines, and AI tools?
In this hands-on session, participants will evaluate real datasets for their FAIR-R readiness: technical openness and legal clarity.
Teams will use a quick audit checklist to assess licensing, metadata, and AI suitability, and propose simple improvements.
Together, we will build a shared set of recommendations to help researchers prepare their data for ethical and open reuse in the age of AI. Participants can publish their audits and will leave with tools for reuse in their own projects.

Research Transparency Priorities and Infrastructure
Rene Bekkers
Organization(s): VU Amsterdam
In this interactive session, we invite participants to try out and give feedback on Research Transparency Check, a new software tool that provides a quick assessment of the transparency of research reports. The software determines the presence of information on criteria of research transparency and, for a set of criteria for which ground truth is available, the accuracy of the information in the research report. The assessments feed a dashboard providing a colourful overview of the level of transparency. Comparisons with best practices produce a detailed set of actionable suggestions for improving the research report.

Marbles – Upcycling research waste and making every effort count in the era of Open Science
Pablo Hernández Malmierca, Isabel Barriuso Ortega
Organization(s): Research Agora
Research Agora (researchagora.com) invites you to help shape a more inclusive, transparent, and collaborative research ecosystem. Our solution session addresses the challenge of research waste and the limited recognition of non-conventional research outputs. We propose “Marbles” (short, peer-reviewed, open-access reports linked to published articles) to make every experimental effort visible and valued. But we go further: this session is a call to action for the community to define what research outputs beyond traditional papers should be recognized, and how. Together, we will discuss how platforms like Research Agora can support diverse research contributions (including replications, negative results, and alternative methods) and how these can be integrated into open science infrastructures. Participants will collaboratively explore practical strategies for fairer research assessment, greater reproducibility, and equitable recognition of all researchers’ work. Join us to co-create solutions that ensure every scientific contribution counts and to help build a research culture that is open, robust, and truly collaborative. All ideas and perspectives are welcome as we collectively shape the future of open science.

Open Science Capacity Building in times of AI: Finding solutions with the GATE
Anika Müller-Karabil (1), Marie Alavi (2), Julia Claire Prieß-Buchheit (2), Tim Errington (3), Daniel Mietchen (4)
Organization(s): 1: Miller International Knowledge (MIK) / Open Science Learning GATE; 2: Kiel University / Open Science Learning GATE; 3: Center for Open Science (COS); 4: FIZ Karlsruhe – Leibniz-Institut für Informationsinfrastruktur GmbH
Advancing Open Science (OS) in the age of AI requires shared understanding, community engagement, and capacity building. The Open Science Learning GATE (GATE) initiative supports this by facilitating a continuous cycle of knowledge exchange on OS guiding thoughts and practices to inform and connect communities, support open research, and promote responsible AI use. This session invites participants to find collaborative solutions around current OS practices – particularly where they intersect with AI – using data gathered through the GATE Service (questionnaire).
Format:
I. Pre-conference: Participants share OS insights via the GATE questionnaire (10–15 mins).
II. In-conference: Introduction to GATE key data (15 mins) // collaborative work to co-design (1) targeted OS training actions and solutions for their communities and (2) innovative, accessible formats to present OS/AI data in the GATE Report (35 mins) // wrap-up (10 mins).
III. Post-conference: Outcomes feed into the open-access GATE Report 2025, a community-based output of the GATE initiative continuously informing the research ecosystem on OS.
Outcomes:
Participants gain practical insights into OS and AI practices, co-create sustainable training strategies/OS actions, and help shape future GATE Reports. The session promotes FAIR, evidence-based OS education and supports a transparent, collaborative research culture.

AI, plagiarism and text recycling: information resources for academic authors
Aysa Ekanger
Organization(s): UiT The Arctic University of Norway
For the last few years, generative AI has been worrying journal editors: what uses of generative AI should be allowed in scientific articles, how should these uses be declared at submission and on publication, and what are the associated copyright and ethical considerations?
Challenges brought about by AI come on top of older issues that editors are more familiar with, and that may also involve copyright and ethics: namely plagiarism and text recycling (so-called self-plagiarism).
The objective of the session is to create information resources directed at academic authors that scientific journals can use to help their prospective authors avoid plagiarism, text recycling, and problematic uses of AI. Session participants will work in groups to create resources such as flowcharts, checklists, or FAQs. The participants will be provided with information cards and case descriptions to support their work. The groups are expected to document their results in a collaborative document during this part of the session.
In order for the session to be productive, participants are expected to be familiar with the basics of copyright and open licenses.